Data Studio is a powerful data visualization tool that lets you build dashboards on top of big data sources, such as the Chrome UX Report (CrUX). In this guide, learn how to create your own custom CrUX Dashboard to track an origin's user experience trends.
The CrUX Dashboard is built with a Data Studio feature called Community Connectors. This connector is a pre-established link between the raw CrUX data on BigQuery and the visualizations in Data Studio. It eliminates the need for dashboard users to write any queries or generate any charts. Everything is built for you; all you need to do is provide an origin, and a custom dashboard will be generated for you.
Create a dashboard
To get started, go to g.co/chromeuxdash. This will take you to the CrUX community connector page, where you can provide the origin for which the dashboard will be generated. Note that first-time users may need to complete permission or marketing preference prompts.
The text input field accepts only origins, not full URLs. For example, https://example.com is an origin, while https://example.com/page is a full URL.

If you omit the protocol, HTTPS is assumed. Subdomains matter; for example, https://www.google.com and https://google.com are considered different origins.

Some common problems with origins are providing the wrong protocol, for example http:// instead of https://, and omitting the subdomain when necessary. Some websites include redirects, so if http://example.com redirects to https://www.example.com, then you should use the latter, which is the canonical version of the origin. As a general rule of thumb, use the origin that users see in the URL bar.
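The rules above are easy to apply by hand, but as an illustration, here is a small sketch of how a URL reduces to an origin; `to_origin` is a hypothetical helper, not part of the dashboard:

```python
from urllib.parse import urlsplit

def to_origin(url: str) -> str:
    """Reduce a URL to its origin (protocol + host), as the dashboard expects."""
    # The dashboard assumes HTTPS when the protocol is omitted.
    if "://" not in url:
        url = "https://" + url
    parts = urlsplit(url)
    # Everything after the host (path, query, fragment) is dropped.
    return f"{parts.scheme}://{parts.netloc}"

print(to_origin("www.example.com/some/page"))  # https://www.example.com
print(to_origin("http://example.com"))         # http://example.com
```

Note that this sketch does not follow redirects, so it cannot find the canonical origin for you.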
If your origin is not included in the CrUX dataset, you may receive an error message like the one below. There are over 4 million origins in the dataset, but the one you want may not have enough data to be included.
If the origin exists, you will be taken to the dashboard outline page. This shows you all of the fields that are included: each effective connection type, each form factor, the month the dataset was released, the performance distribution for each metric, and of course the name of the origin. There is nothing you need to do or change on this page; just click Create report to continue.
Using the dashboard
Each dashboard comes with three types of pages:
- Core Web Vitals Overview
- Metric performance
- User demographics
Each page includes a chart showing distributions over time for each available monthly release. As new datasets are released, you can simply refresh the dashboard to get the latest data.
Monthly datasets are released on the second Tuesday of every month. For example, the dataset consisting of user experience data for the month of May is released on the second Tuesday of June.
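To know when to refresh the dashboard, you can compute the release date for any month; this helper is purely illustrative and not part of any CrUX tooling:

```python
import datetime

def release_date(year: int, month: int) -> datetime.date:
    """Second Tuesday of the given month, when the CrUX dataset is released."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday is 0, Tuesday is 1.
    days_to_first_tuesday = (1 - first.weekday()) % 7
    return first + datetime.timedelta(days=days_to_first_tuesday + 7)

# The May 2020 dataset would be released on the second Tuesday of June 2020.
print(release_date(2020, 6))  # 2020-06-09
```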
Core Web Vitals Overview
The first page is an overview of the origin's monthly Core Web Vitals performance. These are the most important UX metrics that Google recommends you focus on.

Use the Core Web Vitals page to understand how desktop and phone users experience the origin. By default, the most recent month at the time the dashboard was created is selected. To switch between older or newer monthly releases, use the Month filter at the top of the page.
Note that tablet is omitted from these charts by default, but if needed, you can remove the No tablet filter in the bar chart configuration, shown below.
After the Core Web Vitals page, you will find dedicated pages for each metric in the CrUX dataset.
At the top of each page is the Device filter, which you can use to restrict the form factors included in the experience data. For example, you can drill down specifically into phone experiences. This setting persists across all pages.
The topmost charts on these pages are the monthly distributions of experiences categorized as "Good", "Needs Improvement", and "Poor". The color-coded legend below the chart indicates the range of experiences included in each category. For example, in the screenshot above, you can see the percentage of "good" Largest Contentful Paint (LCP) experiences fluctuating and getting slightly worse in recent months.
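The three categories correspond to fixed per-metric thresholds; for LCP, experiences at or under 2.5 seconds count as "good" and those over 4 seconds count as "poor". A minimal sketch of the bucketing (the function name is illustrative):

```python
def categorize_lcp(lcp_ms: float) -> str:
    """Bucket a single LCP experience using the Core Web Vitals thresholds."""
    if lcp_ms <= 2500:
        return "Good"
    elif lcp_ms <= 4000:
        return "Needs Improvement"
    return "Poor"

print(categorize_lcp(1800))  # Good
print(categorize_lcp(3000))  # Needs Improvement
print(categorize_lcp(4500))  # Poor
```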
The percentages of "good" and "poor" experiences for the most recent month are displayed above the chart, along with an indicator of the percent difference from the previous month. For this origin, "good" LCP experiences dropped 3.2% to 56.04% month-over-month.
Due to a quirk with Data Studio, you may sometimes see No Data here. This is normal, and it happens because the previous month's release is not available until the second Tuesday.
Also, for metrics like LCP and the other Core Web Vitals that provide explicit percentile recommendations, you will find a "P75" metric between the "good" and "poor" percentages. This value corresponds to the origin's 75th percentile of user experiences. In other words, 75% of experiences are better than this value. One thing to note is that this always applies to the overall distribution across all devices on the origin. Toggling specific devices with the Device filter will not recalculate the percentile.
Note that the percentile metrics are based on the histogram data from BigQuery, so the granularity will be coarse: 1000 ms for LCP, 100 ms for FID, and 0.05 for CLS. In other words, a P75 LCP of 3800 ms indicates that the true 75th percentile is somewhere between 3800 ms and 4800 ms.
In addition, the BigQuery dataset uses a technique called "bin spreading" in which the densities of user experiences are intrinsically grouped into very coarse bins of decreasing granularity. This allows tiny densities in the tail of the distribution to be included without exceeding four digits of precision. For example, LCP values less than 3 seconds are grouped into bins 200 ms wide. Between 3 and 10 seconds, the bins are 500 ms wide. Beyond 10 seconds, the bins are 5000 ms wide, and so on. Rather than having bins of varying widths, bin spreading ensures that all bins are a constant 100 ms wide (the greatest common divisor), and the distribution is linearly interpolated across each bin.
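The spreading and interpolation steps described above can be sketched as follows; the bin layout and densities here are illustrative, not real CrUX data:

```python
def spread_bins(bins, width=100):
    """Split each (start_ms, end_ms, density) bin into constant-width
    sub-bins, spreading the density uniformly across them."""
    out = []
    for start, end, density in bins:
        n = (end - start) // width
        for i in range(n):
            out.append((start + i * width, start + (i + 1) * width, density / n))
    return out

def percentile(bins, p):
    """Estimate the p-th percentile, interpolating linearly inside a bin."""
    target = p / 100 * sum(d for _, _, d in bins)
    cum = 0.0
    for start, end, density in bins:
        if cum + density >= target:
            frac = (target - cum) / density
            return start + frac * (end - start)
        cum += density
    return bins[-1][1]

# Illustrative coarse histogram: bins widen further into the tail.
coarse = [(0, 1000, 0.30), (1000, 2000, 0.25), (2000, 3000, 0.15),
          (3000, 3500, 0.10), (3500, 4000, 0.10), (4000, 9000, 0.10)]
print(percentile(spread_bins(coarse), 75))  # ≈ 3250.0
```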
Corresponding P75 values in tools like PageSpeed Insights are not based on the public BigQuery dataset, which is why they can provide millisecond-precision values.
User demographics

There are two dimensions included in the user demographics pages: devices and effective connection types (ECT). These pages illustrate the distribution of page views across the whole origin for users in each demographic.
The device distribution page shows the breakdown of phone, desktop, and tablet users over time. Many origins tend to have little or no tablet data, so you will often see a "0%" sliver hugging the edge of the chart.
Similarly, the ECT distribution page shows the breakdown of 4G, 3G, 2G, slow 2G, and offline experiences.

Connection types are called "effective" because they are based on bandwidth measurements on users' devices and do not imply that any particular technology is used. For example, a desktop user on fast Wi-Fi may be labeled 4G, while a slower mobile connection may be labeled 2G.
The distributions for these dimensions are calculated using segments from the First Contentful Paint (FCP) histogram data.
Frequently asked questions
When would you use the CrUX Dashboard instead of other tools?

The CrUX Dashboard relies on the same underlying data available in BigQuery, but you don't need to write a single line of SQL to extract the data, and you never have to worry about exceeding any free quotas. Setting up a dashboard is quick and easy, all of the visualizations are generated for you, and you have control to share them with anyone you want.
Are there any limitations to using the CrUX Dashboard?

Being based on BigQuery means that the CrUX Dashboard inherits all of its limitations as well. It is restricted to origin-level data at monthly granularity.

The CrUX Dashboard also trades away some of the versatility of the raw data in BigQuery for simplicity and convenience. For example, metric distributions are only given as "good", "needs improvement", and "poor", as opposed to full histograms. The CrUX Dashboard also provides data at a global level, while the BigQuery dataset allows you to zoom in on particular countries.
Where can I get more information about Data Studio?
Review the Data Studio features page for more information.