Understanding UX Metrics, Part 2


Dear Reader,

This edition of Beyond Aesthetics is part two of a series on UX metrics. Jump to Part 1 or Part 3.

Q: What are UX metrics in practice?

Let's review:

  • UX metrics are the numbers we use to measure and improve the user experience.
  • UX metrics deal exclusively with quantitative data.
  • We measure the UX using a mix of behavioral and attitudinal approaches.

Here is the domain of UX metrics: picture a two-by-two with behavioral ↔ attitudinal approaches on one axis and qualitative ↔ quantitative data on the other. UX metrics live on the quantitative half.

Let's start today by looking at some common UX metric examples 🏁

Behavioral UX Metrics, also called "performance metrics," are based on actual usage of a design. Here are some common things to measure in the top-right quadrant:

  • Task success
  • Time on task
  • Errors during tasks
  • Advanced metrics like eye tracking & AI-assisted emotion detection

As you can see, most of these are task-based, so you either measure a task during a research study or set up an ongoing measurement in your product.
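If your product logs task attempts, these three numbers fall out of simple arithmetic. Here's a minimal sketch in Python (the session records and field names are invented for illustration):

```python
# A minimal sketch of computing behavioral task metrics from logged sessions.
# The session records and field names are hypothetical, for illustration only.
sessions = [
    {"completed": True,  "seconds": 42, "errors": 0},
    {"completed": True,  "seconds": 65, "errors": 2},
    {"completed": False, "seconds": 90, "errors": 3},
]

n = len(sessions)
task_success_rate = sum(s["completed"] for s in sessions) / n  # share who finished
avg_time_on_task = sum(s["seconds"] for s in sessions) / n     # mean seconds per attempt
error_rate = sum(s["errors"] for s in sessions) / n            # mean errors per attempt

print(f"Task success: {task_success_rate:.0%}")
print(f"Time on task: {avg_time_on_task:.0f}s")
print(f"Errors per attempt: {error_rate:.1f}")
```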

Attitudinal UX Metrics, also called "self-reported metrics," are based on what users share about their experience. They are used in combination with behavioral task metrics to understand the user's perception. Here are common self-reported metrics in the lower-right quadrant:

  • Post-task ratings like ease-of-use
  • Overall UX ratings like SUS or NPS
  • Self-reported preference
  • Open-ended questions
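Some of these ratings come with standard scoring rules. Here's a minimal Python sketch of the usual SUS arithmetic: ten statements rated 1-5, odd-numbered items positively worded, even-numbered items negatively worded, rescaled to 0-100 (the example ratings are invented):

```python
# A sketch of standard SUS scoring: 10 statements rated 1-5.
def sus_score(ratings):
    """ratings: list of 10 responses, each 1-5, in questionnaire order."""
    assert len(ratings) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(ratings)
    ]
    return sum(contributions) * 2.5         # scales the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```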

How to do it:
A common scenario would be to simulate a task for a new product feature to get behavioral data. You might have your user complete a task using two different design flows. You could measure task success, time on task, and error rate for both versions. Before the test, you could ask about preferences and attitudes to capture a pre-test baseline. After the test, you could have the user rate the experience on ease of use with a 5-point scale. After analyzing this mix of behavioral and attitudinal UX metrics, you should be able to identify the design that best fits your users.

That's a very common use case: using UX metrics to determine the usability of a feature. What if you want to check your whole product?

You might start with something like the System Usability Scale (SUS) or a Net Promoter Score (NPS). But eventually, you'll want to create a set of UX metrics that is custom to your product.

That's the ultimate goal of UX metrics: to have a tailored set of metrics that your team trusts and uses to improve the UX. Google did this with their H.E.A.R.T. framework, and you can, too.

But first, you need to understand what a metric can do. Here are 3 more dualities ☯︎☯︎☯︎ you should understand before you set a UX metric for your product.

Vanity ↔ Actionable Metrics ☯︎

Vanity metrics are numbers that make you feel good, but actionable metrics help you take a course of action.

Vanity metrics count up forever without teaching you anything. “Total Users” is an example of a vanity metric. While it might be important to the business, it doesn’t help us learn.

A better metric might be “number of users acquired during the last two weeks.” This is more actionable because it lets us isolate the effects of our recent work, making the metric much more helpful.

Actionable metrics should be ratios. Speed (distance over time) is an example of a ratio. Ratios let you build essential factors, like time, directly into the number.

Not sure what to measure? Pick a vital user action, put it in the numerator, and put a time-based number in the denominator…the resulting metric will be far more helpful than that important user action by itself.
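Here's what that recipe looks like in a minimal Python sketch (the "projects created" action and all the numbers are made up for illustration):

```python
# A sketch of turning a raw count into an actionable ratio metric.
# "Projects created" as the vital user action is a hypothetical example.
projects_created = 380   # vital user action over the window (numerator)
window_days = 14         # time window (denominator)

rate = projects_created / window_days
print(f"{rate:.1f} projects created per day")

# Comparing against the previous window isolates the effect of recent work
previous_rate = 290 / window_days
change = (rate - previous_rate) / previous_rate
print(f"{change:+.0%} vs. the previous two weeks")
```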

7 Vanity Metrics to Watch Out For

These metrics are everywhere, but they’re not helpful. Many analytics tools measure these things out of the box, but that doesn’t mean they’re actionable.

  1. Number of page views: Won’t tell you who visited or what happened. Count people instead.
  2. Number of visits: Could be 1 person visiting 100 times or 10 people visiting 10 times each. Go fish.
  3. Number of unique visitors: Better than the above, but doesn’t tell you much beyond the number of eyeballs.
  4. Time on site: Isn’t nearly as crucial as what users actually do on the pages.
  5. Number of followers/friends/likes: Doesn’t tell you much. Measure engagement instead.
  6. Emails collected: Same as above; open rates or click-through rates (CTR) would be better.
  7. Number of downloads: It’s good to have app downloads, but user activation, engagement, and retention are more helpful.

Leading ↔ Lagging Metrics ☯︎

Leading Metrics (also known as leading indicators) are early indicators of user behaviors that will follow, known as Lagging Metrics.

For Example:
Customer complaints about the UI are an example of a Leading Metric. Later, those UI problems might lead to cancellations or churn, an excellent example of a Lagging Metric. If you don’t act on the UI problems, you may not be able to stop the churn. And if you focus your efforts on a Lagging Metric like churn, you might be acting on the UI issues too late.

Understanding the relationship between Leading and Lagging Metrics will help you focus on the right metrics at the right time.

To better understand the relationship between leading and lagging metrics, you need to understand the final duality ☯︎ of UX metrics.

Correlated ↔ Causal Metrics ☯︎

Correlated metrics help you predict what will happen through the relationship between two variables. That relationship can be causal, but it isn’t necessarily.

For Example:
Ice cream and sunglasses are correlated, but the relationship isn’t causal. When ice cream sales go up, sunglasses sales tend to go up too...but not always. Ice cream and sunglasses have a correlated relationship.

Ice cream sales and sunglasses sales DO have a causal relationship with the temperature. As the temperature goes up, sales of both items go up in a direct cause-and-effect connection.
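You can see the difference in a minimal Python sketch (the weekly figures below are invented). The math happily reports both correlations; it can't tell you which one has a causal story behind it:

```python
# A sketch: correlation measures co-movement, not cause.
# statistics.correlation requires Python 3.10+.
from statistics import correlation

temperature = [18, 21, 24, 27, 30, 33]        # weekly average, °C
ice_cream   = [120, 150, 180, 220, 260, 300]  # units sold per week (invented)
sunglasses  = [40, 55, 70, 85, 100, 115]      # units sold per week (invented)

print(correlation(ice_cream, sunglasses))   # high, but neither causes the other
print(correlation(temperature, ice_cream))  # also high; here a causal story exists
```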

Causal metrics are powerful because once you discover them, you can directly manipulate the cause-and-effect relationship.

Correlated metrics have a terrible reputation thanks to bloggers and news reports that use them to create questionable, misleading graphs. A comical example from the website Spurious Correlations: the number of films Nicolas Cage appears in each year tracks the number of people who drown in swimming pools.

Obviously, Nicolas Cage isn't drowning people with his films, but a graph like this can be very convincing. Be critical of any graph that you see and look for evidence of the causal relationship.

Correlated Metrics can be a good sign that you’re on the right track to Causal Metrics. Most causal metrics start out as correlated metrics.

So how do you know if something is causal? Experiments are a vital way to isolate variables and determine whether a correlation is actually causal (learn how to do that in our course on product experiments).

Before you can say that something "caused" the numbers to increase, you'll need to prove it in a statistically significant experiment. I'd start small with a lo-fi concept test and work my way towards a live A/B test with a confidence level of at least 90%.
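As a sketch of what that proof might look like, here's a two-proportion z-test comparing task success between two design variants at a 90% confidence level (the counts are invented for illustration):

```python
# A sketch of checking an A/B result for significance with a
# two-proportion z-test. The task-success counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled success rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value
    return z, p_value

z, p = two_proportion_z(success_a=78, n_a=100, success_b=64, n_b=100)
print(f"z = {z:.2f}, p = {p:.3f}")
print("significant at 90% confidence" if p < 0.10 else "not significant yet")
```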

After a few experiments, you may even discover that the leading metrics from an experiment are causal to your lagging business metrics. Causal leading and lagging metrics are a very desirable outcome in UX.

For example, if you can establish a causal relationship between the leading metric of task success and the lagging metric of revenue, you can show the return on investment for UX work (read a case study about a team that connected usability metrics with business metrics here).

If you're the one with expertise in UX metrics, you can be the one to show business leaders how UX meets their goals.

Any UX designer who can do that will always have a job.

Well, that's it for today! 🏁

You just learned Part 2 of UX metrics. 👏👏👏👏👏🏆👏👏👏👏👏


I hope you've enjoyed this series so far.

Check out Part 3

Jeff Humble
Designer & Co-Founder
The Fountain Institute

P.S. I'm giving a talk on UX metrics on Saturday, September 10th. If you liked this email, you'll love the talk. Grab a free ticket here.


The Fountain Institute is an independent online school that teaches advanced UX & product skills.
