Dear Reader,
This edition of Beyond Aesthetics is part two of a series on UX metrics. Jump to Part 1 or Part 3
Let's review:
Here is the domain of UX metrics:
Let's start today by looking at some common UX metric examples 🏁
Behavioral UX Metrics, also called "performance metrics," are based on actual usage of a design. Here are some common things to measure in the top-right quadrant:
As you can see, most of these are task-based, so you either measure a task during a research study or set up an ongoing measurement in your product.
Attitudinal UX Metrics, also called "self-reported metrics," are based on what the user shares about their experiences. These are used in combination with behavioral task metrics to understand the perception of the user. Here are common self-reported metrics in the lower-right quadrant:
How to do it:
A common scenario would be to simulate a task for a new product feature to get behavioral data. You might have your user complete the same task using two different design flows. You could measure task success, time on task, and error rate for both versions. Before the test, you could ask about preferences and attitudes related to the experience to capture a pre-test baseline. After the test, you could have the user rate the experience on ease of use or usability with a 5-point scale. After analyzing this mix of behavioral and attitudinal UX metrics, you should be able to identify the design that best fits your user.
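If you log those test sessions somewhere, the comparison itself is simple arithmetic. Here's a minimal sketch in Python, using hypothetical session records and made-up field names (success, time_s, errors, ease_rating), not a prescribed tool or schema:

```python
# Minimal sketch: comparing two design flows on behavioral + attitudinal metrics.
# The session records and field names below are hypothetical examples.

flows = {
    "Flow A": [
        {"success": True,  "time_s": 84,  "errors": 1, "ease_rating": 4},
        {"success": True,  "time_s": 97,  "errors": 0, "ease_rating": 5},
        {"success": False, "time_s": 150, "errors": 3, "ease_rating": 2},
    ],
    "Flow B": [
        {"success": True,  "time_s": 62,  "errors": 0, "ease_rating": 5},
        {"success": True,  "time_s": 71,  "errors": 1, "ease_rating": 4},
        {"success": True,  "time_s": 80,  "errors": 0, "ease_rating": 4},
    ],
}

for name, sessions in flows.items():
    n = len(sessions)
    success_rate = sum(s["success"] for s in sessions) / n   # behavioral
    avg_time = sum(s["time_s"] for s in sessions) / n        # behavioral
    avg_errors = sum(s["errors"] for s in sessions) / n      # behavioral
    avg_ease = sum(s["ease_rating"] for s in sessions) / n   # attitudinal (5-point scale)
    print(f"{name}: success {success_rate:.0%}, time {avg_time:.0f}s, "
          f"errors {avg_errors:.1f}, ease {avg_ease:.1f}/5")
```

In a real study you'd want more participants than this, but the structure stays the same: behavioral numbers from what users did, attitudinal numbers from what they said.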
That's a very common use case for using UX metrics to determine usability on a feature. What if you want to check your whole product?
You might start with something like a System Usability Scale (SUS) or a Net Promoter Score (NPS). But eventually, you want to create a group of UX metrics that is custom to your product.
That's the ultimate goal of UX metrics: to have a tailored set of metrics that your team trusts and uses to improve the UX. Google did this with their H.E.A.R.T. framework, and you can, too.
But first, you need to understand what a metric can do. Here are 3 more dualities ☯︎☯︎☯︎ you should understand before you set a UX metric for your product.
Vanity metrics are numbers that make you feel good, but actionable metrics help you take a course of action.
Vanity metrics just count up forever, so they don’t tell you what to do next. “Total Users” is an example of a vanity metric. While it might be important to the business, it doesn’t help us learn.
A better metric might be “number of users acquired during the last two weeks.” This is already more actionable because it isolates the effects of our recent work, making the metric much more helpful.
Actionable metrics should be ratios. Speed (distance over time) is an example of a ratio. Ratios allow you to integrate essential factors easily.
Not sure what to measure? Pick a vital user action, put it in the numerator, and put a time-based number in the denominator. The resulting metric will be far more helpful than that important user action by itself.
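As a rough illustration of that recipe (not a prescription), here's what the ratio might look like in Python, with a hypothetical "vital user action" of completed checkouts over a two-week window:

```python
from datetime import date, timedelta

# Hypothetical event log: dates on which the vital user action (a completed checkout) happened.
checkout_dates = [date(2022, 8, 29), date(2022, 8, 30), date(2022, 9, 2),
                  date(2022, 9, 5),  date(2022, 9, 8),  date(2022, 9, 9)]

window_days = 14
today = date(2022, 9, 10)
window_start = today - timedelta(days=window_days)

# Vital user action in the numerator, a time window in the denominator.
recent_checkouts = [d for d in checkout_dates if d >= window_start]
checkouts_per_week = len(recent_checkouts) / (window_days / 7)

print(f"Checkouts per week (last {window_days} days): {checkouts_per_week:.1f}")
```

"Total checkouts" only ever grows; "checkouts per week over the last two weeks" moves when your design work moves it, which is what makes it actionable.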
Vanity metrics are everywhere, but they’re not helpful. Many analytics tools track them out of the box, but that doesn’t mean they’re actionable.
Leading Metrics (also known as a leading indicator) are early indicators of user behaviors that will follow, known as Lagging Metrics.
For Example:
Customer complaints about the UI might be a Leading Metric. Later, those UI problems might lead to cancellations or churn, an excellent example of a Lagging Metric. If you don’t act on the UI problems, you may not be able to stop the churn. If you focus your efforts only on a Lagging Metric like churn, you might be acting on the UI issues too late.
Understanding the relationship between Leading and Lagging Metrics will help you focus on the right metrics at the right time.
To better understand the relationship between leading and lagging metrics, you need to understand the final duality ☯︎ of UX metrics.
Correlated metrics help you predict what will happen through the relationship of two variables. This relationship can be causal, but not necessarily.
For Example:
Ice cream and sunglasses are correlated, but the relationship isn’t causal. When ice cream sales go up, so do sunglasses sales...but not always. Ice cream and sunglasses have a correlated relationship.
Ice cream sales and sunglasses sales DO have a causal relationship with the temperature. As the temperature goes up, sales of both items go up in a direct cause-and-effect connection.
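To see the difference in numbers, here's a tiny, purely illustrative simulation (the figures are made up): both sales series are driven by temperature, so they correlate strongly with each other even though neither one causes the other.

```python
from statistics import correlation  # Python 3.10+

# Made-up daily figures: both sales series rise with temperature.
temperature = [15, 18, 21, 24, 27, 30, 33]   # degrees C
ice_cream   = [20, 26, 35, 44, 55, 63, 74]   # units sold
sunglasses  = [ 5,  8, 12, 15, 21, 24, 30]   # units sold

print(f"ice cream vs sunglasses:   r = {correlation(ice_cream, sunglasses):.2f}")
print(f"temperature vs ice cream:  r = {correlation(temperature, ice_cream):.2f}")
print(f"temperature vs sunglasses: r = {correlation(temperature, sunglasses):.2f}")
# All three correlations come out high, but only temperature is the actual cause here.
```

The correlation coefficient alone can't tell you which of those relationships is causal; that's what experiments are for.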
Causal metrics are powerful because once you discover them, you can directly manipulate the cause-and-effect relationship.
Correlated metrics have a terrible reputation thanks to bloggers and news reports that use them to create questionable, misleading graphs. Here is a comical example from the website Spurious Correlations:
Obviously, Nicolas Cage isn't drowning people with his films, but a graph like this can be very convincing. Be critical of any graph that you see and look for evidence of the causal relationship.
Correlated Metrics can be a good sign that you’re on the right track to Causal Metrics. Most causal metrics start out as correlated metrics.
So how do you know if something is causal? Experiments are a vital way to isolate variables from correlations and determine if there is causality (learn how to do that in our course on product experiments).
Before you can say that something "caused" the numbers to increase, you'll need to prove it in a statistically significant experiment. I'd start small with a lo-fi concept test and work my way towards a live A/B test with a confidence level of at least 90%.
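For the A/B test step, a two-proportion z-test is one common way to check whether a difference in task success (or conversion) between two variants is statistically significant. This is a minimal sketch with hypothetical counts, not the only valid method:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical A/B test results: users who completed the key task in each variant.
success_a, n_a = 180, 400   # variant A: 45% task success
success_b, n_b = 220, 400   # variant B: 55% task success

p_a, p_b = success_a / n_a, success_b / n_b
p_pool = (success_a + success_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided test

alpha = 0.10  # 90% confidence level, as suggested above
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
print("Significant at 90% confidence" if p_value < alpha else "Not significant yet")
```

If the p-value clears your threshold (here, 0.10 for 90% confidence), you have evidence that the design change, not chance, moved the metric.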
After a few experiments, you may even discover that the leading metrics from an experiment are causal to your lagging business metrics. A causal link between leading and lagging metrics is a very desirable outcome in UX.
For example, if you can establish a causal relationship between the leading metric of task success and the lagging metric of revenue, you can show the return on investment for UX work (read a case study about a team that connected usability metrics with business metrics here).
If you're the one with expertise in UX metrics, you can be the one to show business leaders how UX meets their goals.
Any UX designer who can do that will always have a job.
Well, that's it for today! 🏁
You just learned Part 2 of UX metrics. 👏👏👏👏👏🏆👏👏👏👏👏
I hope you enjoyed this series so far.
Jeff Humble
Designer & Co-Founder
The Fountain Institute
P.S. I'm giving a talk on UX metrics on Saturday, September 10th. If you liked this email, you'll love the talk. Grab a free ticket here. Here's the poster:
The Fountain Institute is an independent online school that teaches advanced UX & product skills.