Understanding UX Metrics, Part 1


Dear Reader,

This issue kicks off a three-part series on UX metrics here at Beyond Aesthetics, continuing over the next two weeks. Jump to Part 2 or Part 3.

Q: What are UX metrics?

UX metrics are the numbers we watch to measure the user experience.

Designers have the opportunity to bring a human-centered viewpoint to metrics if they understand them.

We can and should measure the experience of our users using our products, but quantifying UX can be a balancing act of business and user metrics.

Q: How are UX metrics different from business metrics?

UX metrics differ only slightly from business metrics, mainly in how they’re used and framed.

  • Common UX metric = task success rate
  • Common business metric = conversion rate

Sometimes the difference is more about the framing and point-of-view than the metric itself.

UX metrics are usually more specific than business metrics, and many UX metrics are also business metrics.

The goal with UX metrics is to center measurement on the user and their point of view.
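
To make that framing difference concrete, here's a minimal sketch with made-up numbers (the checkout scenario and figures are an illustration, not from any real product) showing how the two example metrics above are typically calculated from the same flow:

```python
# Hypothetical checkout-flow numbers, purely for illustration
visitors_to_site = 5_000          # everyone who landed on the site
attempted_checkout = 400          # users who started the checkout task
completed_checkout = 300          # users who finished it

# UX framing: of the people who tried the task, how many succeeded?
task_success_rate = completed_checkout / attempted_checkout      # 0.75 -> 75%

# Business framing: of everyone who showed up, how many converted?
conversion_rate = completed_checkout / visitors_to_site          # 0.06 -> 6%

print(f"Task success rate: {task_success_rate:.0%}")
print(f"Conversion rate:   {conversion_rate:.0%}")
```

Same event (a completed checkout), two different denominators: the UX metric asks how well the experience worked for the people who attempted it, while the business metric asks how much of the total audience the funnel captured.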

I like to teach UX metrics through these five dualities:

  • Qualitative vs. Quantitative
  • Attitudinal vs. Behavioral
  • Vanity vs. Actionable
  • Correlated vs. Causal
  • Leading vs. Lagging

Today, I want to talk about the qualitative vs. quantitative and attitudinal vs. behavioral sides of UX metrics.

Let's go! 🏁

Qualitative vs. Quantitative UX Metrics

Qualitative data is messy, unstructured, and anecdotal. It’s difficult and time-intensive to measure qualitative data, but it’s worth it for the richness of understanding you will uncover.

Quantitative data is precise and easy to measure, but it’s tough to extract insights from it alone.

Quantitative data can tell you the “how many” and “how much,” while qualitative data can get at the “why” and “how.”

Design is traditionally a subjective, messy practice that values insights and isn’t easily quantifiable. That’s why it’s no surprise that many designers are stronger with qualitative data than quantitative data.

Here's a graphic that I made that helps me remember the difference:

At the Fountain Institute, I teach a course on qualitative data as well as a course on quantitative data. There is power in both, and if you can work with both, you will always have a job in product discovery.

Attitudinal vs. Behavioral UX Metrics

Attitudinal metrics are measurements of how users feel about your product. These metrics are dependent on the user’s ability to answer the questions honestly since they are self-reported.

We mainly gather these metrics from surveys, interviews, or any place where we ask the user about their experience. These data collection methods are time-intensive and manual.

Attitudinal metrics are better suited to pre-experiment research or post-experiment evaluations. They can give you the “why” behind the behavioral data.

Behavioral metrics cover what users “do,” and attitudinal metrics cover what users “say.”

Behavioral metrics measure real-world interactions with your product, such as clicking, opening, or downloading.

We usually track behavioral metrics without the user’s knowledge using analytics tools such as Google Analytics or bit.ly.
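
As a rough sketch of what each type looks like as a number (the survey scale, event names, and data below are invented for illustration, not pulled from a real analytics tool):

```python
# Attitudinal: self-reported ratings from a post-task survey (1-5 scale, invented data)
survey_ratings = [5, 4, 4, 2, 5, 3, 4]
avg_satisfaction = sum(survey_ratings) / len(survey_ratings)         # what users *say*

# Behavioral: events pulled from an analytics export (invented event names and users)
events = [
    {"user": "a", "event": "export_clicked"},
    {"user": "b", "event": "page_viewed"},
    {"user": "b", "event": "export_clicked"},
    {"user": "c", "event": "page_viewed"},
    {"user": "d", "event": "export_clicked"},
]
users_seen = {e["user"] for e in events}
users_who_exported = {e["user"] for e in events if e["event"] == "export_clicked"}
export_adoption = len(users_who_exported) / len(users_seen)          # what users *do*

print(f"Average satisfaction (attitudinal): {avg_satisfaction:.1f} / 5")
print(f"Export adoption (behavioral):       {export_adoption:.0%}")
```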

Problems arise when users say one thing and do another.

For Example:
Many people will state an intention to live sustainably, but their behavior may not reflect their intent. Understanding the gap between the intention and the action can provide insights into a solution.

Intention and action can be miles apart, a concept known as the "Say-Do Gap." Learn more about the Say-Do Gap in my free 7-day mini course on UX Research.
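
If you can join survey answers to behavioral logs for the same users, you can put a number on that gap. Here's a minimal sketch with invented data (the field names and figures are assumptions for illustration only):

```python
# Invented per-user data: what people said in a survey vs. what they did in the product
users = [
    {"id": 1, "says_wants_eco_option": True,  "chose_eco_option": True},
    {"id": 2, "says_wants_eco_option": True,  "chose_eco_option": False},
    {"id": 3, "says_wants_eco_option": True,  "chose_eco_option": False},
    {"id": 4, "says_wants_eco_option": False, "chose_eco_option": False},
    {"id": 5, "says_wants_eco_option": True,  "chose_eco_option": True},
]

say_rate = sum(u["says_wants_eco_option"] for u in users) / len(users)   # attitudinal
do_rate = sum(u["chose_eco_option"] for u in users) / len(users)         # behavioral
say_do_gap = say_rate - do_rate

print(f"Say: {say_rate:.0%}  Do: {do_rate:.0%}  Say-Do Gap: {say_do_gap:.0%}")
# Say: 80%  Do: 40%  Say-Do Gap: 40%
```

A big gap doesn't tell you which side is "right"; it tells you where to dig with qualitative research.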

It’s important to closely watch any Say-Do Gaps because potential ethical issues arise when our products force behavior that doesn’t match user attitude.

For Example:
Designers might notice that users respond more to advertisements featuring scantily-clad models. This emergent behavior might not reflect users’ true intentions, and it takes the experimenter’s judgment to catch discrepancies between behavior and attitude before they become problems.

Behavioral data is generally more reliable and easier to count...but it takes more setup than a survey. Before you can gather any behavioral data, you need an experience for users to interact with, even if that means simulating one with a concept test.

Ok, now that you get the two dualities, let's put it all together and look at the domain of UX metrics:

Designers usually measure a user experience based on a mix of attitudinal and behavioral quantitative metrics.

UX metrics most often live on the right side of the graph because they're usually built on a combination of attitudinal and behavioral quantitative metrics.

Of course, this will change depending on the company.

BONUS: Knowing the four poles allows you to better choose which UX method to use based on the resulting data:

Read more about this framework by Christian Rohrer here.

That's it for today! 🏆

You just learned Part 1 of UX metrics. 👏👏👏👏👏👏👏

Check out Part 2, where we'll get more advanced and talk about three more powerful dualities in UX metrics.

Talk to you next Thursday!

Jeff Humble
Designer & Co-Founder
The Fountain Institute

P.S. We just announced an event on September 10th called How to Lead with UX Metrics. RSVP for free here.

Here's the poster for the event:

P.P.S. Last week, we hosted a workshop called How to Interview Users. Grab the recording + Miro assets here for free.

The Fountain Institute

The Fountain Institute is an independent online school that teaches advanced UX & product skills.
