Tracking UX
Finding Insights Across Major Features

People talking about insights on the UX metrics survey
Year: 2024
Role: Product Design Lead
Team: 1 Founder, 1 PM
Skills: Product Strategy, Design Ops

Context

When I started working at Vendoo, I quickly realized the team had no way to measure whether the design of released features worked or needed improvement.
This was all me. I led the effort to build a process for periodically documenting how users received what we shipped, so we could assess whether something needed improvement or we could move on to a different objective. We needed to know what users thought of every released feature, and whether they were struggling with specific flows.
I initially built a process based on two tests: a custom version of CSAT (customer satisfaction score) and the standard SUS (system usability scale).
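
To make the scoring concrete, here is a minimal sketch of how the two scores are computed. The formulas are the standard ones for CSAT and SUS; the function names and example ratings are illustrative, not our actual Refiner setup.

    def csat_score(ratings: list[int]) -> float:
        # CSAT: share of respondents rating 4 or 5 on a 1-5 satisfaction scale.
        satisfied = sum(1 for r in ratings if r >= 4)
        return 100 * satisfied / len(ratings)

    def sus_score(responses: list[int]) -> float:
        # SUS: 10 Likert items answered 1-5. Odd items contribute (answer - 1),
        # even items contribute (5 - answer); the sum times 2.5 maps to 0-100.
        assert len(responses) == 10
        total = sum((r - 1) if i % 2 == 1 else (5 - r)
                    for i, r in enumerate(responses, start=1))
        return total * 2.5

    csat_score([5, 4, 3, 5])                    # 75.0 - three of four users satisfied
    sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])   # 75.0 - a solid usability score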

So What?

Measuring UX is tricky, but with these test results I would at least have an idea of whether a feature was working as expected. I didn't want just a number (quantitative); I also wanted interviews (qualitative), so I added a CTA to every released survey to get on calls with users who were willing to share their thoughts.
With the UX metrics (quantitative and qualitative) backing up every released feature, we assessed whether it needed improvements right away (set priority) or whether we could move on for now.
Virtual Conversation and CSAT Score

I noticed that every time we improved or released something, we didn't follow up.

We were only following the feedback the CS team was getting and usage metrics from Mixpanel. I realized there was a big gap on the design side: we needed direct feedback on everything we released.
By showing stakeholders the design impact of every released feature, I gave them another perspective on what we should focus on.
UX Metrics Overview
We document every single UX Metric test in Notion. Screenshot of some results.

Keeping a pulse on our users

I didn't want to just launch the UX metrics surveys after we released a feature and call it a day. A product is a living thing and changes over time; a feature that was well received at one point could underperform a year later. That's why I also built a one-year follow-up UX metrics survey for all released features.
By doing this, we kept our ears close to our users. It helped us keep iterating and improving, and even retire older features to avoid product bloat.
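
To give an idea of how the follow-up worked in practice, here is a sketch of the comparison between launch and one-year scores. The feature names, scores, and the 5-point drop threshold are illustrative assumptions, not the exact rule we used:

    # Hypothetical launch vs. one-year SUS scores per feature.
    scores = {
        "bulk delist": (82.5, 84.0),
        "sales analytics": (77.5, 68.0),
    }

    DROP_THRESHOLD = 5.0  # assumed cutoff for "needs another look"

    for feature, (launch, followup) in scores.items():
        if launch - followup > DROP_THRESHOLD:
            print(f"{feature}: SUS dropped {launch - followup:.1f} points, re-prioritize")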
CSAT Overview
We document every single UX Metric test in Notion. CSAT screenshot of some results.
SUS Table
We document every single UX Metric test in Notion. SUS screenshot of some results in a table I set up that feeds on Refiner's survey results.
SUS Overview
We document every single UX Metric test in Notion. SUS screenshot of some results broken down by questions.

What’s next?

CSAT and SUS aren't enough to assess whether a feature is essential to the product. I have a feeling many of our features go unused not only because they aren't being discovered, but because they aren't useful to users.
I want to create a third test based on the Sean Ellis score, applied at the feature level rather than to assess product-market fit for the whole product. It's a bit unusual, and I haven't seen it done elsewhere, but it makes sense to me to have something like this per feature.
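
For context, the classic Sean Ellis test asks users how they would feel if they could no longer use the product, and treats roughly 40% answering "very disappointed" as the product-market-fit benchmark. A minimal sketch of the feature-level version I have in mind (the threshold follows the standard test; running it per feature is my own adaptation, and the names are illustrative):

    from collections import Counter

    def sean_ellis_score(responses: list[str]) -> float:
        # Percentage answering "very disappointed" to: "How would you feel
        # if you could no longer use this feature?"
        counts = Counter(r.lower() for r in responses)
        return 100 * counts["very disappointed"] / len(responses)

    def feature_feels_essential(responses: list[str], threshold: float = 40.0) -> bool:
        # The 40% benchmark comes from the product-level test; whether it
        # transfers cleanly to individual features is what I'd want to find out.
        return sean_ellis_score(responses) >= threshold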
