Usability testing analysis plays a crucial role in the research process, helping to identify usability issues, gather feedback, and ensure that digital products meet users’ needs.
At Nomensa, our standard usability process typically involves the following steps:
- Recruitment of participants
- Creating a discussion guide
- Facilitating interviews
- Analysing the insights
- Reporting on the findings
We’re going to focus on just one of those steps – analysing insights. This process may vary based on client requirements, so before diving into analysis, it’s important to have the following:
- Project objectives
- Interview notes
- Recordings of the interviews (transcripts are also nice to have)
At Nomensa, we use digital interactive whiteboard tools like Miro and Lucid for notetaking during sessions. This helps save time and resources, providing a solid starting point for analysis.
Step 0 (pre-analysis): establishing a note-taking structure
Set yourself up for success by creating a structured note-taking approach before you start conducting interviews.
- Allocate a different coloured post-it note and participant number tag to each participant, so you always know which insight came from which participant.
- Create sections to help group insights. Break your discussion guide down into separate areas to create insight categories, e.g. opening questions, individual tasks and summary questions. Add some empty post-its to each area so you are ready to capture all feedback.
- Incorporate screenshots. Having visuals of the screens you are testing will help group insights with the relevant screen or page location.
- Include participant recruitment criteria. If needed, add details like age, gender, persona type or device used. This helps you remember each participant individually, and it could affect how you analyse an insight.
Tip: Balancing note-taking and facilitating interviews takes practice. If you miss noting an important insight, record the time instead, so you can find it again when you watch back the recording.
To help give context to the following steps, imagine a round of usability interviews with 12 participants has taken place. Using a prototype of a retail website, users were asked to complete tasks such as purchasing a bike, comparing the details of two bikes, tracking their order and finding support information.
Step 1: Grouping insights
Thanks to your note-taking structure, your post-its are already categorised by question section, screen, task or any other section you chose to create, which means you can jump straight into analysing the interview outputs.
Here are some examples of questions to focus on answering during this step:
- Task completion: How well were participants able to complete the tasks?
- User experience: How did participants feel using the product?
- Issues encountered: What problems did they face?
- Feedback provided: What feedback did they give?
Group recurring insights within separate categories and assign headings. These headings can vary depending on your analysis and project objectives. For example, headings could be based on:
- A user’s behaviour, e.g. navigation habits
- An individual question, e.g. would the participant take this offer?
- A page feature, e.g. opinions on how useful the product description is
Expect your groups to evolve as you delve deeper into the analysis. During this process you will generate new groups and merge others depending on their size. Group positive, neutral and negative insights alike – it's just as important to note the aspects of the product that are working well as the areas for improvement.
Some participants may have opposing opinions within a group, e.g. some may think there is too much information on a page and some may like the level of detail. In this case, don’t lose that detail. Create sub-groups to paint the most accurate picture of testing.
This is where you may want to look back at your interview recordings or transcripts to ensure you have captured every little detail.
Step 2: Tallying the insights
Time to count up the recurring themes. Tallying answers your initial questions and helps you evaluate the project objectives. When communicating how many participants shared the same insight, we use this scale:
- Most: 7–12 of 12 participants
- Some: 4–6 of 12 participants
- Few: 2–3 of 12 participants
Remember that a single participant’s unique insight can be valuable, so don’t always count them out. Leverage your expertise and understanding of the project objectives to gauge its importance. Counting up the recurring themes will help to start prioritising the findings.
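If your notes are captured digitally (for example, exported from a Miro or Lucid board into a spreadsheet), the tallying and labelling above can be scripted. Here is a minimal Python sketch of that scale, assuming your notes can be read as simple (theme, participant) pairs – the function and data names are illustrative, not part of any tool:

```python
from collections import defaultdict

def frequency_label(count: int) -> str:
    """Map a participant count (out of 12) to the reporting scale."""
    if count >= 7:
        return "Most"
    if count >= 4:
        return "Some"
    if count >= 2:
        return "Few"
    return "Single"  # one-off insights still deserve expert judgement

def tally_themes(notes):
    """notes: iterable of (theme, participant_id) pairs from your board."""
    participants_per_theme = defaultdict(set)
    for theme, participant in notes:
        # Use a set so a participant mentioning a theme twice counts once.
        participants_per_theme[theme].add(participant)
    return {theme: (len(ids), frequency_label(len(ids)))
            for theme, ids in participants_per_theme.items()}
```

For example, `tally_themes([("checkout confusing", "P1"), ("checkout confusing", "P2")])` would report that theme as shared by a "Few" participants. Counting unique participants per theme, rather than raw post-its, avoids one vocal participant skewing the tally.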
Step 3: Summarising and prioritising the findings
Now for the important part, the 'so what?' stage. Why do we care about all these insight groups?
After the groups have been defined, craft a statement communicating each group's overall pattern or finding. In this statement, highlight the impact of any issues, as this will demonstrate to clients or team members the problem's significance and the potential benefits of addressing it.
While the main priorities will already be evident, all findings must be prioritised to provide clients or team members with a manageable list. This list will be turned into recommendations or actions in the next step. Decide on a format for communicating priorities. For example, you could use one of these sets of labels:
- Critical, serious and minor
- Traffic light system (red, orange, green)
- Must, should and could
Tip: This point in the process is a good place to seek input from a colleague who wasn't involved in the sessions. Fresh eyes will help keep the analysis neutral and stop biases creeping in.
Step 4: Presenting insights and recommendations
You have done the hard bit. After thoroughly evaluating, grouping and prioritising the data, it's time to offer recommendations and share findings with the team or clients. Your project approach may determine the communication format. At Nomensa, written research reports are one of the most common formats, and they should include the following:
- Executive summary – the project and recommendations in a nutshell
- Method overview – what we did
- Research findings summary – high-level summary
- Detailed findings descriptions (showing positives and areas for improvement)
- Supporting evidence (screenshots, quotes, video clips)
- Prioritised recommendations – actionable next steps
- Participant details – anonymised list of relevant details
Usability testing analysis empowers teams to enhance digital products based on real user experiences. You ensure that user needs are met by analysing insights, prioritising recommendations, and iterating. Remember, usability testing is an ongoing process – repeat it to track progress and achieve optimal user satisfaction.
To amplify your efforts, complement usability testing with other methods. Benchmarking and discovery interviews provide diverse insights, while A/B testing and SEO/analytics audits guide data-driven design. Embrace a user-centred approach by incorporating expert reviews of your site and competitor analysis.
Usability testing will give you evidence-based, actionable insights, equipping your team with strategies to create products that truly resonate with users. Contact us today to begin your journey.