
The active role of participants and facilitators in UX research


Having a background in psychological research methods, I’ve always had an interest in understanding the dynamics of one-to-one research. That interest was reignited by some memorable talks at the Collaborate Bristol conference earlier this month, where the testing process itself featured in several presentations.

The roles and experiences of both the user and the facilitator need to be considered as active, contributory factors in the outcomes of traditional usability testing. When a user comes into our test lab to volunteer for a usability testing session, we need to acknowledge that they are not simply a unanimous voice of our audience (or of a particular segment or persona) but a person entering a novel environment, with all of the concerns and behaviours that entails. How we engage with that person, and how we have communicated with them previously, can affect the expectations they bring into the session, and therefore the outcomes. Similarly, the way we structure the session and ask questions can affect how receptive a participant thinks we are to their suggestions and what they think our preferred outcome would be.


‘User testing’

This person will have been told they are taking part in a research session; sometimes it will be called a testing session or an interview. We spend vast amounts of time exploring users’ mental models of services or experiences: what it means to book a holiday, how they mentally represent a charity, their interpretations of the language we use. But we don’t always consider the implications of our own choice of language on how a user understands the process of usability testing itself.

Even if every effort has been made to refer to the process as ‘research’ throughout initial recruitment, being greeted with a friendly ‘Are you here for the testing?’ will place testing as a concept clearly in the participant’s mind. Each of us will have different representations of the words ‘test’ and ‘interview’, but for many they are likely to suggest a process of evaluation, where there is a right or wrong answer and failure is a potential outcome.

The basics of usability testing stress that we need to put the participant at ease, asking them to behave as naturally as possible and to pretend that they are simply trying to use the product for themselves within their everyday lives. But how naturally can a participant behave when they have been primed with the word ‘test’? The old catchphrase ‘It’s the device/product/site/app we’re testing, not you!’ may not be enough to shake the idea of grades and school halls from the participant’s mind. The difference between usability testing (testing the product’s usability) and user testing (testing the person) is also an important distinction.


Demand characteristics

Depending on the audience you are researching and the nature of the product you are testing, participants may enter a session with very clear ideas about what is expected of them. Some participants will believe that their role is to be a critic of the service, and will focus on finding negative points to discuss in the testing. Others will enter the session wanting to acknowledge the work that has gone into the site. Presuming that you are a representative of the team that designed the product, and not wanting to offend those involved, some users will proceed to compliment every aspect of the process.

In either case, the outcome is a participant approaching the task with an artificial level of positivity or negativity, resulting in a less than natural picture of how your product might be received by this audience.

Below is an example of an exchange from a research session I ran less than a month ago. We were testing a prototype of only a couple of pages, and were interested to understand what this audience thought the rest of the site would contain. This particular participant had been very eager to please, reassuring me throughout the session of how enjoyable and beautiful every aspect of the prototype was.

‘What would you expect to happen if you clicked on that symbol?’

‘It would turn blue’

This is an example of a situation where the participant was so keen to say the right thing that they answered without really considering their response. The answer wasn’t based on any previous interactions on the site, there was nothing on the page to indicate this behaviour, and the site in question didn’t use blue anywhere in its colour scheme. It came from a need to give an answer quickly, and a hope that it was what I wanted to hear.

Although participants will always be keen to respond in a way that they think is appropriate, demand characteristics can be reduced by the way we frame research. For example, reducing the personal association between yourself as facilitator and the product you are testing will allow participants to feel that they can be more honest.

Similarly, I’ve been asked in the past by clients to show users the old design, then show them the new one and ask for feedback on the changes they’ve made and the great new design. Inevitably the participant looks at your old website, last redesigned in 2006, and your new, shiny responsive site and says ‘Wow! This new one is great’. Although this approach may lead to happy stakeholders and some great soundbites about what a good job we’ve all been doing, it fundamentally misses the point of testing the product. By framing the session as a comparison between old and new, we may miss a vital exploration of how well the new site actually meets any of the users’ key needs (as Mike discusses in Framing UX Research Questions).


Experimenter bias

Just as users enter the process with a pre-existing idea of how they are supposed to behave, as facilitators we also have ideas of how we expect, or at least would like, the feedback to go.

In the world of medical trials, both doctors and patients will be blind to the treatment type to prevent any intentional or unintentional changes in behaviour resulting from knowledge of the treatment group. This knowledge could bias the way that they behave around, speak to and treat the patient, which could produce misleading results.

Although we’re not comparing life-changing medications, there are inevitably outcomes of testing that are more appealing than others. This might be due to an awareness of the client’s objectives, a personal stake as the designer of the process or interface, or simply because we’d like to see a particular approach test well. A key skill for a facilitator of one-to-one testing is preventing these biases from affecting the cues we give, which a participant can pick up on and react to within the session.


Leading questions

A classic example of experimenter bias comes in the form of leading questions. Pretty much anyone who has conducted this kind of user research will have been told at some point to avoid asking leading questions, but what does this actually mean? Although we all know that leading a participant to a certain conclusion isn’t a great way of testing, some classic studies from the fields of psychology and eyewitness testimony help to highlight the extent to which a person’s judgement can be affected by the way a question is asked.

Loftus & Palmer’s first research into the area (1974) illustrated this clearly in a series of studies exploring how subsequent memory recall could be affected by leading questions. Participants were shown footage of a car crash, and later asked to estimate the speed the cars were moving when the accident occurred.

‘About how fast were the cars going when they hit each other?’

They found that replacing the word ‘hit’ with others such as ‘smashed’, ‘contacted’, ‘collided’ or ‘bumped’ significantly affected participants’ estimates of the speed of the cars. Despite having seen the same footage, participants who were asked about when the cars ‘smashed’ as opposed to ‘bumped’ estimated speeds around 10mph faster on average. Interestingly, those who heard ‘smashed’ were also more likely to remember seeing broken glass in the film, despite there not being any in the footage.

Even the slightest change in wording can affect how a participant processes the question and chooses to phrase their response. Although the difference between ‘What do you think of the page?’ and ‘What do you think of the new page?’ is subtle, the impact on how a participant answers could be great.


Empathy for participants

Having empathy for the participant’s own experience of testing, as well as for our anonymous users, will lead to more detailed insight and more useful outcomes from research. Understand the context that participants are in, consider how the experience itself can be made easier, and be aware of the role we as facilitators have in framing the research. Acknowledging the artificial nature of testing, and having an awareness of how the structure, context and language of the session can affect participant feedback, will help you get the most value from usability testing research.

UX research is at the very heart of what we do and a key component of any design and build work. Getting the research right at the start ensures that your project runs smoothly and meets the needs of your customers.
