When launching a new website or new functionality on your website, you want to be confident that it will be successful and provide a good return on investment. For example, you don’t want to build a forum that doesn’t attract any comments.
How can you be sure that your site will be successful? Research and testing with your audience is one way to help ensure confidence. However, doing the wrong type of test can generate false confidence and lead to costly mistakes. Coca-Cola is one such example.
Case Study: Coca-Cola
When Coca-Cola launched a new formulation of Coke (popularly known as “new Coke”), they’d done research and were confident that it would be a winner in the marketplace; after all, it beat Pepsi in head-to-head taste tests. However, it quickly became apparent that it was not the success they’d hoped for. There was a public outcry and numerous campaigns to bring back “old Coke”. Coca-Cola gave in to public pressure and re-introduced the original version to the marketplace, under the name “Coca-Cola Classic”, less than three months later.
What Went Wrong?
Coca-Cola had done consumer research including surveys, focus groups and taste tests, all of which had led them to think that the new formulation would be a winner. If people preferred “new Coke” in testing, why didn’t they like it when it was launched?
Why Did It Go Wrong?
One explanation is simply that Coca-Cola did the wrong type of test. The commonly used “sip test” involves the consumer taking a sip of each of two drinks and stating which one they prefer. In this type of test, the sweeter drink usually wins. Hence “new Coke”, which was a much sweeter formula than original Coke, fared well in the sip tests.
However, think about how someone typically drinks a can or bottle of Coke. Firstly, they drink the whole amount, not just one sip. Also, they’re probably doing something else at the same time, such as watching television or eating a meal. In this situation, “new Coke” was too sweet and people did not like it. Had Coca-Cola tested a realistic situation, such as asking someone to drink a whole can of the new formula while watching television, they probably would not have had such misleading findings.
On top of this, Coca-Cola may also have misinterpreted some of the data from the research. In the focus groups, some participants expressed misgivings about losing a product to which they had a lot of emotional attachment. However, these opinions were not given much weight in the light of the overwhelming success of “new Coke” in the taste tests. But when the new formula was launched, some people felt the loss of old Coke as a betrayal – there was great emotional attachment to the brand.
Thus, not only was the new formula too sweet – a fact that went undetected because the right type of test was not used – but it was also received poorly by people who felt a great emotional attachment to the brand – a fact that was uncovered in the research but dismissed as irrelevant.
So, what are the lessons that you can learn from the failure of the “new Coke” launch if you’re planning to launch a new website or to update your existing site?
First and foremost, just doing a usability test is not enough. If you don’t design the test carefully, you will not really know whether your site is going to be successful, whether that new functionality is worth the investment, or whether users will accept the new features. This is where experience really helps.
Experience Really Is Everything
You need to think carefully about who uses your site, what they use it for, when they visit your site and for how long, and any other external factors that may affect them. This will ensure that the test scenarios are realistic. For example, if your customers normally browse a holiday booking site with their partner in a leisurely fashion in the evening, the test needs to reflect that. Testing the holiday booking site with individuals would miss vital data about how the site is used and may lead to ill-informed decisions. For example, testing with couples ensures that we fully understand how factors such as price, images of the holiday resort and reviews are balanced, enabling us to design a site that supports this.
Equally as important as the design of the test is its facilitation. Asking people to make a donation on your charity website will show you whether they can find the “donate now” button when asked. It will not tell you whether they will actually feel motivated to make a donation. This is where careful and experienced facilitation of the test session can elicit insight into motivations and real-world decision making. For example, our experienced analysts would ask questions in this situation such as “how much would you donate?” and “have you donated on other charity websites?” to gain a deeper understanding of motivations. This would be complemented by asking participants for more objective feedback such as rating the site and picking adjectives to describe the site. All of this gives a good picture of likely actual behaviour and enables us to design the site accordingly.
Finally, alongside careful test design and facilitation the third important factor is the interpretation of the data gained. How do you know what data is key and what can be ignored? Again, this is where experience tells. A good knowledge of psychology and general behavioural principles enables an experienced analyst to determine which datapoints cannot be ignored and which are outliers. For example, by knowing what typical search behaviour is, we can discount participants who display unusual search tactics.
In order to be confident that your investment in your website is going to be money well spent, you need to ensure that it is correctly tested with your audience. This means a test that is well designed, well facilitated and well interpreted. Only when all three come together can you be confident in what the data are telling you.