I’m back from the recent client-side ESOMAR conference in NY. It was ESOMAR’s first client-side event in the US and was well attended by a great mix of clients from across multiple industries. Dig Insights participated as a sponsor.
A few things that I noted:
Clients are struggling to compare results directly across countries. Respondents in some countries (e.g. China) tend to give very high ratings across the board (e.g. saying that every attribute is important), while respondents in some countries (e.g. Germany) tend to give generally lower ratings with a few higher ones. The Irish Food Board solved this problem by indexing scores (dividing the score for each attribute in each country by the average score in that country). While that does help with cross-country comparability, it does not solve the issue of low differentiation in countries like China. Indexing does not create differentiation if respondents say that everything is important. At Dig Insights, we often address this with trade-off methodologies like MaxDiff. Instead of asking people if an attribute is important, we present them with a few attributes and ask them to identify which is most important and which is least important. The resulting data allows for direct comparability across countries, eliminates scale bias and creates differentiation because respondents must always prioritize.
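For readers who like to see the mechanics, the indexing approach and a simple count-based MaxDiff score can be sketched in a few lines of Python. All attributes, countries and numbers here are hypothetical, purely for illustration, and real MaxDiff studies typically use more sophisticated scoring (e.g. hierarchical Bayes) than the raw best-minus-worst counts shown:

```python
# Illustrative sketch: indexing ratings vs. simple MaxDiff count scores.
# All numbers and attributes are hypothetical.

# --- Indexing: divide each attribute's score by that country's average score ---
ratings = {
    "China":   {"taste": 9.1, "price": 9.0, "packaging": 8.9},  # uniformly high
    "Germany": {"taste": 7.2, "price": 5.1, "packaging": 4.8},  # more spread
}

def index_scores(country_ratings):
    """Index each attribute against the country's average rating."""
    avg = sum(country_ratings.values()) / len(country_ratings)
    return {attr: round(score / avg, 2) for attr, score in country_ratings.items()}

indexed = {country: index_scores(r) for country, r in ratings.items()}
# Indexing aids cross-country comparison, but China's indexed scores all sit
# near 1.0 -- indexing cannot create differentiation that isn't in the data.

# --- MaxDiff: each task shows a few attributes; the respondent picks the most
# and least important. A simple score is (times picked best) - (times picked worst).
tasks = [
    {"shown": ["taste", "price", "packaging"], "best": "taste", "worst": "packaging"},
    {"shown": ["taste", "price", "packaging"], "best": "price", "worst": "packaging"},
    {"shown": ["taste", "price", "packaging"], "best": "taste", "worst": "price"},
]

def maxdiff_counts(tasks):
    """Best-minus-worst count scores across a set of MaxDiff tasks."""
    scores = {}
    for t in tasks:
        for attr in t["shown"]:
            scores.setdefault(attr, 0)
        scores[t["best"]] += 1
        scores[t["worst"]] -= 1
    return scores
```

Because every task forces a most/least choice, the count scores are always differentiated, even for respondents who would have rated every attribute as important on a scale.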
Behavioural economics remains a big topic. A PepsiCo presentation made the interesting point that behaviour is a result of experience. For example, the behaviour of choosing something at a shelf is influenced by the experience of being at that shelf. So we can’t claim that behavioural economics is a priority for us if we are not focused on researching the experience. There are a few solutions. One presentation described using high-end VR headsets to replicate the experience of ordering in a QSR. It was impressive, but it can’t scale, so the client had used it qualitatively. Definitely a step forward vs. traditional renderings. They learned some insights that are purely behavioural. For example, one beverage dispenser the client was considering required respondents to reach inside with their glass (the dispenser would then fill the glass). People felt very uncomfortable putting their hand inside something they could not see into, so they would first bend over to look into the hole. This led to a redesign. While it is not as cool as 3D VR, Dig Insights does a lot of work with replicated environments (virtual shelves, virtual online ordering environments) that we manipulate to understand how changes affect behaviour. It’s a more modest approach that allows for scale and quantification.
And there was the question of norms and how norms might kill creative ideas. One client discussed how they were wary of testing really new ideas with mainstream consumers and comparing the performance of those ideas to norms. So they tested them with more cutting-edge consumers, but then could not compare the performance of those ideas vs. norms. This made it hard to build clear action standards. At Dig Insights, we are not huge fans of norms in general. Buy us a drink sometime and we can explain why 😊. We prefer to include real-world benchmarks. So if you are testing a cutting-edge beverage idea among cutting-edge consumers, test it against in-market beverages, both mainstream and cutting-edge. This will give you a clear sense of your idea’s potential vs. the real products against which it needs to compete.