In over three decades of research, I have frequently been asked to determine the “importance” of various things to target customers. Often this is in the context of advertising (what features or benefits should we emphasize?). Sometimes it’s part of a product improvement effort (what should we improve to increase satisfaction?). Occasionally, it is to help optimize new product designs. While I don’t have a crystal ball, I do have some research techniques that I have used to help many leading manufacturers discern what matters to their target customers. In this article, I discuss appropriate ways to measure “importance” and explain why the most common method is typically the least reliable.
Why simple “importance” ratings don’t work
A common mistake that many make is to simply conduct a survey and ask respondents to rate the “importance” of various items. For example, let’s say we wanted to know the importance of various attributes to boat owners. We have respondents rate several factors on a 10-point scale. The results might look something like the chart on the right. Look familiar?
The problem is that everything is bunched together near the top of the scale because respondents have no incentive to rate anything as unimportant. Imagine being the product engineer tasked with developing a high-quality product that is also low-priced because of this data.
Another problem is that the range of options within each attribute is not taken into account. For example, going from a 2-year to a 3-year warranty might not be a big deal, but 10 years of protection could be a game-changer. Without this context, the full potential of the attribute is unknown.
How to Measure Importance
While there is no “one-size-fits-all” approach, the following are valid ways to measure “importance” in three common business situations:
1. What features or benefits should we emphasize?
Should the advertising focus on quality or on ride and handling? Layout or value for the money? If the goal is to determine the relative importance of attributes in general, an excellent research technique is “Max-Diff” (maximum difference scaling).
In a Max-Diff exercise, respondents are shown several subsets of items and are asked to pick the best and worst (or most and least important) from each set. This is done repeatedly with different subsets of the items (e.g., product attributes) in order to determine the rank-order appeal or importance of each. To see how this works, check out my brief ice cream flavors example.
Unlike a simple “importance rating”, a Max-Diff analysis forces respondents to make tradeoffs. And, with this, you get much better discrimination.
Max-Diff offers a few other advantages too. One is the reliability of the results. People are much better at identifying extremes than at judging subtle differences between items. Because respondents only have to pick out the best and worst from each set, their task is easier and their answers are therefore more consistent.
Another advantage of Max-Diff is that you can evaluate a fairly large number of items. This is because respondents are only shown a subset of items with each evaluation task. This makes it possible to evaluate 20 items, for example, without overwhelming respondents.
Finally, the results of a Max-Diff analysis are fairly easy to interpret. The percent of times an item is selected as the most and least favorite (or important) is displayed, and the difference between these two percentages determines the rank-order appeal. You can see what this looks like by taking my brief Max-Diff survey on ice cream flavors.
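For the simple “counting” version of this analysis, the arithmetic is easy to sketch in code. The Python snippet below is purely illustrative: the attribute names and choice data are made up, and production studies often use more sophisticated estimation (e.g., hierarchical Bayes) rather than raw counts.

```python
# Illustrative Max-Diff "counting" analysis.
# Each row records one choice task: the subset of items shown, plus which
# item the respondent picked as best (most important) and worst (least).
from collections import Counter

tasks = [
    {"shown": ["quality", "price", "warranty", "styling"], "best": "quality", "worst": "styling"},
    {"shown": ["price", "warranty", "ride", "styling"], "best": "price", "worst": "warranty"},
    {"shown": ["quality", "ride", "price", "warranty"], "best": "quality", "worst": "warranty"},
    # ...a real study would have many tasks per respondent
]

shown, best, worst = Counter(), Counter(), Counter()
for t in tasks:
    shown.update(t["shown"])   # how often each item was displayed
    best[t["best"]] += 1       # times picked as most important
    worst[t["worst"]] += 1     # times picked as least important

# Best-worst score: % of exposures picked as best minus % picked as worst.
scores = {item: (best[item] - worst[item]) / n for item, n in shown.items()}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:10s} {score:+.0%}")
```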
While many survey tools do not offer Max-Diff as part of their base package, one mid-priced tool that I use (Alchemer, formerly SurveyGizmo) offers an excellent Max-Diff question type that is easy to administer.
2. How can we improve our existing products?
In order to remain competitive, it is important for manufacturers to periodically adjust and improve their products. And, an excellent way to identify key improvement opportunities is to perform a “drivers analysis” with your customer satisfaction data.
A drivers analysis is a multiple-regression approach that derives the importance of various attributes from how strongly each relates to, or predicts, overall satisfaction. In the resulting regression model, items with the largest coefficients have the greatest impact on satisfaction (assuming the rating scale was the same for each item in the model). Therefore, to identify key improvement areas, focus on items with larger coefficients (higher importance) and relatively low average satisfaction scores.
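As a rough illustration, here is a minimal Python sketch of a drivers analysis using statsmodels. The file name and attribute columns are hypothetical placeholders, not data from an actual study; the ratings are standardized so the coefficients are directly comparable.

```python
# Illustrative drivers analysis: regress overall satisfaction on attribute
# ratings; standardized coefficients serve as derived importance.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("satisfaction_survey.csv")  # hypothetical: one row per respondent
attributes = ["quality", "reliability", "ride", "value", "styling"]

# Standardize the predictors so their coefficients can be compared.
X = (df[attributes] - df[attributes].mean()) / df[attributes].std()
y = df["overall_satisfaction"]

model = sm.OLS(y, sm.add_constant(X)).fit()
importance = model.params.drop("const").sort_values(ascending=False)

# Improvement priorities: high derived importance, low current performance.
performance = df[attributes].mean()
print(pd.DataFrame({"importance": importance, "performance": performance}))
```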
One advantage of derived importance vs. simple “stated importance” techniques is that the process inherently employs tradeoffs. This is because various attributes “compete” with one another in the regression model. The more one attribute “explains” overall satisfaction, the less the remaining attributes can contribute to the regression equation (i.e., less important).
One caution, however, when conducting a drivers analysis: if you have a lot of attributes, or the attributes are highly correlated with one another, you can get misleading results. This is because items included in the model (e.g., quality) will absorb the incremental impact of other, correlated items (e.g., reliability) and thereby understate their relationship. There are ways to mitigate this, such as eliminating correlated variables or performing a factor analysis first, but the details are beyond the scope of this article. The key point is that if you already have a robust customer satisfaction survey in place, a drivers analysis can provide meaningful direction on where to prioritize your product improvement efforts.
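If you want a quick diagnostic for this kind of overlap before trusting the coefficients, variance inflation factors (VIFs) are one common check. Again, the file and column names below are hypothetical:

```python
# Multicollinearity check: variance inflation factors (VIFs) flag attributes
# whose ratings overlap heavily with the other predictors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("satisfaction_survey.csv")  # hypothetical data file
attributes = ["quality", "reliability", "ride", "value", "styling"]
X = sm.add_constant(df[attributes])

for i, col in enumerate(X.columns):
    if col != "const":
        print(f"{col:12s} VIF = {variance_inflation_factor(X.values, i):.1f}")
# Common rule of thumb: VIFs above roughly 5-10 suggest dropping or combining
# attributes, or running a factor analysis first.
```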
3. How do we optimize our new product concept?
Let’s say you want to design the ultimate outboard motor. You know that you can vary things like the speed, fuel economy, weight, noise, price, etc. And, there are multiple levels possible for each attribute (e.g., Fuel economy: 4, 6 or 7.5 mpg; Price: $7,500, $8,000, $8,500, etc.). The number of potential combinations is enormous, but you want to find the combination of attribute levels that generates the greatest consumer interest. This is a perfect scenario for a technique called Conjoint.
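To see how quickly those combinations pile up, here is a quick back-of-the-envelope count (the attributes and level counts are illustrative, not from the study described below):

```python
# Even a modest design explodes combinatorially: five attributes with a
# handful of levels each already yield hundreds of possible bundles.
from math import prod

levels = {"speed": 3, "fuel_economy": 3, "weight": 3, "noise": 3, "price": 4}
print(prod(levels.values()))  # 3 * 3 * 3 * 3 * 4 = 324 possible bundles
```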
In a Conjoint analysis, product “bundles” are created by selecting various levels for each attribute under consideration. Respondents are then shown two or more bundles at a time and asked to indicate which bundle (i.e., product) they would prefer. This is done repeatedly with multiple sets of bundles. The example on the right is an excerpt from an actual study I performed years ago for a major marine manufacturer.
From this evaluation task, the relative importance or “utility” of each attribute (e.g., noise, fuel economy, reliability, speed, price) and attribute level (e.g., 4 mpg, 6 mpg, 7.5 mpg) can be estimated. These are then used in a “what if” simulator to identify the relative appeal of various new concept configurations vs. competitor offerings.
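For paired-choice tasks like these, one simple way to estimate the part-worth utilities is a logit model on the difference between the two bundles’ dummy-coded features, since the probability of picking bundle A depends on the utility gap between A and B. The sketch below is illustrative only: the file names and columns are assumptions, and commercial Conjoint studies typically use more sophisticated estimation such as hierarchical Bayes.

```python
# Illustrative choice-based conjoint estimation for two-bundle tasks.
# With two bundles per task, P(pick A) = logistic(utility_A - utility_B),
# so part-worths can be estimated with a plain logit on feature DIFFERENCES.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical files: dummy-coded levels for each bundle (base levels
# omitted), with identical columns, e.g. fuel_6mpg, fuel_7_5mpg, price_8000...
A = pd.read_csv("bundle_A_features.csv")
B = pd.read_csv("bundle_B_features.csv")
chose_A = pd.read_csv("choices.csv")["chose_A"]  # 1 if bundle A was picked

logit = sm.Logit(chose_A, A - B).fit()
utilities = logit.params  # part-worth utility of each level vs. its base

# Simple "what if" simulator: logit share of preference for candidate
# configurations (rows dummy-coded the same way as the bundles above).
def share_of_preference(configs: pd.DataFrame) -> pd.Series:
    u = configs @ utilities        # total utility per configuration
    expu = np.exp(u - u.max())     # numerically stable softmax
    return expu / expu.sum()
```

The logit share rule used here is just one common choice for the simulator; first-choice (“winner take all”) rules are another.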
As with Max-Diff, Conjoint forces respondents to make tradeoffs with each preference task and so the data is much more realistic and discriminating. Plus, it enables you to explore reactions to a wide range of attribute levels to help you identify the optimal product concept.
While Conjoint can be incredibly powerful, there are a few key drawbacks. First, it can be hard to articulate some attribute levels – especially for subjective items. In the example above, we used analogies to help convey varying levels of smoothness. Second, you can only evaluate a relatively small number of attribute categories. Typically, six to eight items are about the maximum respondents can handle (there are eight in the example above). Finally, a Conjoint analysis can be fairly expensive to implement.
Conclusions
Knowing what is important to your customers is critical for success, but the way to determine “importance” varies by situation. If the goal is to determine the relative importance of various features or attributes for advertising purposes, then Max-Diff is an excellent approach. On the other hand, if you want to optimize a new product concept, then a Conjoint analysis should be considered. Finally, if you are looking for ways to improve your existing products and you have a customer satisfaction program in place, then a drivers analysis is likely the place to start.
While there are many more situations than these three, a key consideration in any effort to measure importance is to incorporate tradeoffs into the process. This will give you far greater discrimination and more realistic results than a simple importance rating survey.
What are your “importance” challenges? Leave a comment or click on the box below and I will give you some suggestions for how to address your specific informational needs.