- On August 3, 2017
- By Michael Bamberger
This article was originally published by Alpha UX
The merits of experimentation and testing have been proven in virtually every context of business. From manufacturing to marketing to hiring practices, the applications of the scientific method in industry seem limitless.
When it comes to user testing and validating product concepts, it is all too easy to hit stumbling blocks and draw false conclusions from a lot of well-intended hard work. Fortunately, a well-defined, iterative process can keep your tests on track and deliver real user insight. Below are seven common mistakes to avoid to help ensure your user tests produce real understanding.
1) Not clearly defining your goals
It sounds basic enough, but defining the goals of your experiments is often more complex than you might expect. What does success look like? What are you actually trying to accomplish? If your goal is vague or miscommunicated, different people on your team will interpret the test results in different ways.
For example, a goal like ‘improve our app’ can mean any number of things. Are you trying to increase the number of new users? Increase revenue per user? Or increase in-app engagement? Each of these goals is distinct, with stark implications not only for the tests you run but also for the insights you’re likely to glean. Goals need to be clearly defined and consistently tracked from one test to the next.
2) Not building tests from hypotheses
A test is only as good as the hypothesis from which it was framed. Data generated by product concept experiments can be overwhelming if there is no pre-identified direction to validate or invalidate. We write a lot about validating concepts before you start coding, but to ensure your experiments can even plausibly enable validation, you need to develop tests from specific hypotheses.
Follow the scientific method as closely as possible for all experiments and tests you run. Basically: ask questions, observe, construct hypotheses, test, then analyze. Each of these steps is critical for product concept testing and validation, but by ignoring hypothesis development, you’re dooming your data from the start.
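The ask–observe–hypothesize–test–analyze loop above can be sketched end to end. This is a minimal illustration, not the article's own tooling: the function name and the conversion numbers are made up, and a one-sided two-proportion z-test is just one common way to turn a product hypothesis (“variant B converts better than control A”) into an analyzable result:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """One-sided p-value for the hypothesis 'B's conversion rate exceeds A's'."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - NormalDist().cdf(z)

# Hypothesis (illustrative): "the redesigned signup flow (B) out-converts
# the current one (A)". Numbers are made up: 120/1000 vs 150/1000 conversions.
p_value = two_proportion_z_test(120, 1000, 150, 1000)
print(f"p = {p_value:.3f}")  # reject the null at alpha = 0.05 only if p < 0.05
```

The point is the framing: the hypothesis and the decision rule (alpha) are fixed before the data come in, so the analysis step is interpretation rather than fishing.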
3) Not questioning or challenging assumptions
“We know millennials love video.” How do you know that? How do you know they don’t prefer photos or music? Millennials actually have very complex media preferences, according to our research. Invalid assumptions can severely dilute the significance of test results and set you on the wrong track early on.
Define your assumptions up front and consider how you might verify them. Assumptions are a necessary part of any test, but the fewer you take for granted, the more insightful your conclusions will be. It’s always a healthy practice to question everything and challenge what is taken for granted.
4) Relying on self-reported data and not substantiating through observation
We recently ran a survey to understand how people choose a restaurant. We surveyed 1,000 people, making our data statistically significant at the 99% confidence level with a margin of error of +/- 5 points. The results were clear: location, prices, and the menu are by far the most important factors when choosing a restaurant. So what happened when we put that data to the test and ran user testing sessions? Customer reviews were, by far, the most important factor. Users went to that information before any other and relied on other users’ feedback as the deciding factor. Watch what users do, and take what they say with a grain of salt.
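As a sanity check on survey figures like these, the margin of error for a reported proportion can be computed directly. This is the standard worst-case (p = 0.5) formula for a simple random sample, not code from the article, and the function name is ours:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, p=0.5, confidence=0.99):
    """Worst-case margin of error for a proportion from a simple random sample."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return z * sqrt(p * (1 - p) / n)

# 1,000 respondents at 99% confidence: roughly +/- 4.1 points,
# comfortably inside the +/- 5 interval quoted above.
print(f"+/- {margin_of_error(1000) * 100:.1f} points")
```

Note that precision shrinks only with the square root of sample size: quadrupling the sample merely halves the margin of error.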
5) Not segmenting your data or identifying your target demographic
People love cat videos. The data are quite clear on this. So should a men’s health magazine drop everything and start exclusively developing cat videos? Of course not. But if you’re not segmenting your data or targeting the demographic in question, you could plausibly draw exactly that conclusion.
Though this is obviously a hyperbolic example, averaging across a general population can obscure true understanding and leave the most valuable insights undiscovered. We typically look for outliers: where is the segment whose opportunity far exceeds the others? That is where we focus our attention.
6) Not revalidating or testing over time
“This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.” – Homer Simpson
Seasonality and cyclicality are well-known business concepts. Far too often, however, when running tests and experiments it is easy to lose sight of how results might change with time. If you test the level of interest in pumpkin-flavored beverages, you’ll likely see a significant out-performer in autumn. Run that test again in June, however, and you’ll probably see far less fervor around the idea.
Tests and experiments are run against panels of people, and people’s attitudes, perspectives, and opinions change considerably over time: not only from season to season, but from day to day and even hour to hour. This is just one of many reasons why testing needs to be a continual process, not a finite, project-based one.
7) Focusing on winning, not learning
Testing is a method for understanding customers and users. The only reason we test is so that we can learn about our target segment and get a better understanding of who they are, what they care about, and how to most effectively engage and satisfy them. Oftentimes, not finding a ‘winning’ concept can be perceived as a failure. But as long as your experiments provide greater understanding of your target market, every test is a success.
So instead of framing experiments as a search for winners, think of them as a search for understanding. By refocusing your approach on learning, you’ll be more successful not only in the experiments you run, but also in the products you develop.