Three reasons why crowdsourcing fails (and how to fix them)
Crowdsourcing failures can cause poor usability, weak conversion, and meager ROI. Here are the three main causes, and how to fix them.
Recently, several clients have come to me with problems that all derived from the same source: crowdsourced usability testing.
If you’ve not heard of “crowdsourced usability testing,” my definition is:
“Crowdsourced usability testing is a technique of gathering user feedback in which website owners ask a population for input on a website or application.”
Several of my clients had major usability and task-flow errors on critical elements of their sites, and the root cause turned out to be bad feedback gathered from crowdsourced usability tests.
Now don’t get me wrong: crowdsourcing, if used properly, can provide wonderful benefits. Books like “The Wisdom of Crowds” or articles about “Crowdsourcing” can be helpful references on the benefits of correctly applied crowdsourced techniques.
3 Primary Reasons Why Crowdsourcing Fails:
When it comes to usability testing, however, crowdsourced techniques often fail, for three major reasons:
- The Crowd Does Not Reflect Typical Users – A major failure of crowdsourced usability testing happens when the researcher gathers feedback from users who do not reflect the typical user of the website. For example, gathering feedback from a general population will skew results if your website or application is designed for seniors (those age 65 and older). That’s because what may be easy or readily understood by younger audiences may not be easy or understood by older audiences. It’s important to remember that every website and application has a ‘typical’ user (another word for this is Persona) whom you MUST understand and design the experience for. Gathering information from people who don’t reflect your Persona is a major way to introduce usability errors into your system.
- People Don’t Do What They Say They Do – Focus groups and surveys reveal over and over again that what people SAY they do is often not what they ACTUALLY do. Anyone who has studied famous focus group failures like the design of the Edsel or the launch of New Coke will understand that beliefs and attitudes don’t always reflect actions. Asking for opinions about designs and usability issues captures exactly that: opinions. There’s nothing wrong with gathering opinions, as long as that set of opinions is then validated with actual performance-based usability testing. In my experience, it’s the second half of that statement (the task-based usability testing) that is typically missing, and that’s what causes problems for those who use crowdsourcing for usability optimization.
- Biased Questions Can Bias Results – Subtle differences in how a question is asked, and what sort of responses (scales or other mechanisms) are used to capture results, can GREATLY impact the viability of crowdsourced usability testing. Changing just a few words in a survey question can have a major impact, potentially skewing results. Surveys and questionnaires are notoriously difficult to get right, and in this case ‘right’ means non-biased. When I evaluate a crowdsourced usability project to learn why it failed, I quite often find questions worded in a way that introduces bias into the results. Users of the SurveyMonkey Question Bank will note that there are ‘official’ approved questions for gathering feedback, and that the second you start changing the wording of a question, an alert lets you know you are in danger of skewing results by unwittingly introducing bias into the question.
Crowdsourcing Can be Good or Evil
Crowdsourcing is the Dr. Jekyll and Mr. Hyde of usability testing.
If applied correctly, crowdsourcing can be a powerful way to gather information for design decisions. But if used incorrectly, it can introduce biased information that hurts usability, reduces conversion, and causes website owners needless ROI loss.