SXSW 2011 – Stop Listening to Your Customers
Nate Bolt and Mark Trammell
Stop listening to your customers? Why, that’s CRAZY talk to a guy like me with a marketing and usability background! But if Nate says there’s something to it, then so be it (he’s been known to be right sometimes; just kidding Nate, you rock). I know Nate rocks because I read his book, Remote Research, and wrote a brief book review about it. Let’s find out why it doesn’t make sense to listen to our customers, shall we?
Nate starts by asking the audience: how do people get information from their customers? Various answers come back, including focus groups, research labs, and surveys, all the usual suspects.
Nate says, in 1995 there was the Decision Theory & Adaptive Systems Group at Microsoft. Folks there were playing around with such interesting concepts as Bayesian methods and animated pedagogical agents; the idea was to infer when users need help, and offer it (seems easy to me, what could possibly go wrong?).
So, they conducted lots of research by asking people, “Hey, pretend you were in trouble. Would you like an animated helper to pop up and give you some assistance?” And guess what, people nodded their heads and said, “yeeesss, of COURSE I would like that!”
All that work led to potentially the most hated computer feature on earth: “Clippy,” the animated Microsoft paperclip helper. What went wrong? They asked people, “would a friendly character help you?” People said yes, so why didn’t it work?
Nate says it’s because it turns out that asking customers what they think is a bad way to listen to them. The second “aha” key learning: they didn’t have an actual tool or working prototype to test.
Nate says, here are two bad ways to listen to customers:
1) Ask them what they think
2) Provide a false premise
Mark says, at Twitter, we do research by observing users, with the goal of ultimately removing friction. Everything at Twitter is about watching how people use it, then changing the interface to improve it. Mark gives an example: trying to figure out a better way to handle retweets. They observed users having difficulty with the concept, and tested various concepts to see if they could reduce that friction point.
Same issue for making groups: people needed different identities for different needs like sports, work, etc. They observed this, and created ways to classify user accounts so users could group them.
Another interesting function that came from observing users arose when Atlanta had gasoline shortages. They observed users tweeting locations of gas using hashtags, for example #atlgas. The researchers watched users solve the problem of categorizing and bookmarking tweets with the hashtag. Twitter didn’t invent that; the users did.
Mark continues: addressing someone on Twitter by putting the @ sign in front of their handle also came from users doing it on their own. Again, Twitter didn’t create that; the users invented it.
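To make the convention concrete, here’s a minimal sketch of what users were effectively doing with hashtags and @ mentions: grouping tweets under ad-hoc tags like #atlgas. The regexes are a deliberate simplification for illustration; real tweet parsing (as Twitter later formalized it) handles far more edge cases, and the sample tweets are made up.

```python
import re
from collections import defaultdict

# Simplified patterns for the user-invented conventions described above.
# These are assumptions for illustration only; production parsing is
# much more involved.
HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")

def index_by_hashtag(tweets):
    """Group tweets under each hashtag they contain, #atlgas-style."""
    index = defaultdict(list)
    for tweet in tweets:
        for tag in HASHTAG.findall(tweet):
            index[tag.lower()].append(tweet)
    return dict(index)

# Hypothetical sample tweets modeled on the Atlanta gas shortage story.
tweets = [
    "Gas at the Shell on Peachtree #atlgas",
    "Long lines but pumps working downtown #atlgas",
    "@jack any update on the shortage?",
]
print(index_by_hashtag(tweets)["atlgas"])   # the two #atlgas tweets
print(MENTION.findall(tweets[2]))           # the mentioned handle
```

The point of the sketch is that the categorization scheme lives entirely in user behavior; the system only has to notice the pattern and support it.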
So, Mark summarizes, these are examples of good research: looking at current behavior and observing users interacting with systems.
Nate: But this is not monitoring metrics, this is watching behavior, and there is no easy way to monitor behavior. He refers to Malcolm Gladwell and research from the 80s. Nate mentions Howard Moskowitz, who had a client, Ragu. Ragu was losing sales of their spaghetti sauce and wanted to know what to do about it. Which sauce among 40+ potential sauces do people like? They had used focus groups, but Moskowitz said don’t ask in focus groups: actually rent a hall, get hungry people, feed them, and watch them eat. Each person got 10 bowls drawn from the 40+ variations of Ragu. The findings? Moskowitz said there IS no one favorite. It turns out people mostly like one of three main types: plain, spicy, or extra chunky (and extra chunky was something Ragu had never heard of). So Ragu went off and made extra chunky, it took off, and they concentrated on those three main flavor types to great success.
The big idea: we used to ask people what they want; now we watch them.
Three things to learn from this: user research needs to be…
1. In the participant’s natural environment
2. In the participant’s timeline
3. Behavioral in focus
Nate then cracks the audience up by showing a video cartoon making fun of people doing intercepts in malls. Yeah, heh, mall intercepts are so 1980s.
Mark: If you are creating a survey, don’t bore people with it. Nobody cares about demographic information, so don’t collect it; age, race, and gender don’t matter here. Ask broad open-ended questions first, then use those answers to build a quick multiple-choice survey. The assumptions in your survey may be false, so base them on the broad open-ended answers.
Nate talks about “The Truth About Download Time.” A study looked at whether Amazon users felt the site was fast. Even though it wasn’t the fastest site, participants said it was. They confused the experience of shopping on the site with speed; that’s a key finding on why surveys are unreliable.
Mark: At Twitter, we do sprints to a functional prototype. We identify the quickest way to build something, then start testing with that. It could be a static site, wireframes, a paper prototype, index cards, or card sorting, if those answer the questions. But use real data: Twitter builds functional prototypes that work with actual user behavioral information. Then we iterate, testing over and over in quick rounds.
Nate: How many people have heard of screen blocks? He shows Sifteo, interactive blocks that were the subject of a recent TED talk. They are blocks that sense other blocks around them; they are aware of each other. This started as research at MIT. The researchers knew they couldn’t use plain wood blocks when conducting the research; they needed real data, not false data. So they used working blocks with wires hanging off them, which got them away from “what if we did this?” questions, to which people might say “sure, we’d use it” without ever actually using it.
Mark: The new version of Twitter started as a prototype; we observed and iterated as we watched users interact with it. We emphasize and focus on the moment, and we try to give our users as simple a user experience as possible.
Mark recommends testing using four simple steps, much of which came from the book he HIGHLY recommends: “Observing the User Experience”
1. Define the audience and their goals. Who’s using it and what’s their motivation?
2. Create tasks that address those goals.
3. Get the right people. This is critical. And don’t give them false motivation.
4. Watch them try to perform the tasks.
Simple, but highly effective!
Mark shows a pic of their usability testing setup. It’s a small table with three computers: a screen for the user, a screen for him as moderator (he uses Silverback, by the way), and a screen for an observer. A little webcam streams video and audio back to a big room of developers and designers, even projected on a giant screen so everyone can see the user going through the usability test.
It’s a very fast way to test and iterate. Everyone can see what’s broken, instead of waiting for a report a few days later, and people can start brainstorming fixes immediately.
He adds, we did our testing with four people; we knew the problems were so large that we could find them with only four. We tested one day, then built the next. We did this for 60 hours, with friends and family. We were moving really quickly, and we could try really crazy stuff and not worry about it.
Was he worried about testing being biased because of using friends and family? No, he says, I only care if they have biases that I don’t know about. With friends and family at least I know their biases.
Nate: People lie less at home. Some researchers feel labs are better, but in the past few years a bazillion new tools have come out. Can these tools make us better researchers? Yes, but you have to pick and choose; never use just one tool by itself. We set up with a participant elsewhere using GoToMeeting, and get every one of the observers, designers and developers, in the room in person. It’s way more valuable for the testing participant to be at home, but everyone else in the room. Designers and developers should be beating down the door to attend these sessions.
For rdio.com, we talked to five people by intercepting them live on the rdio homepage using Ethnio. We did the same testing with Usabilla, using click tracking to map clicks and having users enter why they clicked there. We then ran 10 users through UserTesting.com. Cost was minimal.
- GoToMeeting: 5 users
- UserTesting: 10 users
- Usabilla: 50 users
- Ethnio: 468 recruits from home page
We did all this in one day, using guerrilla methods.
For mobile research, Nate says, we just followed people in homes and stores with webcams, streaming live. We take two webcams, one for streaming and one for recording. Design folks could chat with us live, so we could follow up with questions. We did a test of QR codes in stores and learned that the plastic wrapping covering the QR codes was stopping people from scanning them.
The big myth is: geniuses have genius ideas that turn into genius products. That’s not true; anyone can create genius products if they watch their users and iterate often.
- Great ideas come from other great ideas.
- You need to get to the motivation to understand the why of behavior.
- Imaginative research facilitates imaginative products.
Nate says to expand the notion of what research is.
Nate finishes by saying: being time-aware is the overlooked part of research. Be on the participant’s timeline; do not force them to be on yours.
And they head into questions, and I introduce myself to Nate in person and then run off to my next session!