If you’ve read my blog for a while, you’ll know that I’ve done a lot of user testing over the course of my PhD (I think I’m at something like six different experiments, and some were run more than once). And I have very much a love-hate relationship with user testing, one that often veers towards hate-hate.
Our startup has also done some user testing, which was a very different and far more relaxed experience. We didn’t have to worry about the whole ethics approval process (although ethics was still very much on our minds as we decided what to do). We didn’t have to worry about having control groups, or whether participants were going to try to ‘cheat’ the system. We didn’t have to worry about all the statistics that would rear their heads when it was time to analyze the data. Most of this was because we were just looking for feedback (mostly qualitative) on the user interface and experience, so we weren’t collecting a lot of “hard” data to analyze later (although we should run one of those experiments at some point).
The other week, I had an opportunity to participate in user testing for another company. And it reminded me how useful all that grad school experience running user tests is. Because this test was a disaster. To start off, they couldn’t even manage to correctly contact the participants they had selected. I was asked to confirm that I could attend at a specific location at a specific time, but they neglected to say which day (which was incredibly important, since they were running the experiment three days in a row, at the same time, in the same place). They also never mentioned that you needed to bring a smart device. My friend was told to come on “Tuesday the 26th” (the 26th was a Wednesday), and it turned out they had actually scheduled her for Thursday.
The person who tried to explain the whole point of the session was so nervous that she pretty much read from a paper. By the end of her short talk, no one had any clue what the company was trying to do, what the purpose of the app we were testing was, or who their target market was. They tossed around a lot of domain-specific keywords that I’m sure meant a lot to them, but meant nothing to any of us.
When it came time for “testing”, they had us all log in and told us they wanted to see a) whether we could figure out how to do four tasks, and b) how long it would take us. They then stated the four tasks out loud, but didn’t write them down or hand them out. Which meant the next 20+ minutes involved people continually asking, “What was the next task?” They wanted us to explore the app, but when we found information within it, there was an embarrassed “you saw that?” moment – um, yeah, it’s not hidden, and you said to explore. The app itself was at most an alpha version, and barely worked. You’d end up on a loading page that just said “loading” in the center with no further information, leaving you to wonder whether it was broken.
Finally, they had us fill out a survey. But instead of just letting us fill it out, they wanted us to both record our answers on paper and share them out loud. Yikes. When you don’t have positive things to say, the last thing you want to do is bash a group of people in front of them, to their face. It’s so much easier on paper. Eventually they decided to just let us finish on paper.
I did get a gift card out of participating. And while it was a frustrating experience, I’d probably do it again, just because I know how important and useful that feedback is, even if it’s not what you want to hear. (And because I’m naturally judgmental, critical, and honest, I’d probably do it for free.) It also made me feel a hundred times better about the alpha tests we did with our startup, and the experiments I’ve run for my research. It’s really true – they could’ve gone a whole lot worse.