In an oddly furnished room in their office, I sat down recently with Chrissie Brodigan, design and UX researcher at GitHub, to talk about how designers and developers can measure which features are most vital by removing them and seeing how upset their users get.
Can you tell us about deprivation studies in theory?
In theory, what you do first is give your users or your customers something new to play with. They get familiar with it and they start to develop patterns around it. You learn how those patterns are working and, if it is an interface change, what users are experiencing in the new interface. How is the usability? What do they like? What don’t they like?
How are you using feature deprivation here?
We take it over the course of a few days, because on day one, change is always hard. But after a few days, users start to get used to their new surroundings, and then you take those new surroundings away from them. That last day is what we consider the actual deprivation study: you put the old thing that they were used to back in front of them. Then you measure the emotion around those days of change. Are they disappointed to have the old thing back? Do they miss the new thing?
What questions are you asking yourself when you design these studies?
We want to know what we can learn about that experience, and about the trajectory of the user over this new thing we introduced. Maybe the feedback goes from "It was hard to use at first," or "I was frustrated," to "I got adjusted to it; it was not so shocking," to "Oh man! Now it’s gone." What is the pattern that develops there?
How long have you been testing this way?
This is a brand-new program. Our cameras just arrived a couple of weeks ago, so we are going to be doing some of that testing where we are not only capturing what is on screen, but also capturing what’s going on with facial expressions.
How do you do deprivation testing without a fancy camera setup?
Through a technique called a diary study, where you roll out the new feature and you create a diary. We used our own tools to do this, which turned out to be a really interesting way to do it: I just created one repo and the entire group participated in that workflow. Right away the designer who is working on the product was able to keep up with it in real time as well, which made it fascinating. Usually, when you do a diary study, you have to wait for the person to bring it back to you at the end. This way, we got to gauge the response over a few days, so that was really fun for us.
Do you test on employees or do you bring in people from the world?
We always pilot internally first. [GitHub] has had some real success building for what we need, and our pilot studies let us keep up with that. But you also want to go out there into the world and actually get that contact with people. What we've learned from that is, as GitHub has gotten bigger, we have all of these new types of users, like people who are not coders, for instance. We have them represented in our own workflow here, so we are building for ourselves, but we are also changing internally, and this makes the internal testing really fun and complicated. I would say our empathy levels are increasing.
Where did you learn the concept of a deprivation study?
From Mozilla, actually. I came to GitHub from Mozilla, and that was the first time I had ever done one. I almost feel like we were raised that way, right? Our parents give us toys and we enjoy them, and then something happens and they take them away. Deprivation studies actually happen a lot in real life, right? People are always experimenting with things. You might try, let’s say, a new type of olive oil, and then you run out of it. You might have some other olive oil in your house, but you’re really disappointed, because you miss that new olive oil. You develop this sensation of "Wow, I really liked that thing; that thing that I enjoyed." It's the same feeling when you break your phone, right?
What's the underlying principle here?
When you do not have access to the thing that you need, you end up learning a lot about the habits people form and the emotional response around them. Something great about deprivation studies is when you hear from the user, "I did not really miss the thing that you gave me. It did not really matter either way." It might sound disappointing, but maybe it's a good thing. Maybe we were trying to be subtle in the change we were making, and it was too subtle? Or maybe people didn’t notice that anything changed. Maybe the change is not as provocative as we had thought it would be. People, in their diary studies, would write things like, "I’m so sorry, but I didn’t really notice anything." That is great feedback.
How long do you usually do these studies for?
With diary studies, I try to run them for three to four days: three days for the user to develop a pattern around something, with the fourth day being the deprivation. The fifth day of the week is when I compile all the data into a report that we share.
Does this work better for some sorts of features? What is the best use case?
The best use case is when we have a new feature or a core change to the UI and we want to get it out into users' hands. We’ve got to figure out how to include both people who are brand new to the service and veteran users of the service, so we can also gauge what a new user thinks about the thing we changed. In the future, after we launch new features or different types of UI, we would want to wait a short period and then grab people who are brand new to the service, to see, from a usability standpoint, how they do when they are just learning and not unlearning an old habit.
Is this something that is useful mostly for controls and interactive things, or can you do this with, say, branding too?
You can definitely do it with all kinds of design. One of the things that came up in this recent study is that we used different colors, and color turned out to be a real hot button. I am compiling all the responses now, and I am noticing that people universally had a strong reaction to a particular color that got used in this interface change. It’s interesting, too, because the other problem with "easier" methods of testing, like split testing or whatever, is that you just don’t get any feedback. It’s really hard, because you can never compartmentalize and test a billion things at once. You end up with a conclusion like, "People don’t like this, but I don’t know why." In this case, it is, "Actually, the design is way better, but that color is too strong." Then at least you hear it in that context.
Are there other tips or tricks that people should know when trying this form of testing?
Yes. I would say make sure that you give users enough time to develop a usage pattern around the thing. The study that we did was four days, but I actually launched it on a Friday, so that it was less intrusive to a Monday workflow. They had time over the weekend, but I did not count that towards the four days; it let them settle in and experience it without time pressure. It was a four-day study, but we built in seven days, so that they could take their time and did not have to get stressed out about it. But do not let it go on too long: three days, in my experience, is enough for a pattern. Stay out of the way during those days. When users are giving you feedback in daily diaries, do not get hung up on day one; look forward to what the story is going to tell you at the end and just enjoy the ride. The deprivation part is the gift, the exciting part of this study. Just get excited about what it is going to be like after that.
[Image: Flickr user whyamiKeenan]