Kris Angell 17:09
[MVP Research Ops] So the first big question: what does it look like? What is an MVP, and how do we get started? I'm going to open this up to the chat: who's got experience setting up an MVP?
Kris Angell 17:51
Philip, was this Airtable an MVP for you?
Yeah, definitely. The Airtable and the template and all that are just something we put together and said, hey, let's try it. I don't want to keep talking about what should be done; I'd rather just put it out there, watch the designers who are actually going to use it, and take their reaction to it. Yeah.
Kris Angell 18:18
So then, what's your plan after this, after these three initial projects? How are you going to assess whether or not it worked?
Well, I'm going to keep getting feedback from designers as they go through this, to hear whether the original pain points I mentioned, that it's hard to get started and the experience is too fragmented, are at least solved by this new template. That would be a start, and then we can see whether it can be improved upon, or whether it's still the right direction.
Kris Angell 18:57
I see that Janet's got a question here: where does everyone land on the benefits of democratizing research into more of a DIY practice across the organization? She's got an example from Molly Stevens, who at the UXR conference admitted to making a huge mistake in the past by encouraging this to happen, which she now regrets, as the quality of research approaches and outputs has declined and practices are all over the place.
Kris Angell 19:30
That ties in with both this MVP and a future question on self-service research and what the best practices are. So I'm curious, Philip: in the prior version of your research platform, what was the quality of the research coming through your team?
Yeah, from our perspective, the problem was the quality of the requests. The tasks were very leading, very biased. The approach they were asking for wasn't exactly right; when I looked at their designs, the request didn't make sense with what the design actually showed. Little things like that made us realize we need to be there at that beginning step, to understand what's going on so we can help right there. That's my experience of it, at least. Okay.
Kris Angell 20:40
And, Janet, can you speak to what you’re hearing from Molly?
Yes, sure. Hi, everyone. I think what was interesting was her point that you would never have a software engineer expect everybody else in the company to start coding and doing their job; that expertise takes years of experience gained over time. Similarly with designers, who are among the biggest users of research: one of her points was that designers are almost frustrated. A lot of them would rather be researchers; they get bored with being designers. So they're creeping into researchers' territory and assuming they can just do it. Because isn't it just having a conversation? Isn't it easy to be a researcher?
It's interesting listening to Philip, because in a way that's the pleasurable bit, the nice bit, where you actually get to have the conversations. But it's all the pre-planning and the pre-design, and being able to scale it and keep it consistent and repeatable. Her point was that we as researchers are just letting it happen, and that devalues the power of the expertise and experience that takes time to build up, all the different skills and competencies required to be a really good researcher or research team. And she literally led with the mea culpa: "I realize I made a huge mistake."
And Philip, I think you made the point in the chat that it's really important to do the research components together: the design, the planning, the actual research. The researcher needs to be part of that; you can't just hand that part over and expect other people to get it right. In a way, undoing the mess people create can be more time-consuming. And once they've gone to the trouble of doing it, with all the effort required, they'll trust whatever their results are and live by them, because they own them; they did all this work. But how do you know what you don't know? How do you know whether to trust the quality, when you don't have exposure to the breadth of best practices out there, or whether what you've learned or created is something you should trust and follow? And you're going to follow it anyway, because you've put all this effort and time into it.
So it's been sitting in the back of my mind for the last three weeks. Every time I see these threads, I worry more and more about how you avoid letting go when everybody is under so much pressure to service so many people. If you prove that you're successful and helpful, everybody wants more of you, and very quickly you can't service everybody. So that's the difficult balance everybody's trying to manage, and I totally get that. I love that Philip is beginning to systemize it and trying to control it. And I was actually wondering if that's why usertesting.com has shifted to this new model, as a way of trying to prevent this terrible letting go of it all. Short term it's more pain, but maybe ultimately it will claw back some respect for the role of the researcher.
Kris Angell 24:13
It looks like Carly has something to say regarding this. Carly?
Sure, I'm happy to jump in. I'm not a user researcher; I do research ops, but a specific kind of research ops, because I work mostly with musicians and other music industry participants, who are different from consumers. And there's just been a need to talk to more of our customers.
But something that my research team and I have spent a lot of time talking about is that we obviously haven't wanted to gate this. We've wanted to educate designers and other members of our product team on how to do research, but we've really tried to do a couple of things. One is to make a process out of it. The other is to keep it very lightweight, and by lightweight, I mean usability testing.
One of the things I find in tech companies is that we have a lot of very talented people who want to do foundational work, or research work that is best left to researchers, and that's where I can see some of the friction happening. This goes back to the same point: we're not going to try to write an engineer's code for them. The answer on my side, and I've spent literally the past year, year and a half interviewing people, rolling out a process, and testing it, is that you have to make it a partnership. Have the user researchers actually partner with designers and make it a team project, where you have education, you have feedback, and you're really willing to iterate on self-service. I want self-service to succeed, because we all have this issue that there's too much research to be done. But I also understand there's no point in doing bad research; don't put good resources into doing bad research. I'm cognizant of that.
Kris Angell 26:24
Okay. So, Carly, are you actually pairing your project teams with a researcher for usability studies, or for foundational research?
For usability.
Kris Angell 26:38
And so that helps increase the quality of their output? How do you then identify that your team is asking for more foundational research? And how do you qualify what type of research they need, and how you're going to plug that into your process and timeline?
Well, on the foundational side, I think those requests sit more with the researchers themselves, so it's their choice to decide whether something's important. But what we're saying is that there are some other things that are maybe not as high a priority for the user researcher. There, we're willing to say: okay, we're going to educate you and equip you with the resources to do the lightweight work, and my team is going to help make it happen in terms of production quality. The reality is that the participants we're working with are not consumers, so recruiting is much more difficult.
Kris Angell 27:46
I see that GKim and Philip are talking a lot about velocity and getting quality work out, as well as the different user types who think they can do their own research. GKim, do you want to give us a little background on what you're seeing, and where the questions or challenges are coming from?
Oh, yeah. I think Philip mentioned earlier that when he set up self-service, designers weren't biting as much as he'd hoped. I actually had the opposite problem: I have many designers and PMs coming to me, really keen on doing their own research. To the point where I had a designer who wanted to do a diary study on her own. That made me a little nervous. It's great that there's a lot of interest in research, but it was a little concerning; it felt like a bit of a devaluing of what research has to offer in terms of expertise.
But I've been digging around a little with the designers who are doing their own research. Usually some of the motivation behind wanting to jump ahead and do research on their own is that they feel things are moving slowly and they just want to do something. So I think there is some sense of frustration. Our research team's turnaround is actually really quick; we turn projects around very fast. But I don't think it's very clear to them how quickly we can turn around research, or how long it takes.
So this may come down to a combination of things. We may need to focus more on education about research. Maybe it's also being very specific about timelines, down to, as I see Philip mention here, a 15 to 30 minute consultation: being very specific about exactly how much time different types of research requests take. That's something we're having issues with here. We set up a ticketing process as one way to start a conversation with us, and very few people have actually been submitting tickets. When I dug around to figure out why, a lot of the time they felt like it was just going into a black hole, because they weren't aware of what happens after they submit a ticket, or what to expect. So maybe it's about being clearer about what they can expect: more specific timelines for specific types of projects or requests. That could be one thing we experiment with.
Kris Angell 31:16
It looks like Carly and Philip both responded about adding timelines and how that has effectively supported their teams. I wanted to pull us back to something Carly mentioned earlier that CeCe commented on here: that if you're bringing your teams into research, placing your product and project teams and designers in with a researcher, they train each other and help make sure the quality is good, so that when you do give them more free rein, they'll have more ability there. CeCe, did you want to comment on what you've done?
Unknown Speaker 32:08
[mic issues]
Kris Angell 32:15
Philip, can you speak to that? What do you mean by these 15 to 30 minute consults and the session for building the test together?
Yeah, so the question originally was about the velocity of research and how [designers] feel they might be bogged down by the research process. This is an effort to say: okay, here's the time commitment we'll need, and here are the steps. The consultation and the building session are two things we do synchronously, and this is for remote unmoderated testing. We'll do this part synchronously the first time or two, so that you get trained in how we do it. Later on, if the designer is pretty good with the process, there's no reason we shouldn't be able to just give them credentials to the test account and let them create those tests asynchronously. But at least the first part can be synchronous. We tell them: okay, we'll have one session, about a 30 minute call, on what it is you want to test and how we can do that. On the next call, we actually build that test together, so they can really see how what they write gets manifested to users, at least in a preview. I know it helps me, when I build tests, to iterate as I see the preview. So I feel that once they've gone through that enough, they'll be able to do it on their own later, asynchronously.
Hi, my name is Lindsey. I'm sorry for butting in here, not using chat, and making tortillas at the same time as listening to all you fine people talk. I just wanted to mention, and I see my colleague Na Me is on here too, and I don't know if she has insights to add, but when I was at eBay, we actually did have a full program, exactly what we're all talking about here, and it was pretty robust. There was a training you went through. I was only tangentially involved, so I don't know that much about the mechanics of how it all worked. But I wanted to share that even when you train people, and even with a committee of people in charge of making sure studies were well rounded, had good approaches, and avoided leading questions, that kind of thing, even trying to control for quality there, it ended up not working that well. The program got shut down, because what tended to happen is people cherry-picked their results. As designers, I think it's really hard not to have ideas in your head about what you think you hear people saying, versus taking a more objective perspective.
So I just wanted to share that I've seen this kind of program run before, and it does run into all the challenges that have been brought up. I will ask one of my colleagues who was running it; she's not on the call today, and I don't know whether that's on purpose, whether she doesn't want to talk about that experience. I'll ask her if she wouldn't mind connecting with you all to share her experience at some point, because she may have missed the original invites. I know that was a lot at once, but I'm making tortillas at the same time.
I'm hungry. Thank you, Kris. I'm sorry, I finally got my microphone working.
Kris Angell 36:11
Go ahead, CeCe.
Yeah. I think this is so interesting. It's obvious we've got this range of large companies that are able to scale and drive research at a pretty large scale, and I'm sure there are a lot of people working on their own with clients. I'm sort of in between, working with a very engineering-driven company. We've been fighting hard to bring research into the company. And there is definitely, amongst clients as well as our own team, actually more so within our own internal team, a belief that research is just, call it, swanning about, wandering around, looking around: "okay, I know what to do." So it's been such a hard road to get research to be part of projects.
We focused on scaling by delivering good quality from the beginning. One of our principles was that we have to deliver a good product, good output, or else, if we let it deteriorate by disseminating it too broadly, it'll be diluted and people won't believe in it. So we had to really focus on that first, which worked great. We said, okay, let's go small and good quality.
As we continue to grow, we're now pulling in more people, because they've seen it and they're excited. They become part of what we call our extended team, and they're excited and they learn. We do have some structure and tools in place, but we don't set people loose. We've done that a couple of times, just because we were stressed for time, and then it starts to do exactly what we're worried about, which is make people not believe in research, because the output wasn't what it should be.
It's just one of those questions: how do you grow? How do you expand while being inclusive and bringing people along for the ride, always as much as we possibly can? And what we've found is that our cherry-picking is picking the people we think are actually going to honor what research should do and how it should be run, and then we put the effort in with them.
Kris Angell 38:15
That also speaks to what Lindsey was just saying about the challenge of people being in the room, hearing something that clicks with what they were hoping to hear, and not catching anything else that was said. So you're honing in on those individuals who are objective, who are able to capture the good, the bad, and the latent within a conversation. It sounds like you're able to train them up so they can be advocates and champions for this research process.
Yeah, this is Na. As Lindsey mentioned, we were coworkers at eBay, and right now I'm actually a coworker of Paul's. Such a wonderful community that connects everyone together. I have a few thoughts to share, because yesterday our team just did a user testing day, so we went through all the challenges many of you mentioned. I think it's very important to identify, and make sure you're really cognizant of, what you're trying to achieve with the program.
For example, at our current company, which is a late-stage startup, the user experience, or UX research, maturity is actually relatively low, if I'm honest with you. So the whole purpose of the program is to advocate for user research. That's why we set it up with me almost hand-holding our stakeholders all the way: from recruiting criteria for participants, to the research questions they have, to the interview guide. I actually invested a lot of time, just to make sure we still meet a certain bar of research rigor in the results and findings. So far it's worked out for us.
But I totally understand what Lindsey mentioned. When I was at eBay, we were a very mature company; we had research on almost every single team. For a big, UX-mature company, having stakeholders do user research, especially when they're just looking for confirmation of their own design or their own product, there could be certain caveats that this kind of program faces. So I just wanted to chime in with that quick thought.
Kris Angell 41:05
I'm seeing Whitney's comment: research ends up being a practice of confirmation bias if people don't understand the nature of research. Whitney, how are you experiencing research within your own organization?
Unknown Speaker 41:46
[mic issues]