Research Ops: Scaling UXR

Event Recap, 7/23/20

On July 23, 2020, a group of 40+ researchers convened to discuss managing internal research efforts at scale. The conversation began with a post by Philip in the Design User Research Forum. He opened the discussion by sharing the impetus for his forum post, what he’s done since, and his overall approach. The discussion ranged from developing a Research Ops MVP, to scaling processes and teams, to best practices. Attendees posed questions via chat, and others stepped up to answer them from their own experience.

Chat text without LinkedIn details.

Transcript (PDF), or read the transcript below.

Slides

Resources

Research Ops Transcript:

Kris Angell 4:24
Philip Begel, did you want to introduce yourself quickly?

Philip 4:47 [Intro to topic] Yeah, I’ll introduce myself. Hello, everybody. My name is Philip. I posted the question that started this topic on our forum, and I’m happy to see that it kind of blew up and people are responding well to it. Happy to see the great turnout.

Philip 5:49
Hello, everybody. I’m a UX researcher. The reason I wanted to talk about this was that we had a similar problem: we were trying to get designers to do remote, unmoderated usability testing, and we were using usertesting.com. Back in the before times, like a year ago, everybody just had a usertesting.com account and could make their own tests. Our research team at that point was pretty small anyway, so it worked out. But then usertesting.com changed their business model. They wanted to bottleneck who can make tests, so only one person could create and launch a test. That meant we had to come up with a way to systemize the process, so that people could say what they wanted in a test and someone would create it for them. We’ve had a little system going for that since last year, and I’ve been doing some work on trying to improve it. I can talk through a bit of what I’ve been doing and what I’ve learned from that. But yeah, it’s up to you how you want to run it. Thanks, Kris.

Kris Angell 7:01
Thanks, Philip. Hi, everyone. My name is Kris Angell. I’m in San Diego, where I have my own consultancy, and I’ve been running research ops for all my projects, so I have a completely different experience than Philip. The goal today is really just to get all this stuff out. We have a lot of experts in the room, and we’ve got Philip, who so kindly started this query for us. As we go, definitely introduce yourself in the chat: share a little bit about who you are and what your big questions are about research ops. I’m going to try to moderate this Q&A for us, and if you have something you can speak to as these questions come up, note it in chat and I’ll call on you to comment. We’re at 43 people right now, so it’s a really good crowd. And I’m recording this with the intent of sharing it later. I believe I mentioned earlier that I’m going to try to get it transcribed and then shared with Behzod, who will be doing a future event for us. So definitely keep an eye on the researchers forum for that information.

Kris Angell 8:40
With that, Philip: when we’re trying to figure out how to systematize user research, specifically for usertesting.com, did you end up solving the issue you had identified?

Philip 9:02
Our original approach was centralized: we had a central place where people could submit a request and describe what they wanted in the test. Then we would work with them to iterate on that, put it in usertesting.com, and send them the results and the link.

Philip 9:23
The problems with that, which I’ve been talking to designers about… the two biggest things are: first, it was hard to get started and know what to do. There was an overwhelming amount of information: what’s a good screener? What’s a good task? What kind of metrics should I be looking for? And the template said, yeah, you want tasks, here’s how you make tasks, and then it sends you somewhere else. That made for a steep learning curve. And that leads to the second big part: it was fragmented. The experience of making the template and making the request was broken into fragments. Those things together created a not-so-great experience, so designers didn’t end up using it as much as we wanted, and it relied heavily on the researchers to guide it from the beginning and throughout.

Kris Angell 10:16
And did you get any results from your question to the forum?

Philip 10:21
Yeah, I actually spoke with Behzod, and also Sarah from Coinbase, and they gave some good feedback on how they’ve been doing it and the approach they’ve taken. What I learned from them is to basically see where the researcher and designer can share ownership, who owns what. Not necessarily “this is all yours and this is all mine,” but more like the researcher playing the role of facilitator throughout the process. So rather than having a template that says, okay, you do that on your own, we can actually start and build that together, then launch it together, and the results analysis can fall within the designer’s scope. That’s how the current system I’ve been working on works. I’m testing it right now with three design teams, so we’ll see how it goes, but so far they’ve had good feedback.

Kris Angell 11:18
Mallory Macmillan has a question about your experience. What was the most important thing to ask in the template or form for research requests?

Philip 11:32
Yeah, so the template and all that stems from the design principle of progressive disclosure. Rather than showing them everything, just make it easy to follow and digestible: step one, do this; step two, do this. That made it much easier.

Philip 11:50
So step one is just the research plan: what are your goals? What are you trying to test? And is this the right approach? Because it may or may not be; maybe you need interviews or something else.

Philip 12:04
Then after that we ask: what are the screens you want to test, and is it a prototype or a live site? What’s the scenario, and what are the tasks? There’s some guided material in there to help them formulate that. Then after that, the analysis. The backbone of all this is my favorite tool, Airtable: we put it together there and have them analyze using that tool, instead of on their own, so it stays consistent across teams.
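Philip’s actual template isn’t shared in this recap, so purely as a minimal sketch: a progressive-disclosure request form along the lines he describes might be modeled like the following, where every field name, step label, and class is a hypothetical illustration rather than his Airtable setup.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical model of a progressive-disclosure research request:
# the requester only ever sees the next step, never the whole form.

@dataclass
class Step1Plan:
    goals: str                       # what are you trying to learn?
    method_check: str = "usability"  # is unmoderated testing right, or interviews?

@dataclass
class Step2Setup:
    asset_type: str                  # "prototype" or "live site"
    screens: List[str] = field(default_factory=list)
    scenario: str = ""
    tasks: List[str] = field(default_factory=list)

@dataclass
class ResearchRequest:
    plan: Optional[Step1Plan] = None
    setup: Optional[Step2Setup] = None  # hidden until step 1 is done

    def next_step(self) -> str:
        """Progressive disclosure: reveal one digestible step at a time."""
        if self.plan is None or not self.plan.goals:
            return "Step 1: state your goals and check this is the right method."
        if self.setup is None:
            return "Step 2: list the screens, scenario, and tasks."
        return "Step 3: analysis - submit findings through the shared template."
```

The point of the shape is the one Philip names: each step is small and self-contained, so the designer never faces the whole overwhelming form at once.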

Kris Angell 12:37
Within Airtable, then, do you have analysis templates for people?

Philip 12:42
Yeah, so Airtable is where it all begins. They start in Airtable and get the template from there, in whatever format they prefer: a Google Doc, a Box Note, a GitHub template, whatever they want to use, it’s the same template. It now lives where they want to work rather than in a central place, so they feel less overwhelmed by having to go somewhere outside their workspace in the first place.

Philip 13:10
So the Airtable dashboard then walks through what the steps are, and they just follow those steps. They use a form to submit the findings from the videos, and it computes all the metrics they might need from the findings they’ve inputted and gives out some cool graphs too.
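The recap doesn’t show the Airtable base itself, so purely as a sketch of the kind of computation such a findings form could feed, assuming a hypothetical export with one row per participant per task and an `outcome` column of `success`/`fail`:

```python
import csv
from collections import defaultdict
from typing import Dict

def task_success_rates(path: str) -> Dict[str, float]:
    """Success rate per task from a findings export (assumed schema:
    columns 'task' and 'outcome', one row per participant per task)."""
    tallies = defaultdict(lambda: {"success": 0, "total": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = tallies[row["task"]]
            t["total"] += 1
            if row["outcome"] == "success":
                t["success"] += 1
    return {task: t["success"] / t["total"] for task, t in tallies.items()}

# e.g. {"Find pricing": 0.8, "Start checkout": 0.4} -> ready to chart
```

Airtable can do this kind of rollup natively; the sketch just makes the arithmetic behind “computes all the metrics” concrete.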

Kris Angell 13:29
That’s awesome. Have you noticed a change within your fellow employees, the people who are using it?

Philip 13:39
Too early to tell. This is the week we started it, so we’re on the first phase, where they’re sending out the video and building the test, and then we’ll see how the Airtable process after that works.

Kris Angell 13:51
Awesome. How long did it take to set up the Airtable?

Philip 13:56
Probably since I posed that question, so a couple weeks.

Kris Angell 13:59
Thank you. Today I’ve got a slide deck to share, as well as some additional resources.

Kris Angell 14:13
Thank you, I see everybody’s been dropping their questions into chat. Thank you so much for doing that. If you have any additional questions for Philip, please add them to the chat and we’ll bring them up.

Kris Angell 14:30 [Agenda] The goal today is to share our experiences with Research Operations.

  1. We started off from Philip’s perspective.
  2. Now I want to help us understand who is in the room.
  3. Then cover the three big topics that came up during the survey.
  4. I’d like to leave it open at the end for any open questions that show up in chat.

The three big topics that came up from the survey that I sent out were:

  • How do we do an MVP of this stuff? How do we make sure that we can set it up quickly and effectively?
  • The next was about scaling, now that we’ve got our MVP in place. What does it look like to get consistency across the board? How do we accomplish research frequently, and then make sure that all of the analysis, as Philip was saying, comes into the same place, a place where people can use it and will use it?
  • The third topic was the big question around self-service user research. How closely related is it to research ops, and how different is it? Now that, as Philip was saying, usertesting.com has changed their business model, can we still have self-service research, and what does that look like?

Kris Angell 15:59 [Who is in the room] Of the 15 people who responded to the survey, it looks like the majority of us have some experience running research projects; we’re all pretty close to seasoned vets here, though based on who’s chiming in on chat it’s probably a little more fluid than that. We do have a lot of people located within very large organizations, and then it steps down from there. Research maturity within our organizations is middling: neither very new nor very mature. That’s good to know. Some of the big challenges we’re having are related to process: shortening the time to get things in place, and managing the structures and processes.

Kris Angell 17:09
[MVP Research Ops] So the first big question: what does it look like? What is an MVP, and how do we get started? I’m going to open this up to the chat: who’s got experience setting up an MVP?

Kris Angell 17:51
Philip, was this Airtable an MVP for you?

Philip 17:57
Yeah, definitely. The Airtable and the template and all that are just something we put together and said, hey, let’s try it. I don’t want to keep talking about what should be done; I’d rather just put it out there, let the designers who are actually going to use it try it, and take their reaction to it.

Kris Angell 18:18
So then, what’s your plan for after this, after these three initial projects? How are you going to assess whether or not it worked?

Philip 18:29
Well, I’m going to keep getting feedback from designers as they go through this, and hear whether the original pain points I mentioned, it being hard to get started and the experience being too fragmented, are at least solved by this new template. That would be a start, and then we’ll see if it can be improved upon, or if it’s still the right direction.

Kris Angell 18:57
I see that Janet’s got a question here: where does everyone land on the benefits of democratizing research into more of a DIY practice across the organization? She’s got an example from Molly Stevens, who, at the UXR conference, admitted to having made a huge mistake in the past by encouraging this to happen, one she now regrets, as the quality of research approaches and output has declined and practices are all over the place.

Kris Angell 19:30
That ties in with both this MVP question and a future question on self-service research and best practices. So I’m curious, Philip: in the prior version of your research platform, how was the quality of the research from your team?

Philip 19:59
Yeah, from our perspective the quality of the requests was not great. The problems I would see in requests were tasks that were very leading, very biased. The approach of what they were asking wasn’t exactly right: when I would look at their designs, the test wouldn’t make sense with what the design actually did. Little things like that made us realize we need to be there in that beginning step, just to understand what’s going on so we can help right there. That’s my experience of it, at least.

Kris Angell 20:40
And, Janet, can you speak to what you’re hearing from Molly?

Janet 20:46
Yes, sure. Hi, everyone. I think what was interesting was her point that you would never have a software engineer who expected everybody else in the company to start coding and doing their job; it’s taken years of experience and expertise gained over time. Similarly with designers, who are among the biggest users of this. One of her points was that designers are almost frustrated: a lot of them would rather be researchers, they sort of get bored with being designers. So they’re creeping into researchers’ territory and assuming they can just do it. Because isn’t it just having a conversation? Isn’t it easy to be a researcher?

Janet 21:28
So it’s interesting listening to Philip. In a way, that’s the pleasurable bit, the nice bit, where you actually get to have the conversations. But it’s all the pre-planning and the pre-design, and being able to scale it and repeat it with some consistency. Her point was that we as researchers are just sort of letting it happen, and that devalues the power of the expertise and experience that takes time to build up, all the different skills and competencies required to be a really good researcher or research team. And she literally delivered the mea culpa: “I realized I made a huge mistake.”

Janet 22:16
And Philip, I think you made the point in the chat that it’s actually really important we do the research components, the design, the planning, the actual research act, together. The researcher needs to be part of that; you can’t just hand that part over and expect other people to get it right. In a way, undoing the mess people create can be more time consuming. And they will trust it if they go to the trouble of doing it, put in all the effort required, and realize what’s involved: whatever their results are, they’re going to live by them, because they own them, they did all this work. But how do you know what you don’t know? How do you know whether to trust the quality, when you don’t have exposure to the breadth of best practices out there, as to whether your learnings, what you’ve created, are something you should trust and follow? And you’re going to follow them anyway, because you’ve put all this effort and time into it.

Janet 23:19
So it’s been living in the back of my mind for the last three weeks now. Every time I see these threads, I worry more and more about how you don’t let go when everybody is under so much pressure to service so many people. If you prove that you’re successful and helpful, then everybody wants more of you, and very quickly you can’t service everybody. That’s the difficult balance everybody’s trying to manage, and I totally get that. So I love that Philip is beginning to systemize it and try to control it. And I was actually wondering if that’s why usertesting.com have shifted to this new model, as a way of trying to prevent this terrible letting go of it all. Short term it’s more pain, but maybe ultimately it will win back some respect for the role of the researcher.

Kris Angell 24:13
It looks like Carly has something to say regarding this. Carly?

Carly 24:18
Sure, I’m happy to jump in. I’m not a user researcher; I do research ops, but a specific kind of research ops, because I work mostly with musicians and other music industry participants, who are different from consumers. And there’s just been a need to talk to more of our customers.

Carly 24:39
But something that my research team and I have spent a lot of time talking about is that we obviously haven’t wanted to gate this. We’ve wanted to educate designers and other members of our product team on how to do research, but we’ve really tried to do a couple of things. One is to make a process out of it. The other is to have it be very lightweight, and by lightweight, I mean usability work.

Carly 25:09
One of the things I find in tech companies is that we have a lot of very, very talented people who want to do a lot of foundational work, research work that is best left to researchers, and that’s where I can see some of the friction happen. This goes back to the same thing: we’re not going to try to write an engineer’s code. The answer on my side, and I’ve spent literally the past year, year and a half interviewing people, trying to roll out and test a process, is that you have to make it a partnership, where the user researchers actually partner with designers and make it a team project. You have education, you have feedback, and you’re really willing to iterate on self-service. I want self-service to succeed, because we all have this issue that there’s too much research to be done. But I also understand that there’s no point in doing bad research; don’t put good resources toward doing bad research, and I’m cognizant of that.

Kris Angell 26:24
Okay. So, Carly, are you actually pairing your project teams with a researcher for usability studies or for the foundational research?

Carly 26:35
For usability.

Kris Angell 26:38
And so that helps increase their output and the quality of the output? How do you then identify when your team is asking for more foundational research? And how do you qualify what type of research they need, and how you’re going to plug that into your process and timeline?

Carly 27:01
Well, on the foundational side, those requests sit more with the researchers themselves, so it’s their choice to decide whether something’s important. But there are some other things that are maybe not as high priority for the user researcher. There, we’re willing to say, okay, we’re going to educate you and equip you with the resources to do the lightweight work, and my team is going to help make it happen in terms of production quality. The reality is that the participants we’re working with are not consumers, so recruiting is much more difficult.

Kris Angell 27:46
I see that GKim and Philip are talking a lot about velocity and getting quality work out, as well as the different user types who think they can do their own research. GKim, do you want to give us a little background on what you’re seeing, and where the questions or challenges are coming from?

GKim 28:07
Oh, yeah. So Philip mentioned earlier that when he set up self-service, designers weren’t biting as much as he’d hoped. I actually had the opposite problem: I have many, maybe not too many, but many designers and PMs coming to me, really keen on doing their own research, to the point where I had a designer who wanted to do a diary study on her own. That made me a little nervous. It’s great that there’s a lot of interest in research, but it was a little concerning to me; it felt like it somewhat devalued what research has to offer in terms of expertise.

GKim 29:03
But I’ve been digging around a little with designers who are doing their own research. Usually some of the motivation behind wanting to jump ahead and do research on their own is feeling like things are moving slowly and they just want to do something. So there is some sense of frustration. Our research team’s turnarounds are really quick, it’s very fast the way we turn projects, but I don’t think it’s very clear to them how quickly we can turn around research, or how long it takes.

GKim 29:55
So this may come down to a combination of things: we may need to focus more on research education, and maybe also be very specific about timelines. I see Philip here mentioned a 15-to-30-minute consultation; being very specific about exactly how much time it takes for different types of research requests. That’s something we’re having issues with here. We set up a ticketing process as one of the ways you can start a conversation with us, and very few people have actually been submitting tickets. When I dug around to figure out why, a lot of the time they felt it was just going into a black hole, because they weren’t aware of what happens after they submit a ticket or what to expect. So maybe the fix is being a bit clearer about what they can expect: more specific timelines for specific types of projects or requests. That could be one thing we experiment with.

Kris Angell 31:16
It looks like Carly and Philip both responded about adding timelines and how that has effectively supported their teams. And I wanted to pull us back to something Carly mentioned earlier that CeCe commented on here: if you’re bringing your teams into research, and you’re placing your product team and designers in with a researcher, they train each other and help make sure the quality is good, so that when you do give them more free rein they’ll have more ability there. CeCe, did you want to comment on what you’ve done?

Unknown Speaker 32:08 [mic issues]

Kris Angell 32:15
Philip, can you speak to that? What do you mean by these 15-to-30-minute consults and the session for building the test together?

Philip 32:27
Yeah, so the question originally was about the velocity of research and how [designers] feel they might be bogged down by the research process. This is an effort to say: here’s the time commitment we’ll need, and here are the steps. The consultation and building session are two things we’ll do synchronously, and this is for remote unmoderated testing. We’ll do this part synchronously the first time or two, so that you get trained in how we’ll do this. Later on, if the designer is pretty good with the process, there’s no reason we shouldn’t be able to just give them credentials to the test account and have them create those tests asynchronously. But at least the first part can be synchronous. We tell them: we’ll have one session, a 30-minute call, about what it is you want to test and how we can do that. On the next call, we actually build the test together, so they can really see how what they write gets manifested to users, at least in a preview. I know seeing the preview helps me iterate when I build tests. Once they’ve gone through that enough, I feel they’ll be able to do it on their own later, asynchronously.

Lindsey 33:57
Hi, my name is Lindsey. I’m sorry for butting in here, not using chat, and making tortillas at the same time as listening to all you fine people talking. I just wanted to mention, and I see my colleague Na Me is on here too, I don’t know if she has insights to add, but when I was at eBay we actually did have a full program, exactly what we’re all talking about here, and it was pretty robust. We had a training program you went through. I was only tangentially involved, so I actually don’t know that much about the mechanics of how it all worked. But I wanted to share that even when you train people, and there was a committee in charge of helping make sure that studies were well rounded, had good approaches, no leading questions, that kind of thing, even in trying to control for quality there, it ended up not working that well. The program got shut down, because what tended to happen is people cherry-picked their results. And as designers, I think it’s really hard not to have ideas in your head about what you think you hear people saying, versus having a more objective perspective.

Lindsey 35:26
So I just wanted to share that I’ve seen this kind of program run before, and it does run into all the challenges that have been brought up. One of my colleagues was running it; she’s not on the call today, and I don’t know if that’s on purpose, maybe she doesn’t want to talk about that experience. I’ll ask her if she wouldn’t mind connecting with you all to share her experience at some point, because she may have missed the original invites. I know that was a lot at once, but I’m making tortillas at the same time.

CeCe 36:03
Now I’m hungry. Thank you, Kris. I’m sorry, I finally got my microphone working.

Kris Angell 36:11
Go ahead, CeCe.

CeCe 36:12
Yeah, I think this is so interesting. It’s obvious we’ve got a range of large companies that are able to scale and drive research at a pretty large scale, and I’m sure there are also a lot of people working on their own with clients. I’m sort of in between, working at a very engineering-driven company, so we’ve been fighting hard to bring research into the company. And there is definitely, among clients as well as our own team, actually more so within our own internal team, a belief that research is just, call it, this swanning about, wandering around, looking around, “okay, I know what to do.” So it’s been such a hard road to get research to be a part of projects.

CeCe 36:57
We focused on scaling by doing good quality work at the beginning. One of our principles was that we have to deliver a good product, good output; if we let it deteriorate by disseminating it too broadly, it’ll be diluted and people won’t believe in it. So we had to really focus on that first, which worked great. We said, okay, let’s go small and high quality.

CeCe 37:24
As we continue to grow, we’re now pulling in more people because they’ve seen it and they’re excited. They become part of what we call our extended team, and they learn. We do have some structure and tools we’ve put in place, but we don’t set people loose. We’ve done that a couple of times, just because we were stressed for time, and it starts to do exactly what we worried about: make people not believe in research, because the output wasn’t what it should be.

CeCe 37:52
It’s one of those questions: how do you grow? How do you expand while being inclusive and bringing people along for the ride? Always, as much as we possibly can. And then we do our own cherry-picking: picking the people who we think are actually going to honor what research should do and how it should be run. And we put the effort in.

Kris Angell 38:15
That also speaks to what Lindsey was just saying about the challenge of people being in the room, hearing something that clicks with what they were hoping to hear, and not catching anything else that was said. So you’re actually homing in on those individuals who are objective, who are able to capture the good, the bad, and the latent within a conversation. It sounds like you’re able to train them up so they can be advocates and champions for this research process.

Na 38:54
Yeah, this is Na. As Lindsey mentioned, we were coworkers at eBay, and right now I’m actually coworkers with Paul. Such a wonderful community that connects everyone together. I have a few thoughts to share, because yesterday our team just did a user testing day, so we went through all the challenges many of you mentioned. I think it’s very important to identify, and make sure you’re really cognizant of, what you’re trying to achieve with the program.

Na 39:27
For example, at our current company, which is a late-stage startup, the UX research maturity is actually relatively low, if I’m honest with you. So the whole purpose of the program is to advocate for user research. That’s why we set it up with me almost hand-holding our stakeholders all the way from participant recruiting criteria, to their research questions, to the interview guide. I’ve invested a lot of time, just to make sure we still meet a certain bar of research rigor in the results and findings. So far, I think it’s worked out for us.

Na 40:21
But I totally understand what Lindsey mentioned. When I was at eBay, we were a very mature company; we had research on almost every single team. For a big, UX-mature company, having stakeholders do user research, especially when they’re just looking for confirmation of their own design or their own product, there could be certain caveats a program like this faces. I just wanted to chime in with that quick thought.

Kris Angell 41:05
I’m seeing Whitney’s comment: research ends up being a practice of confirmation bias if people don’t understand the nature of research. Whitney, how are you experiencing research within your own organization?

Unknown Speaker 41:46 [Mic issues]

Kris Angell 41:57
Some of the things that came up in the survey: when you’re scaling, there’s this question of how realistic it is to expect your team to take on the research role. I think we’ve covered that quite a bit. We’ve also identified some of the pitfalls, like the quality of the data and its misinterpretation, and some of the things people have tried, such as more training and making sure researchers are always involved. Philip, you mentioned there’s a Nielsen Norman Group article. Can you add that to the chat to share with everyone?

Philip 42:41
Yeah, if I’m understanding: bias training in terms of writing questions, or diversity-and-inclusion-type bias? Because normally we’ll be talking about how you write questions, basically…

Kris Angell 43:01
I would assume that most of the bias training we’ve had has been around how to write questions and how to get your ego out of the research project, versus inclusivity and things like that. Has anyone seen any good reports or good training on how to bring more inclusivity into research projects? … I know the Forum has had quite a few questions and group discussions around this in the past; I’d encourage everyone to look through those past forum discussions. Thanks. Okay. So, some of the other things that came up: what are the barriers between allowing others to do the research and not giving away our main responsibilities?

Kris Angell 44:04
I’m curious from the group, which responsibilities are you trying to keep? And which ones are you able to or willing to give up?

Kris Angell 44:22
Janet, you mentioned something in the chat earlier about the things you need to hold on to, and how usability is easier to do self-service. Can you speak to that a little bit?

Janet 44:38
Yes, I think it comes back a little to the bias question as well. The more verbal interaction there is in the process of learning, the more likely one’s own bias is to come into the picture: the way you ask the questions, but also the way you synthesize the learning and the way you share out the learning. Those are the three stages where your own biases can really start to influence things, and somebody mentioned that earlier. So the more verbal interaction there is, the more likely that’s going to color things. At the usability end, where it’s self-guided, there’s less chance to bias the way somebody is experiencing something. You’re more likely to just take what you’re seeing; it’s a very explicit data point, with less room for interpretation and coloring what you’re seeing and learning. I think that’s where I was making the point.

Janet 45:51
There’s usability, and then there’s usage experience, what the overall usage experience of something is like, which sits somewhere halfway between user experience and usability. When you get into user experience and you’re really trying to understand the user’s broader experience of something, there’s more chance to color and bias the learning from it. That was the spectrum I was thinking about, and have witnessed as well.

Kris Angell 46:26
Jean Watanabi, you mentioned a couple of things in the chat, and I’m specifically looking at your comment just below Janet’s. You’re saying at a large company you worked at, non-researchers were discouraged from conducting research, and now you’re at a smaller company where it sounds like it’s democratized. Can you speak to the differences you see between the two models?

Jean W 47:00
Yeah, at the larger company the roles were very clearly defined, and because there were plenty of researchers, it never felt like the priority projects didn’t get the support they needed. Whereas at the company I’m at now, I’m the only researcher: the very highest priority project gets my attention, and everything else, still potentially important, isn’t necessarily getting the help it could get if I were to work on it myself.

Jean W 47:35
So I’m wondering how much of this is about pushing back, doing the education, communicating the risks, and even saying no at times. I often hear, “well, it’s better than nothing; if you don’t allow them to do it, they’ll have nothing to work with.” So I think, as other people said before about bad research and the risks involved in possibly having bad research, I need to figure out how to communicate that better, and to decline in cases where, as somebody said, it’s more on the user experience side than just usability: to push back and say, we just can’t do this, or we have to wait, or we have to come up with another way to accomplish what you need.

Kris Angell 48:35
You’re reminding me of a local company here in San Diego, ServiceNow. I believe they’ve got offices all over the world, but when they were scaling up their design team, they also scaled up the research team; they just didn’t focus on usability. Initially it was very much “let’s focus on foundational research,” which left all of the designers at a loss: we need information on the things we’re developing, but the research team couldn’t support them. So there was a big push to add individual researchers to each design team pod, and the amount of research they can do has scaled tremendously. But that also creates different challenges: how do you centralize those insights? How do you make sure you’re not duplicating efforts across these teams?

Kris Angell 49:35
And I’m curious: we’ve talked quite a bit about scaling, and we’ve talked about MVPs. What are we doing for best practices and tools? What tools have you identified as very successful, and how do you make sure everybody’s using them?

Kris Angell 50:09
Okay, so I’m seeing some things here from Paul. Paul, can you share the types of things that you’re doing and what tools are supporting your efforts?

Paul 50:25
Yeah, so we’re trying a few different things to build on this energy; there’s a lot of energy to get information from and about our users. Sometimes this looks like product managers or designers just getting in touch, calling customers and asking them questions. So there’s energy that may be happening in an undirected way, and it can be really valuable to put some shape around it and give people a bit more of a toolset to get more reliable insights to base their decision-making on. What seems to be working so far is doing some of the things you alluded to: giving people an idea of the timeline for collaboration, having this kind of meeting at this stage, a check-in to get things started, then defining, okay, we need to decide who we’re going to talk to, let’s specify some criteria before moving through the rest of the process. Other than that, we’re providing a template for how to present your results in terms of insights and themes, and working directly with folks in other parts of the company, whether it’s product management or design, to help them vet the questions they have and turn them into more user-centered questions rather than product questions or business questions. But it’s an ongoing process, and we’re still learning how to do this.

Paul 52:09
There’s a lot of energy and desire for research, but there’s only a limited amount of bandwidth the two of us have to support it. So that’s definitely not something we have the answer to, but we’ve seen some successes, and some things that could definitely go a little smoother.

Paul 52:30
But I did want to ask if anyone else has had a similar experience; I wanted to refer back to the concept of this expanded team, or extended team. I find there’s a small group of folks, designers and PMs, that I tend to work with repeatedly. They tend to be a major source of research requests and research projects. They want to be very involved in the execution; they have a high level of comfort with it and a certain amount of experience with it, more so than other folks on the team who also benefit from the research but just don’t have that same level of involvement. I wanted to see if other folks have had a similar experience, and how they use that to the advantage of the research practice’s maturity level.

Kris Angell 53:30
Paul, I’ve seen that in a couple of different places. The research team at Illumina spoke about how they incorporate other people within their disparate teams, and I think that talk should be online. The general challenge they had was, much like CeCe’s: how do you expand and show the value of research, and then get those individuals to start to see the value, to participate, and to be able to do it as well? There’s a big challenge there. It looks like Janet posted an article from Molly Stevens, as well as the Implicit Association Test from Harvard. Thank you, Janet.

CeCe 54:25
I have a few more thoughts on that, because it’s been a really important part of our growth, really going from zero to where we now have eight researchers. Some of them are designers and researchers; some of them aren’t. We have a scientist who wanted to become a researcher and be able to do research, and even an engineer. They’re still functioning in those roles, but they’re highly functioning in research as well. So it’s actually been part of our strategy for growing research into the organization in a really meaningful way.

CeCe 55:12
On getting people trained: my company is a consulting firm, we’re basically outsourced R&D for pretty complex systems. So part of our skill set as researchers is also being consultants. As many of you know, because a lot of you probably do that now or have in the past, that takes a lot of skill and communication with stakeholders. There was a great point at the very beginning about how you create a contract with your client, internal or external, to say, let’s go through this step by step. I love that: what are your research questions? That’s really just progressive disclosure all the way through, and that’s good consulting and good setup for success in research.

CeCe 56:05
And it’s great, because when you’re training someone who wants to be part of that team, you do the same thing, really: tell me why, what’s interesting? Then as they learn, you say, okay, let’s explore. You’ve been trained in what bias is and how to recognize it; now let’s uncover it, just as you might in safety work: did anybody notice any biases in that session you did? Did you notice anything else happening? We search for it and uncover it, and that becomes an active part of how we all learn and how we recognize that it can creep in. So it’s part of growth, part of good consulting. And when we opened up a new group within the company, which recently happened, the floodgates opened. We have a ton of work now, and we’re glad we’ve had this extended team, who maybe weren’t able to do as much research previously. We sowed the seeds before we needed them.

Kris Angell 57:09
That’s awesome. I saw Elba wrote above about pairing with internal ethics teams and external IRBs. And I’ve also had a question around whether you work with contract lawyers within the firm to develop any of your NDAs and such. So, team: has anyone had success or challenges working with internal resources that perhaps are not used to research? Okay, not seeing anything pop up, so we won’t worry about that. I’m curious if there are any additional open questions.

Kris Angell 58:08
I see a chat here from Janet, where she’s interested in whether people based inside companies feel their role is an internal research consultant, or whether there’s a better term, like research partner or research colleague. Janet, what do you mean?

Janet 58:37
I guess I’ve heard the phrase “playing a consultative role” a couple of times, and it’s an interesting word. When we work within a company, are we playing more of a consultative role or a hands-on doing role, and is thinking of yourself as a consultant to others in the organization, when it comes to research, a good or a bad thing? I felt like it was an uncomfortable word, almost not an appropriate word to use anymore, and yet I’ve heard it used a few times in the last hour. So I was just interested in how people perceive themselves, and whether it’s an appropriate term after all and I was imposing my own biases, because it’s an uncomfortable word for me.

CeCe 59:37
I’ll just quickly give a thought on that. I can see how it could be uncomfortable, because it implies we’re just going to give you some advice and you go do whatever you want. But we actually think of ourselves as professionals who are there to shepherd the outcome of whatever we’re designing, doing what we need to do to get that to happen. Sometimes it’s not just saying what we think, or saying it in a strong way. Consulting to me is: how do you help? How do you get people to listen? Good consultants are good at navigating relationships, listening, and coachable moments, so there’s a lot of coaching involved. There’s a lot of asking what tools we need to help this particular group or stakeholder understand the value and what’s at stake: who owns the decision, and what might be at stake when they take that decision and ignore this, in terms of the success of the product or a risk of harm or whatever it might be. So it’s really revealing the truth in a way that the listener can understand and act on appropriately.

Kris Angell 1:01:00
Awesome. Well, thank you, everybody. It’s one o’clock. This has been a fantastic conversation. Thank you, really appreciate it. Philip, thanks for starting this off; I think this is a great topic.

Philip 1:01:45
Thanks for organizing. It’s a lot of work, I know, so I appreciate that.

Janet 1:01:50
Thank you guys. Good to meet you all.

Transcribed by https://otter.ai