Welcome to Season 3 of UX Research Geeks. In this episode, we take a closer look at the world of user experience research terminology with Nikki Anderson-Stanier, a seasoned professional. We examine topics such as goals, needs, tasks, and pain points, providing valuable insights into the foundations of UX research. Stay tuned for an exciting season ahead, featuring experts in the field.
00:07:40 – Challenges in UX Terminology – Differentiating Goals, Needs and Tasks
00:20:29 – Mitigating Risk with User Research
00:23:20 – Normalization of Not Having Insights
00:28:58 – Qualitative vs. Quantitative Usability Testing
00:32:29 – Adapting Reporting Formats
00:39:50 – Defining Terms
00:41:01 – Connect with Nikki
About our guest Nikki Anderson-Stanier
Nikki Anderson-Stanier began her journey into the field of user research during her graduate studies, igniting her passion for this discipline in 2015. With over 8 years of professional experience, she has earned a reputation as an expert in user research, authoring more than 250 articles and speaking at numerous conferences and events. Nikki is committed to empowering fellow UX professionals, helping them build confidence in interviews, workshops, and career advancement. Her unique blend of clinical and social psychology, Buddhist philosophy, and teaching expertise informs her work.
Connect with her on LinkedIn to access her expertise, and explore the User Research Academy, and the “Dear Nikki” podcast, covering essential field topics.
[00:00:00] Tina Ličková:
Welcome to UX Research Geeks, where we geek out with researchers from all around the world on topics they are passionate about. I’m your host Tina Ličková, a researcher and a strategist, and this podcast is brought to you by UXtweak, an all-in-one UX research tool.
Hello UXR Geeks. This is Tina speaking, and I’m happy to introduce the new season, which is going to be a legendary one. We are starting with Nikki, who is not only a great researcher and a great human, but a very structured teacher of user experience research, as you will hear in this episode, where she guides us through goals, needs, tasks, pain points, and motivations.
So all the terminology. Overall in this season, we are going to offer you episodes with some big names like Teresa Torres, Kate Towsey, and more people who do a great job in the UXR space and publish a lot of beautiful things. So I’m really looking forward to this season, and I’m wishing you happy listening.
Hi Nikki. Welcome to the show. My first question, I know you are very well known in the research space. I think so, and I’ve been following you for a couple of months, maybe even years already. But tell us who you are and what are you up to these days?
[00:01:39] Nikki Anderson-Stanier: Yeah, Tina, thank you so much. I’m really excited to be here today and it’s so weird hearing that one is well known. It’s still something that I’m trying to wrap my head around, but I appreciate that and I appreciate the following. So from my side, I am essentially running my own business right now from the beautiful island of Jersey. So this is the old Jersey, not the New Jersey, which tends to confuse primarily Americans, but also other people.
Jersey is actually off the coast of France, but part of the UK when it wants to be, and then Jersey on its own, other times. So it’s a very interesting place to live. Very beautiful. But yes I run my own company called User Research Academy, and what I focus on within that business is helping user researchers learn and accelerate in their careers. So as a user researcher, I was primarily a team of one for about 10 years, and that was a very difficult space to sit in. It’s a bit of a vacuum, and can get a bit lonely. So I focus on bringing community to that and making people feel like, yeah, we might have imposter syndrome, which is very normal to have, but we don’t have to be alone there. Everybody else is going through this. There are hundreds, if not thousands of other people who are feeling this very way at this very moment.
So that’s primarily what I have been working on in my business. And outside of that, I am a fiction writer, so I’m a huge Stephen King fan. Very big fan. I stood in line to see him once for six hours and Wow. Maybe too much information, but I do have a very small bladder, so that was very interesting.
[00:03:33] Tina Ličková: The combination was unexpected, but I feel you.
[00:03:37] Nikki Anderson-Stanier: Yeah, I didn’t wanna lose my space in line. We’ll leave it at that. And so I do write fiction on the side, primarily horror, or psychological thrillers. So that’s my life goal, is to actually become a fiction author. But for the meantime, I’m hanging out in the user research space.
[00:03:54] Tina Ličková: Wow, okay. This is a user research podcast, so I have to stick to it, but you got me really curious. Okay. When we were speaking in our kick-off call and looking for topics that you are passionate about now, you mentioned that some of the same things pop up every time your students come. And I mentioned my experience of being a self-learner, following many kinds of resources and trying to figure out how to do research. And you mentioned that we as a business are not really consistent when it comes to terminology, that people tend to mix up terms. Why do you think that is, and what can we do about it?
[00:04:39] Nikki Anderson-Stanier: Yeah. It’s so hard. I feel like it’s because our industry isn’t really standardized. This is what happens, and I think it happens in every industry that goes through infancy, grows and changes, and then slowly arrives at some sort of standardization. As you said, a lot of people are self-learners because there isn’t one direct path to becoming a user researcher.
You don’t necessarily start out going to school for this, with dedicated courses or a dedicated degree. That’s coming, I think, and especially as the field gets more popular, you can see courses and degrees in this space. But for most of us it’s: huh, oh, interesting. User research. What’s that? I have a background here, like a social science background, or I’m a journalist. I once mentored somebody who was a police officer. I was like, how did you find your way to research? But that’s so awesome, and that’s actually the beauty of not having a standardized field at this point, because you just get so many perspectives.
But with that lack of standardization comes these inconsistencies because I have found maybe more broadly in the product and tech field, but specifically for user research, we like to have 50 different terms for the same thing. And maybe they’re all slightly different, or our mental models of them are different from how somebody else is thinking about them.
So there is this lack of shared understanding. I feel like that comes with these terms and when we ourselves are using them and then trying to communicate them without really thinking about what they mean, we might get places where miscommunication happens, where somebody else might say: oh, that means that I don’t understand why you’re using that term.
So for instance, desk research and secondary research. To me, I’m like, oh, these are really similar. But whenever I’ve posted something about desk research and simultaneously referred to it as secondary research, people are like, there’s a difference between those. And I’m like: oh, cool, what do you think that is? But that’s a low-stakes situation, right? I’m just having a conversation with somebody rather than trying to put forth some sort of educational content. So that’s where I see this coming from: that lack of standardization. And what to do about it?
I think people are trying. I know the research ops community is huge, like the research ops community on Slack, they’re making a lot of really great strides to try and standardize, but I think we just have to wait. And what we have to do is we have to be really clear what these terms mean to us and then tell people what they mean to us, rather than assuming that people have that same mental model, if that makes sense.
[00:07:40] Tina Ličková: When I go back to my experiences in different companies, every time I came to a company, I had to take the time to figure out what people called stuff. Yeah, even if there wasn’t a research practice, which I also experienced, it was, I don’t know. For example, in the company that I run now, we have research labs all over Europe, so what do we call ourselves?
What struck me was that you mentioned even the basic stuff, like goals, needs, pain points, motivations. And yes, we understand different things in different companies, in different contexts, and in our business. So how do you explain to your students what goals, needs, pain points, and motivations are? What is the difference, and how do you convey what the meanings are?
[00:08:25] Nikki Anderson-Stanier: Yeah, absolutely. It’s so funny, because for a really long time, especially when it came to synthesizing research, I was just taught: okay, do an affinity diagram. That’s what I learned, and I was like, cool. And then there are these quadrants, right? One of them is pain points, one of them is goals, one of them is needs. These quadrants come with the diagram. And I was like, great, all right, let’s fill this stuff out. And I never really questioned what those things meant. Pain points were a little bit easier, but for the longest time, the whole goals versus needs versus motivations versus tasks thing, I was just flipping a coin. I was like: maybe this is a goal, or maybe it could be both. When people were like, oh, is it a goal or a need? I’m like, it could be both. That was a really cop-out answer that I used.
You know what? We do it. We have these moments. For me, what happened was I started to question what these things meant, especially when I started educating stakeholders, because what I couldn’t do, was just point at something and say, that’s a goal. Because they just came back at me and they were like why? What is a goal? What is a goal in this context? What is a need? What is the difference between goals and needs? And I remember one of my first educational presentations on synthesis with my stakeholders.
And luckily, I was well liked at this organization and I was pretty good friends with my stakeholders. And I went into this presentation and people started asking me questions that I could not easily answer. I was like, it’s a goal because it’s a goal, right? Because I had never questioned it, how? How was I meant to explain it in my own words?
So that’s when I really realized, okay, we need to think a little more critically about what these different things mean. And as I learned from my students and within my membership community, people always have these questions about these things that you learn once and just roll with. So that’s when I started to really break down these differences in a way that hopefully is accessible and understandable. I can walk through it like a funnel. That’s how I think about it: a little funnel. So essentially we have goals, we have needs, we have tasks as well, we have pain points, and then we have motivations.
If you can think about it as a funnel, it gets thinner at the bottom, the deeper you go into the funnel. So at first, on the surface level, we have a goal that’s something that we are trying or we, or the participant or the user is trying to achieve, right? It’s an outcome. I need to be able to, or I want to be able to do this at the end, right?
And that could be a very small goal. So let’s take something like saving or submitting an expense report. On a day-to-day basis, that’s a pretty small goal. Or it could be a very big goal, let’s say running a marathon, which might be small to some people. And the expense report might be quite big for some people too, depending on what software you’re using.
So we have this overarching goal, and then in order to get to that goal, we have needs: things that we need to help us achieve that goal. If we’re looking at that marathon goal, we need sneakers, running sneakers. If we’re looking at the expense report as a goal, we need a receipt to attach to the expense report to prove the amount, right? And then we have our tasks. These are just actions, the things that we do to get us towards that goal. Somebody actually asked me yesterday, does each need have an associated task?
And how many tasks are in a goal? And it depends. I wish I had an amazing answer to that, that wasn’t: it depends, but each task, I would say you have several tasks that get you to a larger goal. Usually you have to go through several tasks to get to the goal. The larger the goal is, the more tasks you might have to get there.
Typically, these tasks are behaviors that you’re doing to get you to that goal. So one of the tasks that I might have in getting this expense report done is opening the software. Very basic. Something that I might have to do for running a marathon is getting off the couch, turning off the TV, and not watching crappy TV, Married at First Sight or something... Love Is Blind. So these are the smaller tasks that get us towards that goal. And then you have pain points, and these are obstacles towards that goal: things that you have to overcome, or that block you from getting to that greater goal. And these can come in at really any point. They can stop you from doing a task. They can prevent you from getting to that next step. And then finally, you have motivations. Motivations are the deeper reason. They’re not always conscious. They can live a little bit less in our awareness than something like a goal.
So you might have this goal of running a marathon. Okay, what’s your motivation for doing so? To get healthier? That could be a motivation, but a bit shallow. So let’s dig a little more into that. Obesity runs in my family, let’s just say. Okay, maybe a little deeper. Oh, there are heart-related problems in my family. I’m scared of having a heart attack early on, like other people in my family have. There’s your deep core motivation, right? You don’t always get motivations when it comes to synthesis, because it’s so deep you might not get there. So that’s what I tend to think about. It’s almost like a funnel, and that’s how I think about all of these different things relating to, but also differing from, each other. I hope that made sense.
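Nikki’s funnel can be sketched as a tiny data model. This is purely illustrative: the class and field names below are our own, not part of any standard UX framework or tool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of the funnel: a goal at the top, with needs,
# tasks, and pain points beneath it, and an optional deeper motivation.
@dataclass
class Goal:
    outcome: str                                          # what the user wants to achieve
    needs: List[str] = field(default_factory=list)        # things required to get there
    tasks: List[str] = field(default_factory=list)        # actions taken toward the goal
    pain_points: List[str] = field(default_factory=list)  # obstacles along the way
    motivation: Optional[str] = None                      # the deeper "why"; often unknown

expense_report = Goal(
    outcome="Submit an expense report",
    needs=["A receipt to prove the amount"],
    tasks=["Open the expense software", "Attach the receipt"],
    pain_points=["Can't find where to attach the receipt"],
    motivation=None,  # as Nikki notes, you won't always reach the motivation
)

print(expense_report.outcome)  # Submit an expense report
```

The point of the optional `motivation` field is exactly what Nikki says next: some methods, like usability testing, simply never fill in the bottom of the funnel, and that is fine.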
[00:14:53] Tina Ličková: Totally makes sense, and for me it’s going from the operational or tactical into the psychology. Yeah. And at this point I would even invite anybody listening to us to share their perspectives on it, and maybe we’ll see how differently we understand these things. When it comes to motivation, I’m really stuck on the thing that you just said, that you don’t have to have the motivation, because sometimes we don’t have either the time or the capability to go so deep.
Yeah, and that’s interesting how to get there. It’s a different discussion, but definitely something that I celebrate that you are saying out loud.
[00:15:35] Nikki Anderson-Stanier: For sure. You’re likely not gonna get any motivation when you’re doing usability testing. You don’t have those conversations there; you’re usually getting qualitative feedback on a stimulus that’s put in front of somebody, so the likelihood that you’re gonna get motivations from that type of conversation is pretty small.
And motivations do come into play more in 90 minute conversations, like one-on-one, in depth, 90 minute conversation, sometimes 60, depending on the scope of your project. But it’s totally fine not to have motivation. We can’t always get there and our participants might not be able to get there.
We might be able, we might be capable of asking these questions, but your participant might not be capable of answering them.
[00:16:20] Tina Ličková: I’m going back a little bit to some of the studies that I did, wondering whether I was really getting the motivations from the people, or whether I was just assuming I knew them by heart. Yeah. If I stay in the space of clarifying terminology, we were also talking about, and this is where I come clean from my experience of not having terms clarified: when my colleague and I were preparing a kind of UXR guide for the organization, she was like, let’s explain to people what insights and findings are. And I was like: I understand the difference, but I never actually did the work for myself to distinguish them and explain it to people. And you are also putting it together not only as insights versus findings, but versus observations versus quant data. Can you enlighten me a little bit there?
[00:17:12] Nikki Anderson-Stanier: Yeah, of course. I do wanna touch really quickly before on something that you said is doing that work for yourself. So one thing that I highly encourage people to do is to define these things for themselves, right?
Because there is not one right definition at the moment, at least, and I think that the most comfortable one you will be able to use for your own projects and also teach others is the one that resonates most with yourself. Of course, we don’t wanna make things up, right? So we want to ground them in a sense of, in a sense of reality.
But coming up with your own definitions and tweaking those over time is so super important for just not feeling like an imposter, right? We all have slightly different variations, and it might also depend from organization to organization what these look like, but I had a very hard time with this as well.
I called everything insights forever. Yeah, I was like, look at all these insights and motivations, that’s super fun. But you won’t always get insights from your study. There are so many studies in which there were zero insights. So let’s go through it a little bit. I’m actually gonna start with quant data and make our way up to the beautiful insights, these mysterious, wonderful things.
So quantitative data, for me, is literally just looking at analytics, metrics, strict quantitative data. It can also look like any sort of usability testing data that you might have: usability testing metrics like time on task, task success, number of errors. Or we can look at scales like the SEQ, the Single Ease Question, or the UMUX and UMUX-Lite, the Usability Metric for User Experience, or the SUS, the System Usability Scale. So I look at quant data as: what is happening? What’s going on? What’s the behavior? What are these analytics telling us?
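Of the scales Nikki mentions, the SUS is a good concrete example because it has a fixed, published scoring rule: ten items answered on a 1 to 5 scale, where odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the summed contributions are multiplied by 2.5 to land on a 0 to 100 scale. A minimal sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5 for a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses use a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Strong agreement with every positive item and strong disagreement
# with every negative item is the best possible answer set:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a SUS score is a usability measure, not a percentage of satisfied users, which is one more place where shared terminology matters.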
Then we have observations. These are things that we’ve watched people do, but don’t have meaning behind, right? So it’s literally just, I watched somebody do a task and I got no meaning behind how that person felt on the task, why they were doing the task in a certain way. Why they behaved in one particular way versus another. I have very little context. It’s what I saw, right? So I can’t interpret anything from that. It’s just strictly what I saw.
[00:20:01] Tina Ličková: This is what I see a lot on the product management side: they have session replays or some videos or heat maps, stuff like that, and they are trying to interpret them. And this is where I’m like, okay. And I will ask the question here: how do you explain to stakeholders that observations are observations, they’re limited, and you can’t really interpret them, because that’s also an assumption? I always struggle with this.
[00:20:29] Nikki Anderson-Stanier: Yeah, you know what, I would say that it’s sometimes hard even for researchers, because we see things and we want to attribute meaning to them, because we want our work to come through as impact. So it’s super easy for anybody to get stuck in this attribution mode of assuming the intent behind why somebody’s acting in a certain way, just through an observation.
So what I generally do, whenever someone comes to me saying, this is what’s happening, we saw this happen, so we think this is why, and we’re gonna make this change, is I always go back to confidence. How confident are you that that’s what that person is doing? And then I will ask something along the lines of: what other intent could that person have had, what else could they have been thinking in that moment, other than the assumption that you’re making?
So what are the other possibilities out there? They’re generally endless when you don’t know the exact answer, when you don’t ask why. So I tend to respond to these types of assumptions, assumptions being used as qualitative data, with questions like that: how confident are you that this is why this person is doing it?
What are the other possibilities that could have been occurring for this person? I will also just talk through, in general, that when we use assumptions, we are not actually mitigating any risk. Proper user research is meant to mitigate risk. We’re not meant to give an answer. No department could give stakeholders the answer: yes, this is the best product ever, people are gonna buy this for a million dollars, it’s the best, wonderful, yay. No function, no role could give that one answer. So we’re here to mitigate risk, and when we’re taking assumptions, we are not mitigating risk.
So I also talk them through, look at all these other possibilities of what could be happening, if we just choose one of them at random. We are not doing the thing that user research is meant to do, which is to help us make better decisions. So I try to do it that way. If people really aren’t getting it, I like role playing exercises, so what I would do is: I would put two stakeholders together and have one person start doing stuff on an app. And I would ask that stakeholder that’s watching, that’s just observing, why do you think that person’s doing it? Why do you think this person’s doing it right? And have them tell me, and then ask the original stakeholder that’s actually doing things.
Okay, so why did you do that? And see if there’s that mismatch. So I do try to demonstrate these things rather than just telling people, if that makes sense.
[00:23:20] Tina Ličková: Yeah, makes total sense. Then maybe tell me if not, but can we continue from the observation into findings and insights?
[00:23:27] Nikki Anderson-Stanier: Yes. What everybody wants. So findings are facts, something that we found out from people. They can be observations, which is what gets a little bit confusing sometimes; that’s why there can be a bit of an overlap between findings and observations. But findings usually have a little bit more context behind them.
So let’s say we’re testing a checkout funnel and we’re trying to get people to input a coupon code for some sort of product that they’re buying.
An observation would be to say the person could not find the coupon code. That’s just, we observed that the person could not find this, and observations tend to come from things like screen recordings, so something that has no context or sometimes unmoderated usability testing, for instance, where the person isn’t speaking out loud or you can’t gather the context.
A finding, rather, might be something where we’re talking to the participant, and the participant is saying things like: this is really irritating, I can’t find the coupon code, or the place to put the coupon code in. And so then our finding switches from just "the person could not find this" to something like "the person was frustrated that they could not find the place to put in the coupon code."
There’s not a lot of why behind that. There’s not a lot of understanding of what the consequences could be. We could assume the consequence is they might x out of the shop and not purchase anything. ’cause they might be frustrated and then they might not ever come back to our brand. But we’re still lacking that context.
And that’s that kind of context, that deeper understanding of why this person is frustrated and what that means to them is where you start to tumble more into the insight field from finding. So again, the finding is really, what did we find? Like we found that people were frustrated with not being able to find this place to put in the coupon code.
And additionally, five out of seven people couldn’t find it, right? So it’s factual. And then the Holy Grail, the tantalizing insight, goes that step further and includes why something might be happening. Why is the behavior happening as it is? Why can’t people find the coupon code, for instance, and what are the consequences of that?
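The observation to finding to insight ladder Nikki walks through can be sketched as a tiny data model. The names below are our own illustration, not a standard research-repository schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the observation -> finding -> insight ladder.
@dataclass
class Evidence:
    what_happened: str                         # observation: behavior, no meaning attached
    context: Optional[str] = None              # finding: adds what the participant felt or said
    why_and_consequence: Optional[str] = None  # insight: the deeper why and what it implies

    def level(self) -> str:
        """Classify the evidence by how much context it carries."""
        if self.why_and_consequence:
            return "insight"
        if self.context:
            return "finding"
        return "observation"

coupon = Evidence(what_happened="Participant could not find the coupon-code field")
print(coupon.level())  # observation

coupon.context = "Participant said 'this is really irritating'"
print(coupon.level())  # finding
```

The model makes Nikki’s next point mechanical: an unmoderated test simply never fills in `why_and_consequence`, so it tops out at findings, and that is a property of the method, not a failure of the researcher.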
So that’s why I say: for some reports you might not get insights because you might not have that depth of information. Same with motivations- you might not get there. I recommend that you always try to, but if you’re running an unmoderated usability test and you don’t have the ability to follow up with people to understand where they’re running into problems, you’re not gonna have any insights.
You’re just not gonna get to that depth of information. You’re gonna have findings, and there’s nothing wrong with not having insights. We shouldn’t strive for always having insights, or for having so many insights, because it’s impossible. It’s setting us up for failure. What we should strive for is insights when they’re applicable, and that’s generally speaking in very qualitative settings.
I would say the majority, if not all of my usability tests don’t have insights. They have findings because we just didn’t go there, right? We’re testing a flow, we’re testing an experience, and our findings are, people can’t use this. We need to fix it, and it’s not up to the participants to tell us how to fix these things.
People don’t know how to fix... like, just normal people on the street don’t know how to fix usability issues. It’s up to us to find ways to fix things, iterate on them, test them, et cetera. But it’s totally fine not to have insights and just have findings. And I think that we need to normalize these things so that people aren’t sitting there saying: oh, I haven’t done impactful research because I don’t have insights.
That’s fine. If you’re doing really deep generative interviews, ideally you’d have some insights, but again, it’s not like hundreds, or even 50, or even 20. I’ve had really deep 90-minute, sometimes two-hour conversations with people, and I’ve come out with three to five insights, and that’s fine.
[00:28:19] Tina Ličková: I like that you are saying let’s normalize it, because I’ve seen that after a certain time, especially when you are more senior, you become addicted to the aha moments that you are delivering to people, and you’re looking for the insights. And when it comes to usability testing, I like that normalization because it also points out how important patience is in research, and even the context: when you work for a client and do one usability testing project, delivering insights probably isn’t the best idea, but when you do several usability tests or user tests for them, you might encounter some insights because you see the patterns.
[00:28:58] Nikki Anderson-Stanier: Yeah, absolutely. And I think something that’s also really interesting to say is, for me, there are two types of usability testing.
There’s the qualitative. Where you’re putting a stimulus, usually it’s more of an idea or a lower fidelity, maybe sometimes mid fidelity in front of somebody and you’re looking for their qualitative feedback. But again, your focus for that is looking for feedback on your design. Same with on the other side, we have quantitative usability testing, right?
Which is really asking: is this usable? Is it efficient? Is it effective? Is it satisfactory? The three cornerstones of usability right there. You’re not asking questions; you’re seeing if something is usable or not, and usually it’s a high-fidelity or live product that you’re looking at. But the entire point of usability testing, on both the qual and the quant side, I guess more on the qual side, is that you’re getting feedback on your design.
On the quant side, you’re evaluating whether something is usable, and where on the spectrum of usability that design sits. We’re not looking for depth in either of those. This is why goals are so important in research: having user research goals and knowing what we’re actually going in to understand is so important, because we aren’t looking for huge amounts of depth in usability testing. We are hyper-focused, at that point, on the solution. We’ve already done the deeper research that looks at those goals and needs, and sometimes motivations and pain points, and we’ve created a solution based on that. So why are we trying to go to that level of depth in usability testing when we should already have that information?
It’s just really important to think like what is the goal of this research? If it is to get like these really big understandings and these huge discoveries and these huge aha moments, then yeah, insights do make sense to come as an outcome. But if we’re really looking at what the feedback that people have on this design is on the spectrum of usability, where’s this design fitting in; findings work.
And so it’s really going back to what is the intent behind this study. And what makes sense as an outcome.
[00:31:30] Tina Ličková: It’s interesting also from the perspective of when stakeholders come, especially in organizations which are somewhere in the middle of their UX research maturity. It’s a lot of people coming: I wanna do a usability test. Okay, wait, what is your problem?
I realized pretty late in my career that I have to structure the stakeholders. Okay. This question is for this research, this question is for this research, and sometimes we have five studies coming from one half an hour conversation there where, whoa, that is a lot of work.
Yeah, and it’s a lot of work to bring it together, because definitions are super important. But I know we have a lot of listeners who are super practical, and they’re like: okay, what can I take from this? And this is where we talked about reporting. Yes. And there’s this thing, I was just discussing it on LinkedIn and in some other discussions, where we are reinventing reports.
There are some voices saying reports are dead, that we have to do more to make research impactful.
[00:32:29] Nikki Anderson-Stanier: Yeah, the whole reporting is dead, and this spectrum that reporting is on now I think is absolutely fascinating. But I always come back to this same point, just repeatedly for almost everything: what fits your organization for your stakeholders?
That is the most important thing. So we often forget sometimes, when we turn our eyes to the internal space, so how we’re working with stakeholders, who those stakeholders are, and our process. We can often forget that we’re researchers, we’re good at this, we’re good at understanding what people need, what their pain points are, what their goals are.
So reporting might be dead at your organization, but the only way you’re gonna find that out is by asking your stakeholders about it. How do you feel about these past reports I’ve done? How would you change them? What are some problems with them? What’s confusing? What’s missing? What’s not working?
Without understanding that kind of information, it’s the same thing as asking a designer to design something without any research. We’re trying to design some sort of reporting format without understanding what people need. It’s exactly what we tell everybody else not to do.
We’re like: always speak to your users. But we don’t do that ourselves, because our stakeholders are our users in that sense. We need to go back and understand what’s currently working and what’s not working before we get swept up in this whole “reporting is dead” idea, or “reporting should be on FigJam or Miro only.”
Or the opposite: we should be writing papers, right? There’s a whole spectrum, but none of it actually matters outside of what works for your organization and your stakeholders. And I did this myself. I remember there was a point when I was writing different types of reports, and papers came back into popularity quite a few years ago.
As an academic, I was like: ooh. And as a writer, I was like: this is what I’m good at, I don’t need to design a PowerPoint, that’s great. No. Nobody read them. Nobody read my in-depth papers, but people read my really quick summaries. I was like: that’s interesting. So then I tried to go back to PowerPoints, and then I tried Slack channels, and then I tried Miro. I was trying all these things, but I wasn’t asking anybody how they felt about them.
So ultimately I did a lot of work for myself, trying to create all of these different things for my stakeholders. When I eventually asked them, they came back and were like: oh yeah, we loved those summaries you used to do. And I was like: okay, great, I’m glad we tried 97 other things in that time.
So I do think that reporting will evolve, because we’re always time poor, and reports can be really tough to write. Sometimes they make sense in their old-school format, right? Sometimes a Miro board makes sense, sometimes a summary makes sense, sometimes a comic or a storyboard makes sense.
And so it’s really about thinking and working with your stakeholders to understand how they digest information and what the best format might be. And again, that might change per project. A persona might be better suited for a project than a report. That’s great, but these are all things we need to think through and ask ourselves in each situation.
Knowing my context, my organization, my stakeholders, and my capabilities, what is the best outcome I can produce that helps the people who need this information move forward?
[00:36:18] Tina Ličková: I’m looking into our notes, and you are using different expressions, which is nice because we were just talking about terminology.
I am thinking about impact, and you also talk about the activation phase: what is the research supposed to be for, and what kind of actions is it supposed to trigger in the future? Before we close, this is something I find super important. How should we create impact, in your vision?
[00:36:47] Nikki Anderson-Stanier: Yeah. No matter what format you choose for your report, and this is a hole I sank into for many years, we can’t just throw the insights over the fence and hope for the best, as much as we might want to. As researchers, or as people who are doing research, we have, what is it? It’s not the disease of knowledge, it’s the curse of knowledge.
That’s what it is. We know all these things. They’re in our heads because we’ve spoken to the participants directly, we’ve synthesized the information, we’ve slept on it, we’ve let it sink in. Our stakeholders probably don’t have that, especially not to the degree that we do, right?
So when we create insights under this curse of knowledge, sometimes we leave out really important stuff that’s obvious to us. How are we meant to know what’s obvious to us but not obvious to our stakeholders? What I see often is that when we give people information, we might not give them the full context; they might be confused and not know how to act on it. And to be completely honest, no stakeholder goes to a “how to activate insights” class, right? So expecting them to know what to do with our insights, without our help, isn’t really fair. People might not know how to take those intangible things and put them into action, which is why I think it’s so important for us to do things that help activate those insights.
So a report is our step of sharing, but we need to go one step further and activate it, because the most important thing a report can do, in whatever format it is, is lead to a conversation, or to something like a workshop, right? We have things like “How Might We” statements, we have things like Crazy Eights.
There are plenty of different activities that can help our stakeholders put our findings or insights into action. The report only goes so far, no matter what format it’s in. So definitely look towards those activation phases. Workshops are, to me, the best activation activity you could ever partake in, especially when it comes to intangible insights.
[00:39:18] Tina Ličková: Yeah, and it’s not only the facilitation of workshops, because I think in our business we understand facilitation way too narrowly. It’s the facilitation of impact: actually showing people, guiding them through what to do about a finding or an insight, how to react, how to take action on it.
Nikki, is there anything you would say: okay, this is what I really have to leave the listeners and Tina with?
[00:39:50] Nikki Anderson-Stanier: I think for me it’s getting really clear about what these different terms mean to you and in the context of your organization. Write down these definitions for yourself. Play with them.
See how they work in your next study. Iterate on them. Make them part of your process in a way that makes sense: really looking outwards and asking the people who are going to be using them what these definitions mean. Always be a researcher, I think. There we go, that’s the main takeaway. Always be a researcher, because your skills are applicable to so many more situations than just talking to users. You can use those research skills in so many different scenarios when you’re engaging with your stakeholders. So always have your researcher hat on when you’re trying to understand something, when you’re trying to get out of any sort of cycle, because that’s what we tell our stakeholders: we’re stuck in a cycle of assumptions, we’re stuck in a cycle of biases, break out and speak to people. That’s what we need to do too when we get stuck in our own minds.
[00:40:55] Tina Ličková: Thank you. Beautiful wisdom. Where can people follow you?
[00:41:01] Nikki Anderson-Stanier: I post a lot on LinkedIn, so if you want to check out my LinkedIn, hopefully it’s not too overwhelming. I post Monday through Friday and then run away from screens on the weekend. And as I mentioned, I have my business, User Research Academy, which is userresearchacademy.com, and I have a pretty awesome membership: a private community of over 150 researchers right now, where we come together and support each other in these exact conversations.
The community is on a platform called Heartbeat, so I’m always there trying to help out. And I also have my own podcast called Dear Nikki, where I rant about these things too.
So you can either listen to me rant about them, or you can read me rant about them.
[00:41:52] Tina Ličková: Beautiful. Thank you very much for your time and for your wisdom.
[00:41:56] Nikki Anderson-Stanier: Thank you so much. It was a pleasure.
[00:42:04] Tina Ličková: Thank you for listening to UX Research Geeks. If you liked this episode, don’t forget to share it with your friends. Leave a review on your favorite podcast platform and subscribe to stay updated when a new episode comes out.