
Leisa Reichelt | Contextual research renaissance in the AI era

Tina Ličková
•  08.04.2026

Leisa, former Head of Research and Insights at Atlassian, reflects on how AI is changing the UX research industry. She discusses how vibe-coding helps shift from traditional project sequencing to faster, more iterative build-measure-learn cycles and how that impacts the role of UX research in organisations. Leisa expounds on her positive attitude towards AI, saying that the unique value of researchers will shift toward filling the “judgment gap” with deep, longitudinal human observation that machines simply cannot replicate.

Episode highlights

00:01:47 – About Leisa

00:04:24 – Working in the UK government

00:07:39 – Research at Atlassian

00:09:56 – AI is the next Internet

00:14:18 – Sequencing in research is fading

00:20:01 – Role of UX researchers in 2026

00:24:12 – Maintaining UX research quality with AI

00:31:20 – Final thoughts 

 

About our guest

Leisa Reichelt has spent two decades helping governments and organisations figure out what people actually need – and why that’s so much harder than it sounds.

As Head of Research and Insights at the UK’s Government Digital Service, she helped reshape how we think about user research for public services. Most recently, she led research at Atlassian. These days, she’s a walking mid-2020s cliché – offering coaching for impactful research, getting hands-on vibe-coding niche app ideas, and studying and making podcasts.

Leisa has particular obsessions with why well-intentioned systems so reliably make life harder for the people they’re meant to serve and what the advance of AI means for UX Research.

I do feel as though it (AI) is a fundamental shift in how we operate as humanity.

Leisa Reichelt, a UX researcher and coach.

Podcast transcript

[00:00:00] Leisa Reichelt: I really hope that there is a whole cohort of researchers out there who have the same bet that I do, which is the future is rich and human, and contextual and longitudinal, and all of this AI stuff has got its place, but it’s gonna leave a big gap, which is the judgment gap, which can only come about through this.

[00:00:27] Tina Lickova: Welcome to UXR Geeks, where we geek out with researchers from all around the world and topics they’re passionate about. I’m your host Tina, a researcher and a product manager, and this podcast is brought to you by UXtweak, the UX research platform for recruiting, conducting, analyzing, and sharing insights all in one place.

This is UXR Geeks, and you’re listening to a conversation I had with Leisa, who is a researcher, was leading the research at Atlassian, is now a consultant in our field, and who is very much a big optimist when it comes to AI and to how our role is going to be reshaped. So if you have some blues when it comes to AI, please listen to this conversation and let yourself be inspired.

Hello, Leisa.

[00:01:22] Leisa Reichelt: Hello, Tina. How are you?

[00:01:24] Tina Lickova: Good, how are you? So now we are connecting Berlin, Europe with Australia, which I really love the idea of continent. And Leisa, I think it’s super interesting how your professional journey evolved. So could you maybe for introduction, tell us something about that and then we jump into the topic that we are going to talk about.

[00:01:46] Leisa Reichelt: Wow. I feel like I’m old enough that I could take up the whole episode just talking about my path, to be honest, but I won’t. I am kind of sufficiently old enough though that I remember when the internet first started becoming a job. And prior to that I didn’t really know what I wanted to be when I was a grownup.

And then when the internet arrived I was like, ah, there it is. That’s what I wanna do. Um, and at the time, there was this amazing role called the producer, and the producer kind of got to do so many different things. We would do account management and project management. We’d do information architecture and interaction design.

We didn’t do the visual design and we didn’t do like the hardcore coding, but you know, I had to do lots of front end coding and lots of production artwork, and it just taught me so much about all the different parts of how digital things get made. And then over time I managed to like drop the things that were less interesting to me.

And what I ended up with was research, because I was just really conscious that we spent a lot of time making things and not really knowing if they did what we thought that they would do or what we wanted them to do. Um, and so through that I learned all about user-centered design and leaned into research.

I started doing this in Sydney and there wasn’t really a huge community for that here. I was really fortunate to be able to move to London. I spent 10 years in London where there were loads of people doing this kind of work, and I learned from amazing folk in consultancies and I had my own freelance business for a while, and then I got to work with the Government Digital Service in the UK government, who were the folk behind GOV.UK.

And that was a really extraordinary opportunity to help reshape how government thought about how it created services for the people that it had to look after. And that was a really formative time. Since then, I moved back to Australia, worked for the Australian government for a while, worked for Atlassian, a global tech organization, and for the last couple of years I’ve been doing a lot of coaching with folk who are researchers, managing research, and trying to understand how research might fit into their product development ecosystem as well.

Yeah, that’s kind of me.

[00:04:02] Tina Lickova: Two things. If you could maybe tell us more about the experience of working for the governments. That is something that I’m interested in, because it’s like the basis of what people need as citizens. That’s where it starts for me. And the second one would be your work at Atlassian.

’cause a lot of us are using the product. So maybe if you could enlighten us a little bit more in those two roles.

[00:04:24] Leisa Reichelt: I’ll talk about UK government, because I was fortunate enough to work there, I think, at a really sort of formative time, which was after Martha Lane Fox wrote her report as to what should happen to the UK digital services, and a really incredible team was formed in the center of government who were tasked with making government digital services easy to use, desirable to use, after, you know, a fairly long history of not being that at all.

And I think what happened in that time was a real shift towards user-centered design, where we had amazing teams of designers and researchers and data analysts and product managers and all kinds of folk who were very interested in working in an agile and user-centered way, putting the actual observation of the end users into an iterative design process. So we built research labs inside government buildings and had people come in and do usability testing and other kinds of research on a really regular basis. We were running fortnightly sprints at the time. It feels a bit archaic talking about agile, ’cause I feel like it’s almost a thing of the past now, isn’t it? But we would run fortnightly sprints and people would come in and do usability testing on a fortnightly basis, and we would have observation rooms that were crowded with, you know, all different disciplines and policy folk coming in and watching, and it was really tremendous and helped, I think, fuel a lot of the change that happened in government to really help refine the quality of the experience that we were able to deliver.

So yeah, a really formative time. Super challenging, because you don’t really get to choose a target market. We didn’t really talk a lot about things like personas, because, you know, the whole point of it was that it had to work for everybody. It had to work from an accessibility point of view. We had to think about, you know, digital literacy, from the extremes of people who are not very familiar with doing things online. But it was a fantastic challenge, and I think one of the other big things that I learned there was the power of systematization.

Just coming up with basic rules that aren’t necessarily the perfect answer to things, but that everyone can follow. So we would say things like, just get six people in every two weeks and do a little bit of interviewing, and then do some task-based usability testing. Just do that. And that became like a really simple recipe that people could go, okay, now I know what to plan for.

And so they could budget for it and they could resource it, and then they could get started. And as they got more skilled, then they could go, well, actually I kind of need to do more than that, or I need to do different than that. And that’s great, because at least they had the makings of being able to do that in place. Whereas before, it would be like, oh, I’ve got a project coming, what do I need? You’d be like, oh, I dunno, it depends. And you’d end up with nothing, ’cause it was too hard. Whereas coming up with these really simple, systemic ways of getting people to think about just making an allowance for it meant that you had budget and you had capability, and then you could actually get stuff done rather than talk about it all the time.

Then I moved, I did some Australian government work as well, and then I moved out of that and into, you know, a tech company at Atlassian. A lot of people were like, oh, this must be so much easier, working at a tech company, than working in government. And like, you’d be surprised. Actually, it’s still super hard, and for different reasons as well, because in government a lot of the time you have a very policy-based culture where, you know, people make the rules and they often come at it from a face-to-face service delivery kind of concept rather than a digital service delivery concept. Whereas in a tech company, you have a different kind of cultural setting, which is that engineers rule, and a lot of things come through that engineering lens, and making stuff quickly is really desired and desirable, and anything that gets in the way of that can be problematic.

Things that get in the way of that on a regular basis are things like research and design. So working around that, I think, was a big part of the challenge. And you have a large organization with lots of different parts and lots of different teams, who are all trying to coordinate everything that they’re doing.

So thinking about how to structure a research team so that you’re enabling the teams, but also kind of getting that coherent view. Yeah. Uh, it’s a proper challenge, and I think we got better and better at it over time, but there’s always, always more opportunity for improvement. That’s for sure.

[00:09:03] Tina Lickova: I completely resonate with what you are telling about, like, the internet came, because I had the same framing. Like, I was searching for what I wanna do, went to HR consulting, didn’t like it, and then I was like, oh, I wanna do something on the internet.

That was my framing. Very specific. I get it. And now we are in the next revolution, which, you know, some people even say all the things, even from the invention of the wheel, were leading towards AI. So yeah, this is the moment where we mention, and we are going to talk about, AI, and I found your views on how our job, particularly, is changing pretty fascinating. So could you tell us more, and then we will, you know, explore patterns of it?

[00:09:54] Leisa Reichelt: Absolutely. Well, I mean, I should put my cards on the table and say I feel similarly about AI, the same way that I did about the internet, in that I do feel as though it is a fundamental shift in how we operate as humanity, and that is full of wondrous potential.

I think I’ve learned from watching the internet grow up that not all good things are gonna happen all the time. There’ll certainly be challenges that come along with it, but my position on it is inherently optimistic with some pretty sensible concerns as well. Funnily enough, one of the reasons that this has gotten really interesting for me is that as someone who wants to operate in this world, you have to get your hands into it, right?

Gotta start making stuff yourself. And so I started getting in and doing the vibe coding, which hopefully everyone is doing, even if it’s just, like, for their own fun little throwaway things. And I realized a couple of things. I realized that as I was doing it, it reminded me an awful lot of my producer days, you know, back in the late nineties, early two thousands.

I felt a lot when I was working with the software as though I was working with the kinds of teams that I was working with then as well. And so that was kind of strange. And then the other thing that I spent a lot of time observing was how I behaved as like the product owner who is also a researcher.

And it gave me a really strong appreciation of two really important things. One is the drive to build, which I mentioned in terms of Atlassian’s context: don’t get in the way of the engineers, right? And I think that is something that we as researchers have always had to contend with to a certain degree, is that everyone is eager to just, like, get started and build the thing.

And I think now that we have AI and all the coding tools that come with that, that drive is stronger than ever before. I feel it in myself when I’m doing this, right? It is just an urge, an instinct, that we are not gonna be able to hold back. So that makes me kind of question, as a researcher, if I really feel this when I’m making something.

Then how do I think about our role? You know, if we talk all the time about how important it is to understand the customers and understand the needs and understand, you know, the problem that we’re solving before we start working out how we’re gonna solve it, and then later on we are actually gonna build it, uh, assuming that nobody’s gonna wait for that.

’cause we could just build it and put it out there and find out, well then what does that mean for us as researchers? Does that mean that we are redundant now and no longer necessary? Does it mean that we have to do different things? Do we have to go super fast now so that we can keep up with this? Or is it something different again?

’cause I think that keeping up with the speed is just kind of impossible. We can do it to a certain extent, but I also think it’s a bit of a fool’s errand. It’s a bit of a race to the bottom, and I don’t think that’s a race that we should be getting into.

[00:12:58] Tina Lickova: We’ll be right back after a short break with a commercial message from our sponsors.

Hey, UXR Geeks. You know, this podcast is brought to you by UXtweak. I’ve tried several UX research tools before, and most of them make recruitment a nightmare or overcomplicate the analysis. UXtweak is the first one that actually does both well. I can recruit participants from over 130 countries with solid quality checks and detailed profiling, and it supports both moderated and unmoderated studies, and analyzing results doesn’t feel overwhelming.

It just makes my research smoother. So if you’re curious, go to the uxtweak.com website and start for free. No credit card and no strings attached.

[00:14:02] Tina Lickova: When we were collecting notes, you were talking about the sequencing, which I find an interesting term, and you already said it a little bit, but I wanna get into it, because your perspective is that the sequencing is fading. And I’m also interested in knowing where you saw it happening in real life, in your practice.

[00:14:19] Leisa Reichelt: Lemme go back in time again a little bit. When I was a baby user researcher and we would talk about why our work was really important, we always used to talk about, we called it, like, the 1-9-90 rule, right? Which was that if you made a change and you made it on paper, it cost $1. If you made a change and you made it in a prototype, it cost like $9.

If you made a change and you made it in production code, it cost like $90, and so therefore it was always much better to make the changes as early as you possibly can, on paper or in a prototype, rather than, you know, in production code. I think that rule now has gone out the window.

[00:14:56] Tina Lickova: Right, okay.

[00:14:57] Leisa Reichelt: Because, like, the cost of making code now has become so commodified. It’s just not the major expense that it used to be. It hasn’t become free, certainly, but it doesn’t hold the same kind of economic value that it used to. And so for us to make the argument to say we need to do all of this work upfront before we make the thing is a much harder argument to make.

So I think that we need to get our heads around the idea that actually what’s gonna happen is that the thing is gonna get made much earlier on in the product development life cycle than maybe what we would’ve traditionally been comfortable with. Yeah. And so we can do one of two things, right? We can fight against that and go, no, no, no.

Wait, wait, wait. I just don’t think that’s realistic. Or we can go, assuming that’s gonna happen. Then what does our role become? Assuming that someone’s gonna make the thing and put it out in the customer’s hands. Then what does our role become? I think that’s the big challenge for us moving forward, is to think about what do we wanna do as researchers to help support this process?

Knowing that the build is gonna happen a lot earlier without a lot of the things that would usually make us feel comfortable about what’s being built. My experience that I’ve had with this is really just kind of in the context of my own projects right now, where I see myself doing it, I kind of question myself, assuming that I can’t stop myself from building.

Because it’s so fun and easy to build this stuff, right? But I do still wanna be, like, a responsible research citizen, and I wanna make sure that I’m understanding the things that are working and the things that are not working, right? So I’m seeing two really important things in my behaviors that I think probably are widely applicable.

One is that I think a lot more about working with kind of like an ongoing group of people who are using this thing that I’m putting out. Like the value of making the thing early is that you can get the actual thing into people’s hands. I made a thing, I sent it out to a little alpha group of early customers and they were like, yeah, this is great.

I’ll definitely use this. And then guess how often they used it? Right. There is no more reliable data than that, right? They can say anything that they want; if they’re not using it, I’ve got a problem on my hands, right? So I had to, like, scrap that and go back and rethink what my thing is.

It’s not a new thing to do in software land, right? It’s to run these kind of beta groups and understand what small cohorts are doing with the actual thing. Have, you know, regular, ongoing conversations with them; a sort of longitudinal, you know, participatory experience, almost. So I see, hopefully, a lot of that in our future.

And I think a big part of our job is to be able to work out when to call it and go, you know what, this is a fail. Or maybe we can call it a learning experience rather than a fail, if we prefer that, right? But I think part of what we need to learn to do is, if we can build all of these things, well, we don’t want a million things.

We want a couple of things that are gonna do really well and make us the money that we wanna make. So when do we decide to kill it and move on to something completely different?

[00:18:21] Tina Lickova: Yeah.

[00:18:21] Leisa Reichelt: Call time on the experiment and go, this is not working, we need to make a radical shift. That opens up an opportunity to go, okay, well, if it’s not this, then what?

And in all of those conversations that we have with that user group over that time, we’ve done the research, right? We have a whole lot of the answers now that we maybe would’ve liked to have had at the beginning. But arguably the answers that we have now are more reliable, are more believable, have more credibility, because they come from that actual interaction with the product that we’ve created and put out.

[00:18:55] Tina Lickova: This is exactly why I think the job is changing, and we have to be listening to the change happening. And I see it now, more as a product manager, when a researcher tells me, but we don’t understand. I’m like, it’s a change of paradigm. I am not going into the deep psychological research exploration beforehand, because it might inform me of something.

But it’s very hypothetical, and I am putting the thing in front of the people as an assumption, metaphorically an open question, but I can see their reactions and their real-life behavior with it, and the emotions around it, and how strong the emotions are towards the thing. Mm-hmm. And that’s also very valuable.

And then I can go back to, okay, what’s the. Big psychological base for it, but I think a lot of people are thinking, oh, I’m losing that in my job when AI is approaching us and changing stuff. You are actually not, but the weights are changing, in my opinion.

[00:20:01] Leisa Reichelt: I agree. I agree. I think we’re very used to thinking about our role as being very front loaded.

That our job is to do all of this before, right? And I think actually we still need to do it all, but we might just need to, like, shift where that work’s happening, and therefore, you know, how that work’s happening. But I think that’s really beneficial, because by having the added benefit of the real context of use in day-to-day life, it’s brilliant data. It’s a brilliant way to structure your research to get more dependable insights into things. I’m gonna say entire organizations have been quite good at, like, doing the project. And then the thing goes live. And then what? How good is the learning after something is put into people’s hands?

Often nowhere near as good as the bit before. But we could very easily argue that the best learning can happen after the thing goes live. And yet we’re so often moved onto the next shiny thing instead of really staying with that problem and deeply understanding, you know, how is it operating in people’s lives?

How is it being used? What’s working, what’s not working? I think there’s still plenty of work to be done, but I think it shifts a little bit in terms of where it exists. I think there are a bunch of other changes as well, obviously, in that these whole new sets of tooling bring us opportunities to run interviews automatically.

Like, I saw a post from a product manager recently where they were so excited about how AI has massively improved their customer discovery in their project, because they used to only be able to talk to three or four customers a month. Now they talk to dozens of customers every month, because they have an AI set up to do the interviewing for them.

You know, so the AI does the interviewing, and then the AI does the analysis of the interviewing and tells the PM what the PM needs to know, and the customer discovery is done. That makes some people really happy. I know it probably makes most people who are listening very disturbed, and that’s fair enough as well, but, like, this is the context that we’re sort of operating in.

We have a lot of researchers who are going, okay, how can I set up my operational foundations so that I can go fast like that as well, so that we aren’t still considered the slow bottlenecks for everybody else in the organization.

[00:22:32] Tina Lickova: I’m just thinking about when you are saying this, you know, I think I was guilty of that as well.

Like a gatekeeper or a quality manager of research. And then I found out I have to let people do even the bad research, and then tell them not what is wrong about it, but how to get better at it. And this is what I’m also still very much negotiating in my head as well. What you were saying about systematization before is still stuck in my head, because the whole AI thing is very messy.

I have way more, you know, ad hoc meetings to just clear something, a small thing, or, you know, talking about this and that. And then the tools come into it. And in real life you are even more combining all the methods, like A/B testing and quali, quant, whatever. So it’s getting messier and messier. And then, where does the systematization come in? How do we actually make sure that the teams we work with evolve and get deeper into understanding, have a good intuition for the user? Good intuition is something that, I love that you were using it, because intuition for me is knowing, oh, okay, this is what I should look into, because I have the experience of knowing this is where I should dig in.

And teaching people to do it, because we are the experts on it, is something that I struggle with in a way, because it’s hard. Like, how do you teach? Sometimes you don’t even succeed in teaching people how to ask proper questions.

[00:24:09] Leisa Reichelt: Absolutely. And a lot of researchers are spending a lot of time being the quality police in their organization, and I think that’s deeply unsatisfying as a role.

Most researchers don’t wanna spend most of their time facilitating other people doing a poorer job of the research than they would be able to do themselves. That’s really frustrating, and it’s frustrating for everyone else who, like, really, they’re just going to the research to get the rubber stamp, to go, look, they said it was fine, so you know I’m gonna do it. So I don’t think that’s a very satisfactory experience for anybody. When I was at Atlassian, we at one point unfortunately lost a lot of our research team, as I know a lot of other folk are losing teammates left, right, and center as well. And that forced us to sort of reconsider what our role was in terms of quality.

And one thing that we really wanted to do a lot more of was usability testing. And so working with our design leadership, we were able to put together a program that we called the SEQ program because we used the single ease question as the core metric of it. And the whole point of doing that was to put together a framework and go, here’s what you should do.

Here’s when you should do it. Here is how you should measure and report on it. Now, off you go and do it. We had a couple of people who were, like, tapped to be the coaches, so if anybody wanted some coaching, they could get a little bit of coaching, but it meant that we could then free up predominantly designers, occasionally product managers, but mostly designers, to just go and do this.

And then mostly we just kind of looked away and let them do their thing, because the time of the researchers was, I think, sufficiently precious that we needed to make sure that it was pointed at things that were really important and really complex and required the skills and training that they had, rather than spending all of this time watching out for the quality of everybody else.

So my attitude to that at the time, and also to a lot of the stuff that’s happening now off the back of AI, is: just let it go. You can give some guidance, but don’t hold onto it day to day, because you’ll never win, and it’ll take a lot of your attention and you’ll feel deeply unsatisfied and frustrated.

And you won’t get to do the thing that you love to do as a researcher. So you have to clear space to do the thing that you love as a researcher. And if I kind of play this forward a couple of years, right? Let’s imagine, in a couple of years’ time, where everybody has automated and AI’d the research process to within an inch of its life. What do you imagine is gonna be happening then?

What I kind of imagine is this absolute ache for real customer interaction. AI is amazing at many things, but it is not very good at telling you about the future, because it’s based on data from now and before now. It’s not very good at taking that intuitive leap. It’s really great at pattern matching, right?

So I think there are gonna be certain things that we take for granted as researchers, who do the kind of work that we do and who bring the kind of opportunities and insights that we bring, that can’t be automated. If I look forward a couple of years, I would bet good money on what I’m calling this contextual research renaissance, right? Where actually, probably, one of our main jobs will become helping to build that intuition.

So everyone and their dog right now is writing about how the way that we keep our jobs in the AI future is through our judgment. The AI can do all of these amazing things, but it can’t do human judgment. Only humans can do human judgment. But how do people in our product design context, for example, how do they get that judgment?

You’re not born with it. You can’t, like, just work really hard and develop the capability for it. So much of it comes from observation. It comes from observing cause and effect, what works and what doesn’t work, and using our human brain and all of our experience to put that together. Like, as I’m doing my vibe coding, I use this platform called Replit sometimes, and it will go through and it will test what I’ve asked it to do, and it will, like, pull up a browser, and I see it go step by step through the process.

And I was imagining to myself the other day, imagine if it could do usability testing. Then how quickly if we outsourced all of that to machines, how quickly would humans stop being able to make good judgment about what’s usable and what’s not usable? Because we haven’t spent the time watching what works and what doesn’t, and using our own brains to understand contexts.

The only reason that those of us who are any good at it are any good at it is ’cause we’ve done it for a long time. We’ve watched it. We’ve seen cause and effect. We’ve seen all these different contexts, and now we can make a pretty good bet as to what’s gonna work and what’s not gonna work. But if we stop, if we outsource all of that observation, or that observation just goes away entirely, then where does our judgment come from?

[00:29:20] Tina Lickova: Mm, right.

[00:29:20] Leisa Reichelt: And so I think a huge part of our job is to do the kinds of work that we actually really love doing, right? That rich contextual work. That longitudinal work, getting sticky, like, let’s get off Zoom.

Again, probably this won’t happen next week. It’s gonna take a little bit of time for this to play out. But, like, we all got onto Zoom and started doing Zoom research when COVID hit, and how many of us have really stepped back from that? I don’t think we really have, right? Like, what, 95% of research these days is Zoom interviews? That would be my guess; it’s just what we do now.

So I really hope that there is a whole cohort of researchers out there who have the same bet that I do, which is that the future is rich and sticky and human, and contextual and longitudinal, and we bet that all of this AI stuff has got its place, but it’s gonna leave a big gap, which is the judgment gap, which can only come about through this.

This is the kind of stuff that like Jared Spool was talking about in like 2011 when he talked about exposure hours and how the research that they’d done had shown that if you are watching customers for two hours every six weeks, your team’s gonna make better decisions. And I don’t think that that is going away.

I think that's gonna become more of a way to win in the future with AI than not. So that's my soapbox. That's my soapbox.

[00:30:48] Tina Lickova: You framed it so nicely, and you took away all my questions and answered them. You gave a lot of advice to researchers. I think "let go and be okay with it" is a very strong one, and teaching people judgment and intuition, getting better at it, is the second one.

Is there anything else that you would advise the folks who are now looking at all the changes and trying to figure it out? No matter if juniors or seniors in the research field…

[00:31:23] Leisa Reichelt: I have a lot of discussions with people at the moment who are coming to me for almost career-calibration coaching, where they're looking at the tea leaves and going, is my job actually gonna exist in a few years' time?

I think that's a very sensible question to be asking, but as you can see, I have optimism that there will certainly be a particular kind of job, thinking about that in light of all of the changes that are ahead of us. Something that's really good to do is to think about, in the work that you do now, where does your energy come from?

If you've done a session of work and then you sit back and go, oh yeah, that was great, really enjoyed that, like, what kind of a session was that? Versus the ones where you go, oh, thank God that's over, I feel so drained. Right? We all have those. I think it's less likely that there's gonna be a generic user research role.

I think there's gonna be fragmentation, and there are gonna be different types of people who go into different kinds of areas. And I saw a wonderful talk, I think it was James Lang speaking at UX Brighton, talking about optimistic futures. He has this whole deck of different kinds of directions that he thinks researchers will potentially go into in the future, which I think is very inspiring.

And I think there are a lot of directions, 'cause I see some people writing about how excited they are to use vibe coding to build different kinds of tools for their research, to give them different ways of working with their participants. And I hear other researchers who are thinking, it's the systematisation and all the technology that gets me really excited.

So I think we should be thinking about there not being a generic user research role like there has been historically. I don't think that's gonna exist in the same way in four or five years' time. But I think there are multiple potential pathways, and the way I would be thinking about how I'm gonna move forward is: what gives me energy?

And then think about how do I craft that into a future potential role that's probably got a level of specialisation around it. One of the things that James talks about, which is interesting because it's the opposite of what I always used to think, is that, you know, subject matter expertise is a really great opportunity for researchers as well.

Like, whether you wanna be a health researcher or a finance researcher or whatever. You know, for a while it's been hard to get a job in the finance industry if you don't have finance research already on your CV, but I wonder whether more of that is gonna happen over time, that we'll have more subject matter depth, and then researching is part of, you know, the way that you contribute to that as well.

[00:34:04] Tina Lickova: Thank you very much. Beautiful ending. Thank you for sharing your thoughts. I really felt the energy in our conversation.

[00:34:13] Leisa Reichelt: That's really great. Yeah, I've definitely been through a period where I was just like, that's it, the fun's over, the circus is leaving town. This has been a great job, but it's not gonna exist anymore.

But the closer that I get to working with AI, the more I understand about its shortcomings, and the more I see the way that others are working with it, the more optimistic I am. I think if we seize this opportunity, we could have much better jobs that we like more, that let us do the work we wished we could do in the future, and it could actually be a really good outcome for us all.

[00:34:56] Tina Lickova: Thank you for listening to UXR Geeks. If you enjoyed this episode, please follow our podcast and share it with your friends and colleagues. Your support is really what keeps us going. 

 

If you have any tips on fantastic speakers from across the globe, feedback, or any questions, we would love to hear from you, so reach out to geekspodcast@uxtweak.com.

Special thanks goes to my colleagues, to our podcast producer, Ekaterina Novikova, our social media specialist, Daria Krasovskaya, and our audio specialist, Melissa Danisova.

And to all of you, thank you for tuning in.

💡 This podcast was brought to you by UXtweak, an all-in-one UX research tool.

 
