
Tadeas Adamjak | AI in UX Research: What Really Changed Since 2023? [UXtweak Report Findings]

Petra Rajkov
•  24.04.2026

This special episode is based on a webinar recording with Tadeas Adamjak, Head of Growth at UXtweak, where he shared the main findings from our AI in UX Research Revisited report. He talked about how stances on AI in our field have evolved in light of everyone’s deeper experiences with AI models, what common issues and concerns professionals have with the technology, and what the best use cases for AI in 2026 are. 

 

Episode highlights

00:00:00 – Introducing the “AI in UX Research Revisited” report 

00:04:00 – 2023 results 

00:05:55 – 2023 vs 2025 answers

00:10:40 – How UX research experts are using AI

00:14:15 – Applying the human-AI workflow 

00:16:25 – Major problems with AI

00:20:43 – The risk of UX theater

00:24:49 – How to showcase the value of human research to stakeholders

00:29:11 – Synthetic participants 

00:30:46 – The future of UX research  

About our guest

Tadeas Adamjak is a CX/UX consultant and Head of Growth at UXtweak, a UX research platform that supports hundreds of research, product, and design teams worldwide. He is also one of the team members behind industry initiatives, including the ‘Dealing with Resistance to UX Research‘ report, ‘AI in UX Research‘ report, and ‘Research Recruities: Award for the Best Recruiting Stories.’

AI is an efficiency multiplier, not a replacement.

Tadeas Adamjak, a CX/UX consultant and Head of Growth at UXtweak.

Podcast transcript

[00:00:00] Petra Rajkov: Welcome to UXR Geeks where we geek out with researchers from all around the world on topics they are passionate about. I’m your host Petra, a senior UX researcher and a research advocate, and this podcast is brought to you by UXtweak, the UX research platform for recruiting, conducting, analyzing, and sharing insights all in one place.

In 2023, we asked expert UX researchers how they viewed the rise of AI in our industry at the time. Their responses were a mix of curiosity, skepticism, and early experimentation. Two years later, we revisited this topic with the same UX researchers and wrote the AI in UX Research Revisited report.

Today’s special episode is based on a webinar recording where my colleague Tadeas shared the main findings from the updated report. He talked about how stances on AI in our field have evolved in light of everyone’s deeper experiences with AI models, what common issues and concerns professionals have with the technology, and what the best use cases for AI in 2026 are.

Tune in.

 

Let me introduce my colleague, Tadeas.

[00:01:34] Tadeas Adamjak: Hi Petra, how are you doing?

[00:01:36] Petra Rajkov: Good, good, what about you?

[00:01:37] Tadeas Adamjak: I’m doing well. I’m looking forward to today, so I hope we’ll share a lot of interesting content with the audience.

[00:01:43] Petra Rajkov: Yeah, me too. The report is about AI and also the progress that has been made from 2022 to 2025. So what was the motivation behind it?

It can sound like it’s just three years.

[00:01:56] Tadeas Adamjak: Yeah. Basically what happened is, in 2023, we saw this AI frenzy in our industry. We saw a lot of worries and maybe speculation about where the industry was going, and at that time we thought it would be great to map this out. We decided that we’d like to do this AI in UX Research report, and we split it up into two parts.

So we surveyed the community. We wanted to learn whether they were already adopting AI for UX research, how they were using it, and what the sentiment was. And then we also picked some UX research leaders, experts in their fields, and we asked them open-ended questions about AI and their views on what it will do for UX research, or to UX research, and yeah, a lot of interesting things were mentioned there.

So that was the report we published back in 2023. And of course, since then the LLMs improved and a lot of new tools came out. So we got quite a few friendly nudges from some of the contributors and from the industry itself, pushing us: hey, this is something that would need an update.

We decided that we would do that, and we came back to the same research experts we talked to back in 2023, essentially asking them the same or similar questions, of course, to see how their views evolved, if at all, and how they’re currently using, or maybe not using, AI. And this is basically what we are here to talk about today.

[00:03:26] Petra Rajkov: Perfect. So thank you for giving us these little bits and now we can get to the main part. So over to you, Tadeas.

[00:03:33] Tadeas Adamjak: Yeah, thank you. Thank you so much, Petra.

All right everybody, hello again. Welcome to AI in UX Research Revisited. As I mentioned, this is based on the answers we were given by the industry leaders. So we had people such as Maria Rosala from Nielsen Norman Group, Stephanie Walter, Ben Levin, Kevin Liang, Debbie Levitt, and many others chime in and contribute their points of view.

So, very briefly, we’ll talk about how the attitude towards AI shifted from 2023 to 2025, and where AI is currently delivering value: how the UX research experts we talked to use it and where they avoid it. We’ll also briefly mention the risks and problems AI has, and some tips on how to defend the value of real human research in this age of AI.

Then there are the dreaded AI participants, or synthetic users, and whether they actually have some valid use cases now, if at all. And lastly, we also asked the experts to give us their predictions, or maybe speculate a little bit, on what AI in UXR will look like in the next five years. Very briefly, here’s how it looked in 2023.

Already in 2023, adoption was pretty high. At that point, many researchers were already using AI or planning to do so, more than 66%. The key benefits the community saw were, of course, faster data analysis and potentially reduced research costs, making research more accessible and maybe opening it up to some organizations that were not able to do user research until then.

The most common applications were generating survey questions and creating study tasks for usability testing. When we asked about specific studies, most of the work was being done in information architecture studies or surveys. The views were quite split. Many people were pretty worried about what AI would do to the industry, whether it would replace us, and how it would change our work.

Some of the experts we talked to saw it as a detriment, mostly because it would make potential stakeholders think they don’t need researchers, or because they might want to get away with skipping research. And, of course, on the more positive note, people saw quite a lot of potential, mostly in saving time.

[00:05:55] Petra Rajkov: What was the main difference between the 2023 and 2025 answers?

[00:06:01] Tadeas Adamjak: So we contrasted that with the answers we got from the research experts in 2025, and this is what we arrived at. In 2023, the general sentiments were mostly about fear or some defensive skepticism. I remember the amount of LinkedIn posts I read about whether AI will or can replace UX researchers.

So this was the main question people were asking: what it can do and whether it could replace us. People were seeing the risk of poor decisions from fake AI personas; they were concerned about the potential growth of UX theater and low-quality inputs that can be generated very quickly, and, of course, about stakeholders maybe not understanding what exactly the limitations of AI are and skipping research altogether.

In 2025, what we saw in the answers is that the question evolved from whether AI could replace researchers into how we use AI without compromising quality, and ideally also ethics and whatever other guidelines you have. So the conversation switched from theoretical skepticism, and even some defensive points, to something much more practical: people are talking more and more about how to actually use AI, what use cases they found, what actually works and what really doesn’t.

So AI is being embraced as this tool for efficiency. Some view it as a tool to help or support analysis, and what we see is that researchers now have quite good standards and view AI as a tool for augmentation: another tool that you have, one that will or should help you do more or do it faster, but not a replacement for user research at all.

And the last thing we want to mention is that there was a high awareness of the limitations of AI, and pretty much everybody talked about the importance of human oversight. What stayed from the concerns mentioned back in 2023 is mostly stakeholders confusing AI outputs with real research, companies opting for speed or “good enough” solutions and abandoning user research altogether, and the amplification of bad practices in low-UX-maturity teams or organizations.

[00:08:18] Petra Rajkov: How does this situation look in 2025?

[00:08:21] Tadeas Adamjak: In 2025, pretty much everybody is aligned and sees the importance of human oversight. They even go as far as mentioning it’s something that’s non-negotiable. Another topic that emerged was that they view AI as an augmentation tool, not a replacement.

So that’s something that we mentioned before. Again, a very important warning is that quality validation is the researcher’s responsibility. The primary value area the experts mentioned was data analysis, where they see the most gains. And lastly, the research experts believe that strategic planning should still remain a human domain.

And when it comes to the place of AI in 2025, I think Joel Barr put it very nicely in the following quote, when he says that he uses it as a Swiss army knife, and that for him it’s an “always wants to please” intern. I think this is a really good framing for what AI is at the moment: a very smart assistant, a very smart intern that is always on, always ready to work, and, of course, always wants to please you.

But the big thing is, similar to interns, you have to give it a lot of context, you have to babysit it, and you have to check its output. So that sums it up pretty nicely. Something else that was mentioned is that they view AI as most useful for speeding up analysis. AI can go through a lot of data very quickly, which is impossible for a human researcher to do.

Knowledge management is something that was mentioned. AI can scan your research repository, look for data, and resurface older insights or answers in a way that would be impractical or almost impossible for a human researcher. And, lastly, transcriptions. AI gives us high-quality, now almost perfect, instant transcription of our user sessions, which was not possible just a while ago.

So again, quite a lot of efficiency gains. As was already mentioned, the experts view it as an efficiency multiplier, not a replacement. As Joel Barr puts it, it’s this smart assistant that requires human oversight. Now I would like to jump into a more practical area: what the UX research experts said when we asked them how they’re using AI.

[00:10:39] Petra Rajkov: Tell us more. I feel like this is a key question everybody’s trying to figure out right now.

[00:10:45] Tadeas Adamjak: They most often mentioned four areas. The first is, of course, data processing: transcriptions, summarizing interviews, mining for early patterns, finding exact quotes in large data sets. This can be a huge time saver.

If you want to support an insight with an exact quote, you can surface it very quickly. Another thing is converting file formats. So if you’re doing some hygiene in your research repository, or if you need to standardize reports or put them into a specific format, AI can again be immensely helpful.

They also mentioned analysis, but most of them said that it’s analysis support, so not having AI do the analysis on its own, unsupervised; it is a support, so it’s still a limited use case. Where they find value is spotting early trends, providing a second opinion, summarizing topics and interviews so you get up to speed quicker, and maybe auto-bookmarking key moments and doing some tagging, so you have something to look at and you’re not starting with a blank page.

Another frequently mentioned use case is research prep and communication within research operations. Tasks such as drafting emails, outreach messages, and instructions can all be automated or outsourced to AI. Another big thing, mentioned by Ben Levin, is background research before a session, which is especially powerful if you’re doing research on an industry, a new product, or a new market you’re not familiar with. AI can really be a great assistant to help you get up to speed quickly, to learn the language of the market and what is important, so you know enough to then conduct the research session.

Things like testing questions before you go to your sessions, testing them for quality, finding relevant case studies, helping you build reports, doing those research operations tasks: this is something AI can really be a good assistant with. And lastly, AI is being used as a brainstorming and education partner, to help throw out some ideas, to critique what you put in, and to help with forming hypotheses or suggesting study methods you could use.

Just to illustrate the uses we talked about, we have Kelly Jura mentioning that she used AI to mine the data for interesting patterns before diving in herself. Stephanie Walter mentioned the gains on research operations tasks, where she can draft emails and save time in recruitment; it’s a big help there, as she’s not a native English speaker. And as for surfacing notes, this comes from Ari Al, who says that instead of sifting through her notes to find which person talked about a topic, she can get an instant answer and be directed to the relevant place, jumping there in a second, so she doesn’t have to spend hours going through her transcripts.

So out of these use cases, we put together the human-AI workflow that we now see researchers adopting. We have, of course, the human conducting the interviews, observing users, gathering the research, and owning the process and the analysis. AI then comes in as an assistant, so it helps you to transcribe, to summarize, and it supports you in the analysis.

And lastly, there is the non-negotiable step of human verification. So you need to check the outputs, deepen the insights, craft the final narrative, maybe add some storytelling, draft the final conclusions and recommendations, and then, of course, turn that into decisions.

[00:14:14] Petra Rajkov: This is so insightful. Do you have any practical recommendations on how to adopt this workflow?

[00:14:21] Tadeas Adamjak: Make sure to implement AI strategically. Try looking at the high-impact, low-risk applications, which usually are data querying, transcriptions, repository searching, and synthesis assistance. Another recommendation came from Kevin Liang, who mentioned that he did a longitudinal study where he had the students in his course do the analysis and compare it with an analysis done by AI.

What they found was that the most effective approach was to apply the human analysis first, and then use AI to help refine the themes the human researchers arrived at. They also mentioned that, of course, AI analysis alone produced inaccurate results and summaries. So the ideal process would be you, as a researcher, doing the analysis first, and then having AI go at it after.

And lastly, of course, maintain the quality standards. As we mentioned before, if you are using AI to support your work, you should never publish a report or any kind of output without verifying or double-checking it. “AI messed up” is not an excuse, because it is your work; you will be the one responsible for it.

And we even recently heard about some scandals where AI was used and it hallucinated or fabricated some literature. This can be a huge problem, and it is your reputation that you’re putting on the line. So out of this, we prepared questions to think about when you are applying AI in your work.

Before you share any AI-assisted work, ask yourself if you conducted the primary analysis first, if you have validated the outputs against the source data, if you can explain what the AI did and the methodology to stakeholders, and if this maintains research integrity. And, of course, lastly and probably most importantly: ask yourself if you would stake your reputation on this output. That’s something we really recommend thinking about when using AI. And of course, that’s because AI still has problems.

[00:16:23] Petra Rajkov: That’s such valuable advice, thank you, Tadeas. Tell me more about the specific problems that AI still brings, please.

[00:16:29] Tadeas Adamjak: AI got a lot better, but there are still problems that remain.

It often flattens the nuance, collapsing complex behavior into really generic, flat patterns. It misses contradictions, which, again, is something that we as humans, with empathy and by being there, can spot and potentially probe with follow-ups, especially when we are talking about AI-moderated sessions, which still work mostly off transcripts.

This is something they cannot do: the AI cannot spot tension. It only has the transcript of what the participant said, not necessarily how it was said, or maybe what they didn’t say, which can often be more insightful than the things they actually say. Another problem is that AI cannot innovate.

It gives you remixes of what it has seen before; this is still a problem. Training data sets are full of bias, which can pollute the data. And lastly, AI is a very confident liar and fabricator of things. It can confidently report or summarize something, but it cannot really judge what’s actually important for product decisions.

So for those reasons, the UX research experts we asked avoid using AI for the following activities in their work. They’re not using AI for study design and research planning, or at least not unsupervised. It can help with some initial drafts and concepts, but it can’t get you all the way there; often, the prompting necessary to get the output the way you want it just takes so much time that it’s faster and more effective to do it yourself.

The experts also mentioned that they are not using AI to conduct the actual research sessions. The current AI moderator solutions mostly work off transcripts. They do not have the context to ask the right questions, and often they slip into just asking people for more details or to elaborate; most importantly, they are not able to do what is often marketed, namely be a replacement for user interviews.

Also, don’t use it for final deliverables and recommendations, and of course, it goes without saying, it is not a replacement for critical thinking. To put that into perspective, as Debbie Levitt said: it is your job to verify the output that you created with the assistance of AI, because you do not want to end up humiliated because you put something in the report that AI messed up, fabricated, or hallucinated. It is your work, and “the AI did it” is not an excuse that will hold up.

So this is what researchers do not use AI for. We also specifically picked two use cases which we found interesting. The first one is that they still believe planning should stay human-led. AI-generated plans tend to be quite shallow. The recommendation is not to have AI generate the plans for you, but to use it to polish the plans, so not creating them from scratch.

Because those usually are not going to be something you would like to use; instead, have it as a partner to help you, maybe to critique, to improve, and to test the plan that you have, rather than having it do the whole thing.

Another very interesting use case is human moderation. Multiple researchers mentioned that, for them, at this point, this is non-negotiable, and people should not be using AI moderators to replace user interviews. Of course, there are some potential uses for AI moderation, but at this moment, the UX researchers we talked to do not use AI to run the studies. Usually, user interviews are the main source of our qualitative data, and maybe the only one.

And if we have AI run it, there is quite a high probability it will mess up: it’ll introduce some bias, or it’ll not ask the ideal questions, and then we risk poor data and basically ruined studies, which can, of course, lead to broken trust. As I already mentioned, the moderation solutions out there are oftentimes not yet capable of asking effective questions or doing some probing, and they cannot read the room.

Those are still skills humans excel at, and we have the empathy to help us with that.

[00:20:42] Petra Rajkov: I see, you are absolutely right. Human oversight is a hundred percent a necessity when working with AI. We are in charge of maintaining high research quality. Is there anything else interesting that researchers mentioned in their responses?

[00:20:59] Tadeas Adamjak: Another very interesting thing that was brought back was mentioned by Stephanie Walter in 2023, and that was the risk of UX theater. What she said back in 2023 was that she had seen people who want to create personas with ChatGPT and replace user research with asking the AI tool; this is not user research, and it will lead to poor product decisions.

That is what she said in 2023, and in 2025 we can say that this is something we are seeing more and more often, so her worry is actually coming true. In 2025, Maria Rosala mentioned that a growing problem now occurring, of course as one of the results of AI, is that AI generates very legitimate-looking outputs, and to non-specialists and to our stakeholders, they can look legit. And another big thing: it does it very quickly. Some of our stakeholders are familiar with what we do and with what the outputs we produce look like, but are less familiar with how we get there. So they can be convinced that this is research, even though it’s just built on asking a black box to pretend, and calling that research.

So this is a big problem we see, and we believe it is the responsibility of UX researchers to really educate the stakeholders and teams that this is where AI cannot help you and that it’s not a replacement for real humans. We have to talk to real humans if we want to find the insights that can move the organization forward.

As I mentioned, the growing risk is that stakeholders could potentially start using, or are already using, AI as a substitute for research. They use synthetic participants as proof: they say, okay, this is our persona, we had the AI pretend to be it, and this is what it said. And of course, based on these outputs, teams will start shipping features built on convincing-sounding but wrong insights, which rest on these non-existent users. The worry there is that the non-specialists, the stakeholders who are in charge of the budgets and the decisions, won’t be able to distinguish the AI outputs from real insights, and companies would potentially replace UX with a little box telling them they’re brilliant.

And another big problem is that synthetic users, or AI-based participants, are going to give us this middle-ground, generic, average answer, and this is going to lead to generic products. So if your goal is to create the same products and arrive at the same decisions as your competitors, then it can definitely be something valuable.

But if not, and you’re looking for innovation and maybe some market disruption, then unfortunately you have to talk to the messy, unpredictable humans. Another worry is that with the adoption of AI, junior roles are at risk of being eliminated, or that people will start losing some foundational skills.

And the very interesting thing there is that AI gives us really good tools to make research much more accessible. It can make running research much cheaper, and it can allow us to talk to more people we couldn’t reach before, maybe because of language barriers or other logistical issues we had. Now it can really give us the tools to do that.

And not only that, it makes the research itself more accessible. Thanks to the outputs, thanks to the research repositories, thanks to a lot of new features that user research platforms such as ours are putting out, it can help research be more widely adopted; it can get more people actually reading the research. But unfortunately, the same thing can be used to make bad research look legitimate.

[00:24:43] Petra Rajkov: Yes, absolutely. I’m wondering, is there anything that can be done about it?

[00:24:48] Tadeas Adamjak: There are four things that the UX research experts mentioned. The first one is, of course, making the process visible. I would say this is a standard thing that you should be doing when we are talking about stakeholder management.

So really involve your stakeholders in the process, educate them, make them understand, and ideally make them part of it, maybe by inviting them to the research sessions so they see what is happening. Now we can create a lot of outputs very quickly to make the research accessible and actually adjust it to the formats that stakeholders prefer.

So, for example, if somebody really likes the data, of course you can use AI to help you get some stats. If they’re more visual, you can have the participants speak for themselves by creating clips and highlight reels; again, this is something AI tools that have these features can help you with. And of course, document your research steps and the reasoning behind them, and why we simply couldn’t just ask the AI.

Another big thing is that AI is pretty quick; it is pretty quick at generating things that look like research. So if you want to challenge it, so to speak, speed is not the area where you should try to take it on, because of course it’s faster. We as researchers should focus on proving value through quality.

So when stakeholders are seeing AI outputs with cookie-cutter results, and we, thanks to prioritization, thanks to focusing on quality and really thinking about the strategy and the ROI, are able to produce eye-opening insights that are actually going to move the business forward, even if we are a bit slower or more methodical, then the choice of course becomes obvious.

So this is something we have to focus on: proving the quality, making sure that the organizations that employ us see a positive ROI and understand what we do and why it is important. Another recommendation, from Kelly Jura, is that you should be the AI expert who knows its limits. Of course, as a researcher, you should know enough about it to know where the limitations are, what it can be used for at work and how, and to be able to draw the hard line, this we cannot use AI for, and explain why that is. Of course, use AI for speed, so things like data preparation and transcriptions, but call out the limitations. Maybe compare the outputs you got from AI with the analysis you did yourself.

Come back to stakeholders with something like, “the AI missed these three critical themes, which I then found in my manual analysis.” Really show the stakeholders that there are limits, that there are things we can use AI for and things we cannot, and ideally this will buy you more time to do more research.

So, as we talked about, if AI helps you with the research operations, it can save you some time, which we would then ideally spend talking to more users and doing more research, so we can drive more impactful decisions in our companies, automating tactical work, elevating the role, and offering more impact to the organizations.

So those are the practical steps we prepared out of the answers. We also did a report last year on dealing with UX research resistance and the objections UX researchers usually get when it comes to conducting user research, and I think a lot of the strategies and tips we collected with the community and experts are now becoming even more important. Of course, things like building relationships with your stakeholders, really being in the room where the decisions happen, and educating the stakeholders that those are the decisions where a researcher should be present or research should be done. So if that’s something you would be interested in checking out, you can definitely find the full report there.

A lot of the things and the strategies I think are very helpful for this case as well. We even put together a quick cheat sheet with some strategies, and I think it can be another resource that can potentially help you out.

[00:28:45] Petra Rajkov: Thank you so much for the tips, Tadeas. What especially resonated with me was that we truly cannot compete with AI on the speed of insights, but we can focus on the other aspects and bring more value there.

Was there anything else in the report that is worth mentioning right now?

[00:29:04] Tadeas Adamjak: One of the last very interesting topics is the synthetic, or AI-based, participants. When we look back to 2023, the consensus was a wholesale rejection; the agreement was that there is no use case for this, that it does not provide value, at least not at that point.

So this view has also evolved a little bit, and now we are seeing practitioners mention some actually useful applications. They mentioned mostly two: proto-personas and some early checking of hypotheses. This comes from one of the experts, who mentioned that it is great for collecting some hypotheses, but again stressed that they must be verified with real users.

So where the value can be is in brainstorming, scenario building, testing your script, pressure-testing some early ideas, and getting some early, low-stakes feedback, which you can then take to real participants. Another big thing is the already mentioned pre-research augmentation: as Edward Ku, founder of Sig, mentioned, it is useful for early augmentation, but it is not a substitute for real human variability. And why is that? Some of the base problems are overgeneralization and the amplification of bias; it cannot innovate. There’s even the question Maria Rosala asked: can we actually feel empathy for a made-up persona?

So this is where it stands when it comes to AI users. The last question we asked the UX research experts was how they view AI in UX research in the next five years. What we arrived at were two perspectives. The first one, the more optimistic one, is the perspective of elevation: imagine that research gets elevated.

There will be more strategic work, less tactical research, and it’ll have a bigger impact. What Maria Rosala mentioned is that she imagines researchers will become architects, where AI will be the builder doing the repetitive heavy lifting, and the researcher will be there to architect it. The other point of view is the perspective of elimination or reduction.

This comes from Debbie Levitt, who predicts that AI will reduce or eliminate most research jobs in the next three to five years. Some of the other UX research experts also mentioned that they envision it leading to smaller teams and to the elimination of junior roles, as seniors will be asked to do more without their support, and there will be even more pressure to justify the value.

I think the future that all of us here would love to see more than the other one is the one from Maria Rosala, when she says that she sees researchers as more like architects. Again, as I mentioned, I do believe it is our responsibility and our work as an industry to educate the companies and really draw the line: where AI can help, what the potential use cases are, where it cannot help, and how it should be used, so it actually provides value for us, for our stakeholders, for our users.

[00:32:08] Petra Rajkov: What do you think we should focus on to help build that future, to make sure that future becomes real?

[00:32:12] Tadeas Adamjak: We think that these are some of the skills that we should be focusing on.

So being more strategic, really investing in strategy, learning as much as possible about facilitation, stakeholder alignment, and stakeholder management. We think this will be the key: the soft skills, the relationships, storytelling to help you get your points across, and, of course, prioritization, so the organizations get a good ROI on user research.

And just a closing message here. As we mentioned, the workflows are changing and AI is definitely evolving some of them, but it is not here to replace researchers, and we believe that the advantage comes from context, empathy, and, of course, ethics. If you would like to dig deeper into this report, you can scan the QR code.

You can also find it on our blog, where you can read the full article with the full answers of each of the experts we talked to, so you can see their full perspectives. And if you’re interested, you can go back to the 2023 report to do your own comparison. So again, if that’s something that is interesting for you, I will definitely be happy if you check out the report and let us know what you think of it.

[00:33:28] Petra Rajkov: Thank you so much, Tadeas. That was a lot of insight.

[00:33:31] Tadeas Adamjak: Thank you to everybody for listening. It was great. Bye bye. Have a great one.

 

 

Thank you for listening to UXR Geeks. If you enjoyed this episode, please follow our podcast and share it with your friends and colleagues. Your support is really what keeps us going. 

If you have any tips on fantastic speakers from across the globe, feedback, or any questions, we would love to hear from you, so reach out to geekspodcast@uxtweak.com.

Special thanks goes to my colleagues, to our podcast producer, Ekaterina Novikova, our social media specialist, Daria Krasovskaya, and our audio specialist, Melissa Danisova.

And to all of you, thank you for tuning in.

💡 This podcast was brought to you by UXtweak, an all-in-one UX research tool.

 
