
Jake Burghardt | Recycling research

Tina Ličková
•  22.10.2025

Jake Burghardt, author of “Stop Wasting Research,” discusses how organizations leave valuable insights from past studies unused and explains how this happens due to poor preparation, lack of accountability, and weak integration into planning. He emphasizes that activating existing research requires active knowledge management beyond just repositories, and offers practical ways researchers can extend their impact by prioritizing insights across studies and making past research accessible during planning cycles.

Episode highlights

00:01:38 – About Jake 

00:05:13 – How we waste research 

00:07:45 – Why we forget about research findings 

00:12:30 – Research acceptance in teams 

00:17:25 – Promoting research for decision-making  

00:22:08 – Research and knowledge management 

00:27:14 – Final thoughts

About our guest

Jake Burghardt is a consultant, author, and speaker at IntegratingResearch.com. His mission is to shift expectations in the tech industry so that almost every important product roadmap, prioritization, and design decision is grounded in research about people and real-world systems.

Jake helps organizations drive product value, outcomes, and quality by:

– Extracting more impact from existing research assets

– Achieving a useful mix of new research in collaborative communities of practice

– Integrating insights into crucial product decisions

He has 20+ years of experience in product research, vision, planning, design, and experimentation. Previously, he owned insight activation initiatives in Amazon’s Retail and Alexa divisions.

Prior to working within research communities, Jake consulted on products in a range of domains, including “new space” engineering, power plant operation, cockpit avionics, genetics instrumentation, creative production, and financial trading.

It's not just what we learned in this study, this method, this sample, these questions for this moment. It's about on this topic area, what do we know?

Jake Burghardt, a consultant, author, and speaker.

 

Podcast transcript

[00:00:00] Tina Ličková: Welcome to UXR Geeks, where we geek out with researchers from all around the world and topics they’re passionate about. I’m your host Tina, a researcher and a product manager, and this podcast is brought to you by UXtweak, the UX research platform for recruiting, conducting, analyzing, and sharing insights all in one place.

This is UXR Geeks, and today I was talking to Jake, who recently brought out a very good book about how to not waste research. I was really happy to hear somebody voicing out like, okay, let’s recycle research. Let’s look into the old research. And I think he gives it really a lot of deep thought, and we discuss how to approach this in different organizations and overcome different barriers.

So I hope you will enjoy the topic as I did, and I hope you will enjoy Jake’s wisdom and humbleness, because he is a really very interesting guy to speak to.

Jake, hi. 

[00:01:20] Jake Burghardt: Hello. 

[00:01:21] Tina Ličková: We always start with a question, who are you? But I am trying to rephrase the question into like, who are you? Who are you right now? What is the most important to tell people about you right now? So if you could enlighten us. 

[00:01:38] Jake Burghardt: I’m Jake Burghardt. I am a generalist, mostly focused on UX and product management.

I got started in the dot-com era, all sorts of background from consulting to big tech to this latest obsession. Bridging to the second part of your question, who I am right now: I’ve spent the last few years consolidating some ideas, learning a lot in the industry about how to get more value from research learning. As an industry, we’ve figured out all sorts of processes and methods and tips and tricks for optimizing study processes.

It’s an ongoing subject of optimization, and it very well should be. But what happens after the study timeframe is an area that has maybe received less attention. And so I’ve been spending some time writing in that space, and just released a new book with Rosenfeld Media called Stop Wasting Research: Maximize the Product Impact of Your Organization’s Customer Insights.

So I’ve been on the journey of connecting with teams and learning about where they’re at, and sharing how some of these ideas might be useful to them, and I’m excited to chat with you about it today.

[00:02:59] Tina Ličková: It’s an exciting topic, because when you approached us, I was like, okay, this is important. From my perspective as a researcher slash product manager, I think there is a lot of research being somehow lost in translation or in time.

My first question is really a basic one. What was it for you that made you decide, okay, this is the biggest topic I could write about?

[00:03:26] Jake Burghardt: I think some of it comes from a personal bias across my career. I show up and I say, okay, what do we know? And I try and make the most of it, even in spaces that were entirely new, trying to distill academic literature into things that could be useful to product teams.

Trying to not start from square one. In terms of product development processes, I’ve always been obsessed with sort of problem finding before solutioning, and connecting research into planning. When you’re a consultant, it’s easy to do, but you don’t know if it’s gonna be successful in the end, in terms of whether they implement it.

Going in-house, there’s all sorts of different ways to conduct research and connect with folks, and huge wins. But if you take a look back, there’s this big remainder, and so that remainder is what inspired me to get into this space. The trend in research systems, tooling, repositories, all those things, has been amazing to see.

And I’m an avid fan and follower of what a lot of people are doing, and I think so much of it is tool-centered. What I wanted to do with this book is to take a step back and say, yes, there are knowledge-consolidating tools, but it’s about very active knowledge management if we want to get more value out of research. And what does that look like?

And it’s full of ideas. No one’s ever gonna do all the ideas in the book, but it’s got a framework for people to develop their own ideas and then a bunch of things that they might experiment with. And if they work, then they could be turned into operations over time. 

[00:04:55] Tina Ličková: Every sentence that basically came out of your mouth was an open door to enter into a discussion.

I have to collect my thoughts right now, but before I go in on it: what is your definition of wasted or lost research?

[00:05:13] Jake Burghardt: Let me tell a story and then do a definition. A lot of researchers, if you imagine sitting at the end of a study, walking away, whatever processes you follow, you work to optimize that so much.

And there’s wins that you can count, and these are the wins that are advancing researchers’ careers and products, and doing great things for customers. You don’t write a book called Stop Wasting Research unless you see a lot of value in research, right? So there’s the wins that we’re already having.

But if you take an honest look back at existing studies, your own and others’, the ones that really have a lot of currency in an organization, some studies are very narrow and a hundred percent of the insights are utilized. They may be utilized well somewhere else as well; you can get more value out of them. The more common situation is that there is a remainder of insights that were not applied.

Some of those are no longer applicable, but a surprising amount of them are. So that remainder, that sort of cutting room floor, is where the concept of research waste came from. And to put a finer point on it: these are insights from customer research that are still applicable, that are unused.

They didn’t find the right team, didn’t land at the right time. That doesn’t have to be the end of the story. These are business assets that a competitor would happily pay for, but we’re forgetting in our own organizations. And we can pick them back up and get more value out of them.

[00:06:50] Tina Ličková: I love what you’re saying, that a competitor would love those insights.

This summarizes pretty heavily how the insights, even when forgotten or ignored, are important. And what strikes me in this topic is: why do people forget about this research? Mm. Because a lot of them went through it. Sometimes product managers are there, or even doing the research; the researchers are there to remind people like, we did this.

The most common question we as researchers get is: what do we have on that? And I do have my hypothesis, but I don’t know if it’s a well-built-up one. And this is again a why question: why do you think it’s happening that we are forgetting, in our practices, no matter what position, what role, about the research that has been done?

[00:07:43] Jake Burghardt: I’ll speak to it at the study level, and then some of the root causes I talk about in the book. At the study level, there’s so many good intentions. And the audience of research, assuming that you find the right audience, which gets harder and harder in organizations over time. This sort of deep partnership model that is the basis of the research paradigm, where you’re closely interacting with stakeholders, is trending towards breaking down.

Because organizations become more and more fragmented. Decisions become more and more fragmented over time, and current development practices. But assuming you find the right people, you’re having those conversations, folks are already executing on something. Mm-hmm. They’re looking at everything you’re saying.

And if you’ve played the other side, you know this, right? Everything that you’re saying around, how does that impact the current work that I’m stressed out about? And how can I think about that as relating to that or not relating to that? So I’ve spent a lot of time bringing lists of existing insights back to teams who had previously not wanted to do anything with them.

And if you show up at the wrong time, you get the same answer you got at the readout, and maybe even a little more hardened, ’cause there could be too much. If you show up at a time when people are thinking big in their planning cycles, it can be welcomed as a treasure. So all those things that researchers work to do to optimize study processes and land around decision-making times: even with that, there can be more learned than what can be applied.

And folks are overwhelmed, as a really common case. And then when folks are in that mindset where they’re ready to think big, that research may not be at hand. And often it isn’t, right? It’s lost somewhere. What was that study again? And if it’s too hard, people don’t do it. Just briefly, I talk about three root causes in the book.

There’s preparation, which is: we can do more to prepare our research assets to be useful over the long haul of product development. Research reports are not the ultimate container for research, especially when you think about over time. Motivation is the second root cause. Folks see research often as an optional input, and there isn’t a sense of accountability.

And the information structures that researchers are creating don’t necessarily create that accountability beyond the study level. And so there’s more that we can do there. And even researchers may not feel accountable for the follow-through from their work. Obviously an ongoing debate in the industry; I lean heavily towards: we’re working in product development, let’s try and follow through. And then integration is the last root cause.

With integration, you could have prepared your research to be so useful to product decisions. Summarizing things and synthesizing things and being ready to go for top pain points and opportunities and all those things. I hate the word pain points; top problems to solve. And you could have motivated use by showing that this internal product of existing research is something that decision makers will get a lot of value out of, and they want to use it ’cause they’ve seen their peers using it.

That’s something that takes time to develop, but it can be developed. It’s a sort of virtuous cycle; you can build up that motivation over time. It’s big change management. But with integration, if it’s not there, if they haven’t had the right touch points with that work, if it’s not present in the decision-making space, it can still be left behind.

So those are the three big root causes that the book is organized around, with a bunch of action ideas within those to make progress. And again, it’s about looking and thinking about action ideas in those spaces that would make more sense for your environment, or maybe picking up one of the ones in the book and adapting it.

But the goal is to present a menu for catalysts who want to have more plans informed by research. 

[00:11:46] Tina Ličková: Mm-hmm. Now I have maybe two questions. One is, and it’s coming from the hypothesis that I had, that I’m also not so sure about, because I encountered people who didn’t take research on a certain topic as seriously as they could because they weren’t a part of it.

[00:12:07] Jake Burghardt: Mm-hmm.  

[00:12:07] Tina Ličková: So the empathy level with the people, with the users that gave their responses and whose knowledge was synthesized, was like, ah, I wasn’t there. I really didn’t get it.

[00:12:20] Jake Burghardt: Mm-hmm. 

[00:12:20] Tina Ličková: That’s the first thing. So how much, and I will stop there. Mm-hmm. How much do you think this is an issue when it comes to acceptance of research, and then working with those insights?

[00:12:32] Jake Burghardt: Yeah, I think there’s a lot of individual differences there. You’ll run into some people who just are hard internal customers when it comes to that. And if you worked hard on changing their minds first, it would be a very uphill battle. Whereas if you show the value via a bunch of people who are already more inclined, and socialize that, you can eventually bring some of those folks around. But never everyone, right?

And in terms of direct participation, it’s the ideal, right? There’s a reason why, when we talk about ideal processes, it’s there, but it’s often not the case, especially as you’re trying to influence bigger decisions. The level of leader that objects to that is an interesting conversation. Very senior leaders are never involved in everything; they’re used to taking in inputs.

There’s this mid-level that gets a little bit confusing sometimes. It’s making sure that the very senior leaders know about those insights, and then it trickles down from there. I think that there’s another thing: we treat study creation, knowledge creation, research processes as this assembly line of units, where each one is an independent unit.

And if you take the repositories idea and extrapolate what we’re really trying to do: there are obviously many kinds of repositories, but some of the approaches are about collapsing those study distinctions in useful ways. So it’s not just what we learned in this study, this method, this sample, these questions for this moment; it’s about, on this topic area, what do we know?

What’s the new stuff versus what’s the existing stuff? So if you treat every study as a point to try and influence that person, it’s hard, ’cause there’s a limited amount of data. A lot of what I talk about in the book is collapsing the boundaries between different stripes of insight generators: data scientists, market researchers, customer success, UX research, all sorts of folks, people who do research.

If you’ve set a bar, you can pull their content in as well. And how can we accumulate evidence over time to persuade those people? I think that some people are just hard to persuade, right? Unless they are already on that train, and seeing it themselves as part of being on that train. But it’s not the only thing.

They could see it themselves and still say the same thing, but accumulating evidence is one of the strategies that researchers believe in, and we can create new types of reports to accumulate that evidence. Not for all insights. But for the really important ones that aren’t getting traction, we can summarize them.

Those individual insight summaries are something that’s much easier to activate over the long term than to say a pile of existing reports with a AI search interface on top or something. 

[00:15:20] Tina Ličková: We’ll be right back after a short break with a commercial message from our sponsors. Hello, UXR Geeks. This is Tina speaking.

You might already know that this podcast is brought to you by UXtweak, a research tool. And research tools are great, until you need five of them to run a proper study. That’s why I actually use UXtweak. It handles everything from concept testing to usability checks, both moderated and unmoderated. It also takes care of the analysis: no messy spreadsheets, no frustrating dashboards or endless reports.

Just clear insights ready to share. So if you’re curious, go to the uxtweak.com website and start for free. No credit card and no strings attached.

I love that you went to persuasion, because that’s where I am also going in my thinking process. And that’s the thing where I’m still trying to figure out. For people, okay, hard stakeholders or internal customers who just have a very complicated relationship to research, I learned to say to myself like, oh, that’s your problem.

[00:16:39] Jake Burghardt: Mm-hmm.  

[00:16:40] Tina Ličková: Because you can try a few times and then you’re like, okay. You take it or leave it. 

[00:16:44] Jake Burghardt: Mm-hmm. 

[00:16:44] Tina Ličková: But then there are, people would appreciate research, but for certain reasons it could be the organizational culture or some experiences that they made with their team leads. They are very uncertain and feel insecure to make decisions, even if they have the evidence.

So persuading them is really hard. Do you maybe have some tips? Although people should go and buy the book, of course, but maybe you can give us a freebie: what to do in those cases to support people in making informed decisions, and making them confidently.

[00:17:24] Jake Burghardt: We’re helping inform these decision makers to make smarter bets, not answering with a hundred percent certainty.

And some people are less comfortable making bets, I think, is what I’m hearing in your question. I think normalizing the use of research as a way of justifying bets is one pathway. When you think about running a study and those relationships with stakeholders, you have direct experience of whether research is being used or not.

Oftentimes some of it is invisible, but it’s more direct. Once you have a pile of existing stuff and you’re trying to push a top set of insights out into planning, you have to tell people how you want them to use research. And essentially it’s a link, a link to justify and motivate your plan.

Maybe you can update templates. I’ve done that, to say, you know, what research insights does this address? Or maybe it’s just a standard that you create, and then you celebrate every time you do broad reporting, where you say, these people are including research in their plans this way. And over time it normalizes it.

And I’ve seen it in CEO-level documents where it wasn’t there before. It’s not a footnote, it’s not in the appendix; it’s something integral. So I think some of it is, how do you become the currency that people are comfortable placing bets with? And by atomizing insights down to a summary and seeing those summaries pop up in a lot of places, folks can start to build the confidence over time that maybe that’s a currency I can use to justify my bet as well. On the scale of leadership to management: in the early days, a lot of product organizations were making wild bets based on limited information, versus over time in organizations.

So a lot of people that get promoted are more risk averse. It’s finding that balance: what can research provide that makes them feel a little bit more comfortable? I think prioritization is a big piece of this. They’re essentially, by placing a bet, saying, should I put a priority on this? And researchers inherently prioritize in their work.

What they include in a study plan, what they include in a study output. They may do an executive summary. They may even put explicit priorities on some insights, to say this is crucial. That’s at the study level, and when you start pulling content together, before too long there’s a lot of it. And priority becomes important again, because people can’t consume all this stuff.

I have a chapter called Clarify What Matters Most that talks about different perspectives on prioritizing sets of insights from across insight seekers. It’s not something you do early days, but it’s this muscle you can build over time.

And if you get to the point where you’re able to say to a decision maker: the research community that’s contributing to this initiative has agreed, based on standards that we agreed on, that this is a top-tier problem to solve, and we think it is a good candidate to move that metric. Then you’re not just giving information; you’re a stakeholder who’s placing a bet with them, and you may not be right. It’s a layer that sits on top of the study that says, we think these things can be most important.

It doesn’t invalidate the details of the study run underneath, but it is a layer where the community’s coming together and saying: we are working in a product organization, and part of that is placing bets, and we want to be part of that.

[00:21:05] Tina Ličková: Just to point it out again: the idea of making people confident in placing bets with you, based on your research, is very important.

And I also really like the idea of repeating the same important messages, even if it’s annoying, like, how many times do I have to tell it to you? Yes, this is the annoying moment, but you have to. And it also brings me to an older thing that a colleague from marketing said to me: the message, you have to repeat it in different channels, different forms, different formats, and it somehow gets prioritized by this.

Mm-hmm. So it feels like this is what I’m taking out of it. To move it a little bit forward: you are also saying that the typical, classical knowledge management processes won’t really help with this. So if you could maybe explain that part, or where it does take your mind.

[00:22:08] Jake Burghardt: And I think researchers haven’t capitalized, in my experience, on the classical knowledge management processes. I think when researchers think knowledge management, they’re thinking of a small piece of a much larger puzzle. They may not even be talking about knowledge management; they’re talking about research repositories. And the most common question about research repositories online (stop me if you’ve heard this one before) is: what tool do you use?

’Cause I need to check this off, right? And then, why is it not working? Or, those don’t work, right? Expectations around what they’re gonna do, I think, speak to the fact that this wrapper of knowledge management, all the activities and ideas around it, is something that hasn’t been a focus. I started by writing about research repositories.

There is a chapter on knowledge-consolidating tools, and they’re huge enablers. You could do a lot before you have those tools. You can do a ton once you have those tools to activate things. But the tools themselves aren’t gonna do it on their own. So I guess my takeaway is, there’s a lot of classical knowledge management ideas that can be adapted to this environment.

I think as an industry we’re just getting started, and I hope it’s not a false start, where we’re throwing tools in, and they often end up just being researcher tools, and expecting bigger things of them, and then losing interest in knowledge management ’cause it’s not having as much of an impact. That’s certainly a threat.

[00:23:36] Tina Ličková: And this brings me to probably the last topic. I’m thinking about the role, because, okay, we have AI in the mix, which is really revolutionary, and I don’t want to talk about AI, there is way too much about it. But definitely our roles are changing. Mm-hmm. And even before AI became a big thing in our business, I was like, something has changed.

And you are pointing it out very well: the knowledge management part is not well utilized. If we look at the role of researchers right now, where should we be going with our roles in a year or two, or five years’ time? What would be your suggestion? I don’t want any crystal ball or anything like that; we spend too much time anyhow doing it all.

But what would be your suggestion also when it comes to utilizing research with that? 

[00:24:27] Jake Burghardt: I think there’s a lot of different answers. People can go down different pathways to this point. ’Cause my takeaway for what this technological shift is doing: I keep seeing people talking about sort of the flattening of roles in different ways, or like the generalization of roles, where it was sacrosanct that engineering was the role of engineers.

That was an argument for the early days of research democratization, where people were pushing back: you wouldn’t let just anybody write code. And now just anybody’s writing code. We’ve hit this very unusual point where the role mix is churning for everyone. And so for researchers in particular, I guess I’m pointing the way towards systems thinking and being more operational towards planning.

So not just being data collectors, but being folks who are integral to planning. Researchers are already doing that to some degree and having successes. I’m just saying let’s dial it up, as one direction to go in. I have a sidebar that I never thought I’d have to write, which was something to the effect of: hire skilled researchers. You can’t just expect all these things to happen when you don’t have the domain expertise to understand what you’re doing wrong.

Just like “vibe coding’s only gonna get you so far” could be a new example in this space, right? That resonates with a lot more tech folks. And so I think the early-career folks are the ones I worry about the most. The folks that are just building out their method stack; they want to have tried a bunch of different things, and they’re not as focused on the follow-through necessarily.

I was the only principal researcher in a big tech company for a while and talked to a lot of folks, and I’m just generalizing from that. But those sorts of mindsets are gonna be a harder sell, whereas focusing more on enabling others, and holding quality bars, and making teams smarter about their own research approaches and customer connection approaches, and then building the systems that are going to make sure that research shows up more in planning.

I think that those are all opportunity areas that I’m excited about. But to your point about the crystal ball, I certainly don’t have all the answers. As someone who has identified as a generalist, I think that there’s opportunity for people to step into all sorts of things that their organization could value and that they want to value.

You can find new intersections, and I love reading about what people are up to lately.

[00:26:57] Tina Ličková: Is there anything, looking at our conversation in the last 30 minutes, that I didn’t ask you about? Or were you thinking to yourself like, Tina, why didn’t you please ask this? Now is the point where you can ask yourself the question that I didn’t ask.

[00:27:14] Jake Burghardt: I think the conversation has covered some great ground, so thank you so much for your questions. One thing that I often think about in listening to podcasts: we’ve kept it pretty high level. What are some things that people can walk away with as researchers and do differently tomorrow based on this? I think a lot of the book is about breaking down silos, and you can think about silos between different researchers.

You can think about silos between insights and that chasm into planning for a body of existing insights. But you could also think about silos in researchers’ own work. I talked about this sort of assembly line, where we’re very individualistic oftentimes, and we’re even partitioning between our studies.

Going into next week, how can you start treating every study as an opportunity to bring back what’s been learned, whether it’s in the planning process or the reporting process, not just the latest samples? I think that’s something that people can walk away with: a mindset shift. I think taking stock of some past studies and feeling what didn’t make the cut.

Researchers walk away from studies jazzed about what they accomplished, and there are some things that maybe are pain points for them, that they really wanted to move the needle on. Go back and take stock, and kind of feel what’s been left behind. It’s one of those things that, if you immerse yourself a little bit, I think it really changes people’s perspective on it.

I think that’s a thing to do next week. And then, when you think about study processes, how can you extend them to have longer conversations with teams? That may mean every time you activate insights at the end of a study, you go back and activate insights from past studies as well, where it makes sense.

Or whatever makes sense in your environment to have echoes and to extend the conversation. ’Cause to your point earlier, a lot of it’s internal marketing, right? I talk about presence and mind share. How can you experiment around some of those concepts? And we can keep going from there. The book is full of ideas, but those are just a few things, thinking about near-term takeaways that folks could try.

[00:29:23] Tina Ličková: So my first advice would be to read the book and to look through the ideas, definitely. My second advice is to really look into the research that you have been doing; it might surprise you. I just had a chance with one client to work for almost two years. Mm-hmm. So I started to build something like meta-level studies.

Mm-hmm. Where I just looked at the last half a year and was like, okay, what have we actually done in this half a year? And I was trying to capitalize on the biggest things, like the prioritization that you were mentioning. It’s also very humbling and a very nice moment of, oh wow, yeah, we did this. And it has value.

Absolutely. Even if you’re at a point where AI is just making us really insecure, and everything is making us insecure, including the political situation, go back and check your work, because it’s worth it.

[00:30:15] Jake Burghardt: Yeah. Celebrate those wins, like really take stock. Figure out ways to capture those stories. Start logging ’em for yourself.

Logging them in a shared environment is a good first step, so you can see what each other are winning at and maybe look for similar wins. And then, in doing those meta-analyses, I’ve had the opportunity to look across hundreds of studies at a time, and you see the wins. And then you see, it’s not a moment of loss necessarily.

It may be angsty to see the things that are left behind, but it’s also a moment of enormous potential. You see what’s there that could be activated. So it’s a great action for folks to try.

[00:30:53] Tina Ličková: Yeah. And that brings me to what you were just saying: that prolongs the conversation and keeps it very alive in the organizations. Absolutely. Jake, thank you very much. You’re a very wise man, and I’m happy you came to our podcast.

[00:31:08] Jake Burghardt: Thank you so much, uh, for having me, and I really enjoyed the conversation.

[00:31:16] Tina Ličková: Thank you for listening to UXR Geeks. If you enjoyed this episode, please follow our podcast and share it with your friends and colleagues. Your support is really what keeps us going. 

 

If you have any tips on fantastic speakers from across the globe, feedback, or any questions, we would love to hear from you, so reach out to geekspodcast@uxtweak.com.

Special thanks goes to my colleagues, to our podcast producer, Ekaterina Novikova, our social media specialist, Daria Krasovskaya, and our audio specialist, Melissa Danisova.

And to all of you, thank you for tuning in.

💡 This podcast was brought to you by UXtweak, an all-in-one UX research tool.

 
