
Tiny Pieces, Big Picture: The World of Atomic Research

Tina Ličková • 21.12.2023

Join host Tina Ličková in the sixth episode of UXR Geeks Season 3, where she talks with Daniel Pidcock and Larissa Schwedersky from Glean.ly. Daniel, creator of Atomic UX Research, explains how his method uses diverse data sources for solid research findings, while Larissa discusses the benefits of non-linear research. This episode offers an intriguing look at how atomic UX research provides flexible frameworks for organizations, enhancing user experience and customer satisfaction. Get ready for an engaging talk that breaks down the key aspects of this innovative UX research approach.

Episode highlights

  • 00:04:08 – Origins of Atomic Research
  • 00:06:52 – Explaining Atomic Research
  • 00:18:48 – The Necessity of Insights in Recommendations
  • 00:26:07 – Putting Different (Green) Pieces Together
  • 00:34:34 – Connecting with the Atomic Research Community

About our guest Larissa Schwedersky

Larissa Schwedersky, a Brazilian researcher, holds a Bachelor’s degree in Social Sciences, a Master’s in Social Anthropology, and is currently pursuing her Ph.D. in the same field. Combining her academic background with practical experience, she works as a UX Researcher while also contributing to urban planning projects as a freelance researcher. Larissa’s portfolio includes work as a Monitoring and Evaluation (M&E) consultant for renowned organizations like UNICEF, UNDP, and ILO, emphasizing her expertise in mixed methods studies.

About our guest Daniel Pidcock

Daniel Pidcock, a UX Consultant renowned for the Atomic UX Research method, combines his expertise in User Experience Design and research with a digital UI design background. With a profound grasp of UX strategy, accessibility, and design systems, Daniel’s innovative approach has set a global standard in User Research. His achievements include the development of a Research Repository system used by leading global brands. Having collaborated with FTSE100 giants and startups alike, Daniel’s extensive experience in lean product design facilitates the rapid design, validation, and launch of digital products with a focus on accessibility for all users.

I think being human and using your intuition is key in making exciting products. This framework is great because it doesn't make you ignore your instincts. It actually lets people be creative and come up with their own ideas.

Daniel Pidcock, UX Consultant and creator of the Atomic UX Research method

Podcast videos

This podcast episode also includes video clips, which you can watch below.

Transcript for this video starts at [00:21:43]

Transcript for this video starts at [00:25:58]

Podcast transcript

[00:00:00] Tina Ličková: 

Welcome to UX Research Geeks, where we geek out with researchers from all around the world on topics they are passionate about. I’m your host Tina Ličková, a researcher and a strategist, and this podcast is brought to you by UXtweak, an all-in-one UX research tool.

This is the sixth episode of the third season of UXR Geeks, and you're listening to me talking to Daniel and Larissa from Glean.ly. Daniel is the creator of Atomic UX Research, a method that emphasizes the importance of backing insights with multiple sources and facts, promoting a data-driven approach to research. Larissa brings her perspective on non-linear research, highlighting its benefits.

In this episode, we delve into the core principles of atomic UX research, which may seem chaotic at first but ultimately provides a flexible and structured framework. We explore how organizations, including non-researchers, can benefit from this approach. So get ready for an insightful conversation.

How are you guys?

[00:01:16] Daniel Pidcock: Not too bad. Thank you. Yourself?

[00:01:18] Tina Ličková: Not too bad. What does that mean? It's always very...

[00:01:21] Daniel Pidcock: It’s a British way of saying, yeah, not terrible, not amazing. Just things are happening. To be fair, I think we say not too bad when things are terrible and when things are amazing as well. So..

[00:01:30] Tina Ličková: That's why I'm asking, because I was living in Australia for a year when I was a student, and "not too bad" was the same there. The spectrum of "not bad" could go from "oh, I just had a bad shower in the morning" to "oh, I'm really sad."

[00:01:44] Daniel Pidcock: The other one that my wife really gets annoyed by is, and many other people, British people say it’s all right. About things. So I’ll be eating some food. So what’s it like? I’d say, yeah, it’s all right.

That either means it's all right, actually it's really terrible, or it's, yeah, all right, that's pretty amazing. So it can mean any of those things. "It's fine" either means it's okay, or it's fine.

[00:02:12] Tina Ličková: Larissa, are you British yourself?

[00:02:15] Larissa Schwedersky: No, I’m Brazilian. No, I live in Brazil. Yeah.

[00:02:19] Tina Ličková: Ah, okay. Where in Brazil?

[00:02:23] Larissa Schwedersky: In the south, more in the south. It's on an island, Florianópolis. It's amazing here.

[00:02:28] Tina Ličková: Okay. And how do you manage you guys, the time zones and everything?

[00:02:33] Daniel Pidcock: Larissa tends to start at 2 p.m. my time. And I tend to work all the time.

[00:02:39] Larissa Schwedersky: I wake up early and when needed I stay longer, so that’s okay.

[00:02:49] Tina Ličková: Let’s start with an introduction of you, who you are. And I would say ladies first, although you are the Atomic Research founder or brain behind it. But Larissa, who are you? What are you doing?

[00:03:03] Larissa Schwedersky: I work as a UX researcher and success manager with the team at Glean.ly. As I said, I'm from Brazil, so I'm a native Portuguese speaker, not a native English speaker.

I also have a degree in social sciences and I am currently doing my PhD in anthropology. I'm passionate about urban mobility, so this is what my thesis is about. But I really love everything related to qualitative research. And yeah, that's how I started working with UX research. I worked for the past two years in a big company; it was a betting and gaming company from the UK.

And I just joined Glean.ly two months ago. So I'm pretty excited to be working with them.

[00:03:45] Tina Ličková: Daniel, who are you?

[00:03:47] Daniel Pidcock: I’m Daniel Pidcock. I am a UX designer and researcher and have been since before I knew that was a thing. So I’ve been working in product design and creating, especially focusing on digital products for probably about 20 years now.

And around 2015, I was struggling with the problem of managing knowledge in a big company and was looking for a solution. And alongside other people, I created the concept of atomic UX research as a way to make knowledge scalable. And that became my life for the last however many years now, about eight years.

[00:04:26] Tina Ličková: So how did you come up with atomic research?

[00:04:29] Daniel Pidcock: So the situation was, the company I was working with really struggled with knowledge. We had lots of research that was stored in reports, and these reports would get written and then put into Google Drive. In fact, we used to call it Google Grave, because it's where research went to die.

We knew that we'd probably never see it again. This was a problem we always complained about. I ended up founding the accessibility team in that company. And of course, with accessibility, very rarely do you have a piece of research just on accessibility. It does happen, but it's rare. Yet almost every other project would have some knowledge around accessibility.

So we had the problem that we'd have massive long reports with maybe just a tiny little nugget of gold that we wanted to pull out. And trying to gather all of this information across a massive company, with many brands across the world, with quite a small, mostly voluntary accessibility team, was a massive problem, and we thought there must be a better way.

As the name suggests, we were actually inspired by the atomic design process. I was having this discussion with someone the other day about whether it's good to be a generalist or a specialist in UX. And I said, I can really see the value in being a specialist, being so focused on your craft and knowing every tiny thing about it. But if I didn't have a broad knowledge across research and knowledge management,

I wouldn't have seen that there had already been a solution for something completely different in design systems; I wouldn't have seen the connection between those. And in fact, it was a colleague of mine that I was talking to who was drawing something on a board, and I was like, wait a minute, that looks a little bit like atomic design.

Maybe that solved a problem there, and maybe it could solve a problem here. And I started talking to other people in the organization. We started talking to people outside the organization. We even wrote to a couple of teams that I knew were struggling with a similar thing in other companies, not competitor companies, but still other companies, to say: we're talking about this, what do you think? And we got lots and lots of input from probably hundreds of UX specialists, from knowledge management people to other skilled specialists to generalists as well. So bringing all that knowledge together is how we managed to refine the process and to think, actually, there is something here. There's something really special here, in fact.

[00:06:48] Tina Ličková: This is the point where let’s introduce maybe what is atomic research because I have to be honest, I didn’t know about it. Larissa, what is atomic research?

[00:06:57] Larissa Schwedersky: Atomic UX research is a simple process with four distinct parts that creates molecules of knowledge. Usually we call these four parts experiments, facts, insights, and recommendations.

And the main concept behind this approach, as the name implies, is to deconstruct the research findings into smaller units. So it allows for easy and efficient consumption. And it also means research democratization, which is something that we consider very important. That's why it's called atomic: each individual piece of knowledge acquired through the process is referred to as an atom.
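
💡 Editor's note: to make the four parts concrete, here is one minimal way to model them as data. This is purely an illustration; the type names and fields below are our assumptions, not Glean.ly's actual schema.

```typescript
// Hypothetical model of the four atom types in Atomic UX Research.
// All names and fields are illustrative assumptions, not Glean.ly's schema.
type AtomKind = "experiment" | "fact" | "insight" | "recommendation";

interface Atom {
  id: string;
  kind: AtomKind;
  text: string;       // the single unit of knowledge this atom records
  sourceId?: string;  // for facts: the experiment they were learned from
}

// Example atoms, borrowing the clothing survey discussed later in the episode:
const survey: Atom = { id: "exp-1", kind: "experiment", text: "Customer clothing survey" };
const finding: Atom = {
  id: "fact-1",
  kind: "fact",
  text: "64% of respondents claim they prefer green clothing",
  sourceId: survey.id, // a fact always belongs to the experiment it came from
};
```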

[00:07:40] Tina Ličková: And how do these atoms come together?

[00:07:42] Daniel Pidcock: At its inception, what we recognized is that one of the problems we had was that insights were too closely aligned with how they were discovered. And that's why, when you tried to pull an insight out of a report, it lost that context. So what we did is break a piece of knowledge down into its component atoms.

So how we learned something, how we learned the fact, was separate from what we think it meant, what the cause or the effect of those facts are. And that added the benefit that we could actually connect atoms of knowledge from all around the organization. So I might have a customer observation or quote, and I'd be going, look, I think there's something here.

Do we have any data to back that up? And I could connect that to create an insight. And I might have other data elsewhere that I could connect negatively, that would go, actually, this makes me think that might not be correct. How can we add more knowledge to confirm whether it's correct or not?

We can keep on building, keep on adding evidence on one side of the chain and building our confidence on whether this is true or not. And it also means that we can have one or several pieces of evidence and have several different ideas about it as well. So we could have one fact and have several ideas about the causal effect of that and create several insights and the same for recommendations.

So once we've got an insight, we want to decide what we want to do about it, or come up with some ideas, what we used to call conclusions. We felt that was too definite, so we now refer to them as recommendations. So: we know this, and we think this is the right solution. Probably we're going to test that, and that will create new experiments and new facts that either back up the insights we've got already, disprove those insights, or maybe create new ones as well.

So it creates this really beautiful network of interconnected atoms and quite a holistic view of what we know and what we don't know.
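
💡 Editor's note: one way to picture the network Daniel describes is as a graph with signed edges, where each piece of evidence either supports or contradicts an insight. The sketch below shows a naive way to tally that evidence; the counting is an assumption for illustration only, since in the method confidence stays a human judgment, not an automatic score.

```typescript
// Hypothetical sketch: atoms connected by signed edges, mirroring the positive
// (supports) and negative (contradicts) links described in the episode.
type Polarity = "supports" | "contradicts";

interface Link {
  from: string;      // id of the evidence atom, e.g. a fact
  to: string;        // id of the atom it bears on, e.g. an insight
  polarity: Polarity;
}

// Naive evidence balance for one insight: +1 per supporting link, -1 per
// contradicting one. Purely illustrative arithmetic.
function evidenceBalance(insightId: string, links: Link[]): number {
  return links
    .filter((l) => l.to === insightId)
    .reduce((sum, l) => sum + (l.polarity === "supports" ? 1 : -1), 0);
}

const links: Link[] = [
  { from: "fact-1", to: "insight-1", polarity: "supports" },
  { from: "fact-2", to: "insight-1", polarity: "supports" },
  { from: "fact-3", to: "insight-1", polarity: "contradicts" },
];
console.log(evidenceBalance("insight-1", links)); // 1: weak net support
```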

[00:09:36] Tina Ličková: And you started with these experiments. How do you define them?

[00:09:39] Daniel Pidcock: So, the term experiment. I think one thing that comes up a lot in UX is terminology, and we're not very tied to terminology.

And in fact, in the software that we produced around this, the tooling we produce called Glean.ly, we've actually got the option to change that terminology. What we tend to call it is experiments. That's probably the most commonly changed term, because experiment sounds very formal, doesn't it?

But it just basically means that this is the thing we did to create this knowledge. So that could be a formal experiment. It could be a user test or an A/B test or a survey or something like that. But it could be that I overheard someone talking on the bus. It could be that my developers have told me, or people in customer service are saying customers are struggling with this.

That’s evidence, right? And now I want to go off and actually speak to some customers or look at some data to bring more confidence around that. But that’s still evidence. Some people might call facts findings. I like facts because we want to really define that they should be factual. It’s not our opinion.

There's no room for opinion when it comes to facts; it's what the data is telling us. Whereas with insights, we can really think. We can come up with quite crazy ideas if we want to, as long as we can find data to make them seem less crazy. And we can even start with insights or recommendations and work backwards.

Quite often I'd say to people: if you've got a hypothesis and it's actionable, you could put it in as a recommendation. Or if it's an assertion, like "I believe our customers prefer cats to dogs", it's not actionable, it's just a gut feeling. Okay, I've got that insight as a hypothesis. Let's go and see if we've got any data already.

And if not, what we can do to try and get some more confidence around that.

[00:11:18] Tina Ličková: And now you were mentioning hypotheses, and this is something I have been contemplating in the last weeks, having some clients who are very mature when it comes to research and some clients who are just at the beginning of the journey. Whether to torture them with hypotheses, or to go more in the direction of grounded theory and just collect stuff. What would you say?

How does atomic research put that together? Do you have to have those hypotheses? And do you have to have those assumptions?

[00:11:51] Daniel Pidcock: I think the best way to think about it is as a lightweight framework. There are definitely teams that are completely evidence-led: if there isn't any data for it, they won't create those insights.

But even then, when you've got data, there's no such thing as absolute proof, right? So we're already guessing when we're making an assertion, when we're making an insight or a recommendation, because we might have more evidence later and go, God, why did we think that was true? All the data was pointing that way, but that wasn't actually the truth of it.

And of course, sometimes our data can be wrong. We could have done a survey where the outcome is really clear, but perhaps people were just ticking the first box on the survey, for example, I don't know. I definitely think there are teams that always start with the findings first. They start with the experiment, create facts, and generate insights and recommendations.

That's probably the most common process, starting from the left. But I think it's important, when you're trying to create really exciting products, to be human and be intuitive. One of the things I like about this framework is that it doesn't force you to disregard your human instincts.

And it allows people to come up with ideas. In fact, I think it encourages it. Because it can be very easy, when a customer says this or does this, to go, okay, we should do that. The customer can't find the button, so we'll move it. Okay, so that's the fact and that's the recommendation.

What's the insight? What's the reason for that? What's the reason they're struggling to find it? And what's the reason we think moving the button is the solution? Actually try to put that into words. So often I'll be training someone on this and they will say, oh, it's obvious, isn't it?

And I'm like, if it's obvious, it's going to be easy to write this insight. But it isn't, is it? You have to think: why is the button over there better? I instinctively know it is. But okay, maybe if we say it's because that's the way the eye travels, we're reading left to right. Okay, then we're creating quite an interesting insight there.

Calls to action are better in this place because of reading direction. So hypothetically, for an Arabic reader, it should be on the opposite side. Is that right? Things start becoming more interesting, and quite often more innovative and more effective decisions come out the other end of it.

[00:14:04] Tina Ličková: I like two things that you are saying and there is no question, so sorry for that, but I just like to think out loud here. One is the fact that you were mentioning facts and not findings. I love it because sometimes, especially when I’m talking to my technical colleagues, they just don’t consider qualitative findings to be facts.

And changing the terminology might make a difference; we'll definitely try it out. And the second is that you mentioned intuition and the intuitive character. This is something I sometimes blame my design colleagues for, especially in bigger companies and corporations, where they tend to use user research as a kind of defense against politics and opinions and things like that.

I'm like, use your designer intuition and make an argument on a design level. You have that intuition; you just lost it somewhere in the process of deciding if the button should be there, or green, or orange. Now the question is coming, and you also pointed out that the insight is something which glues facts and recommendations together. But let's go back a little: how does a fact become an insight?

Because this is, in my opinion, a never-ending story when it comes to researchers.

[00:15:18] Daniel Pidcock: When you say, how does a fact become the insight, do you mean physically, like how does that happen? Or what is the process of creating an insight?

[00:15:27] Tina Ličková: What do you think? How does a fact become an insight in your opinion?

Because I actually never sat down and summarized it: okay, a finding or fact becomes an insight because there's this interpretation. And I always do it in a way where I lay out the findings, I even use tables to sort it out: this is the finding, we can solve it this way. And then I do the last piece in my report, where it's an insight, and I always label it "researcher interpretation". I know unconsciously how it happens for me, because I have it from observation, and especially when it's a client or a company I've worked with for a longer time, you pick up the clues through the studies that you are doing.

How would you define it? How does this process happen for you?

[00:16:17] Daniel Pidcock: First of all, there's no right or wrong way. But in the atomic model, the term insight could be a synonym for what you're describing as an interpretation, right? One of the struggles with creating a fact is that sometimes it is a summarization of the findings.

And that obviously has the risk of including interpretation and bias. Normally people would use an insight as the summarization of all these facts. What I encourage is to include something on top of that. So for instance, we've got all of these facts, and that tells me the button is in the wrong place. Now that's fine as an insight, but it's very tactical, it's not very interesting, and it doesn't really lead anywhere. But if we add on the cause and/or effect: the button is in the wrong place because of this,

and it's having that effect, it suddenly becomes a much more interesting insight and something we can work off. And that's why we might have several insights going: I think the button's in the wrong place because of this. I think it's because of that. I think if we did this, it would cause this, et cetera, et cetera.

So it starts a much more interesting conversation than just a simple statement, and it opens the mind a little bit. But it would be perfectly reasonable not to go into that detail, if that makes sense.

[00:17:44] Tina Ličková: Thinking about a discussion that I had with Nikki Anderson, she was also mentioning that sometimes, depending on the context or what kind of study it was, you don't even have insights.

For example, especially in usability studies, going into big psychological insights doesn't make sense if it's one study; if it's more, then you might have something bigger. Do you insist that these four components have to be there every time, or is atomic research more flexible or relaxed about it? Okay, it might have the facts and the recommendations, but maybe the insights will come after a longer time.

[00:18:28] Larissa Schwedersky: Insights are good to think about. It's good to have them in the structure, and in Glean.ly we use it so that it's not possible to add recommendations that are not linked to insights.

[00:18:41] Tina Ličková: Could you maybe explain more how you do it?

[00:18:43] Larissa Schwedersky: Yeah. So it’s very interesting because I had the same question and this is a very common question that we have with our customers.

They always ask if it’s necessary to have the insight, to think about insights.

And I had the same feeling in the beginning, and then Daniel explained to me: just try to add insights, because it will make you think about the process and think about the whys. When I was trying Glean.ly for the first time, Daniel suggested that I just add some things there on the platform and think about the whole process.

And I found it very interesting and very important as well, because it makes you think. It makes you think about the whole process of the research. And it's very useful, because sometimes you don't perceive it, but you are a little bit lazy and you already know the recommendations that you want to make.

But when you are thinking about the insights, it makes you think more deeply about that problem.

[00:19:52] Tina Ličková: I really like what you're saying about looking for the why: why is it happening? It's definitely something that I sometimes miss. Okay, but let's stop here. Let's be mindful of why things are happening.

But on the other side, to contradict myself: it could play a very good role in triggering more discussions in the team and in trying to understand the users, the humans using the products, but it's also creating more and more assumptions. How do you avoid just running in circles, I would say?

What would be your recommendation there?

[00:20:29] Larissa Schwedersky: I think I would go for the same, because it's not a problem to run in circles; you don't need to have a finish line for the research. Of course, you have recommendations and you work on them, but then you can test these recommendations again, and it's an ongoing process.

And you are always finding new stuff about that. So I don’t think it’s really a problem.

[00:20:49] Daniel Pidcock: I think the other thing to really consider is that one of our main focuses through this is always the decision maker. Obviously, as a process, it was designed to solve the problem of scaling knowledge.

And the solution was making a really useful synthesis process. There are a lot of people who use it just for synthesis; they've got no interest in creating a repository or greater knowledge base. But the reason it's effective for that is, one, it makes you think about it a little bit deeper, but also the end point is that we have a recommendation.

It’s really clear how that’s connected to how the thought process works. I know this isn’t useful for podcasts, but I’m going to share my screen.

[00:21:32] Tina Ličková:

🐝 If you want to check out the video, Daniel is just showing us, please visit our social media and also the web page of this podcast on uxtweak.com/podcast/.

[00:21:43] Daniel Pidcock: For those that can't see the screen: what we're looking at here is what we would refer to as a molecule, an atomic molecule. Some people might refer to it as a nugget, because of the great work that Tomer Sharon's done on the subject. But basically, what we're looking at is a knowledge molecule from the perspective of a recommendation.

So we've got a recommendation here, and connected to it we can see two insights that are connected positively with these green lines. These are the thinking that supports it, the reason we're making this recommendation. We can also connect things negatively as well. These are the reasons we might consider not doing this, or at least things we need to take into account.

And then each one of these is also connected onwards. So we have the recommendation: this is what we want someone to decide on. This is the thinking behind it, the reason we're making that recommendation. And here's the evidence for those insights. And once again, these are grouped by experiment, by source.

So facts always belong to an experiment, because how you learn something is intrinsically connected to what you learn, right? You can't separate those two at all. I can see how I've got a moderated user test with customers, with things that support these insights and things that go against them as well.

We use the term disproves, even though it's quite final. I've also got A/B tests and all sorts of different types of sources that will help me gain confidence in whether to trust these insights or not. So as a decision maker, I can say, I can understand why you're thinking this. I can also see, holistically, why I might choose not to.

And I can also gain a confidence level around this. It might be that I don't have enough confidence. I might say to the researcher, sorry, you'll have to go back and do some more. I've got gaps in the research, or I just don't have enough of it; I'm not confident enough at this stage.

So this relates to your original question: yes, we could always be creating assertions. We could always be chasing our tail, looking for more evidence. But there's going to be a stage where we're confident enough to make decisions on the subject. And then, of course, for the decision here in this example, which is to offer free shipping, the answer is: we've got enough confidence to do an A/B test now.

So we test that in the UK, and we can see: did that work? Yes, it worked. It cost us some money, but we made money from it. So we're confident, and we'd probably be really happy to roll that out permanently in the UK. And now it's given us the confidence to test it in France and Italy, et cetera.

And we might find, look, it does work in the UK and France, but it doesn't work in Italy for some reason. Now, why is that? Is there something cultural in Italy that I don't understand as a British person? Okay, let's speak to some customers, gather some evidence about that, and I can bring that to bear.

It helps build this node of knowledge around free shipping in different countries. It might also help me learn other things I wasn't expecting, and that creates new insights; maybe there are different ways of approaching selling products in Italy, for example. Does that make sense?
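
💡 Editor's note: the decision-maker's view Daniel walks through, a recommendation with its supporting and opposing insights and the evidence behind each, could hypothetically be assembled by walking such a graph outward from the recommendation. Every name below is invented for illustration and implies nothing about how Glean.ly is actually built.

```typescript
// Hypothetical sketch: assemble a "molecule" view from one recommendation by
// walking recommendation <- insights <- facts, keeping each edge's polarity so
// supporting and opposing evidence stay visible side by side.
interface Node { id: string; kind: string; text: string; }
interface Edge { from: string; to: string; polarity: "supports" | "contradicts"; }

function moleculeFor(recId: string, nodes: Node[], edges: Edge[]) {
  const byId = new Map<string, Node>();
  for (const n of nodes) byId.set(n.id, n);

  return edges
    .filter((e) => e.to === recId) // insights bearing on this recommendation
    .map((insightEdge) => ({
      insight: byId.get(insightEdge.from),
      polarity: insightEdge.polarity,
      evidence: edges
        .filter((e) => e.to === insightEdge.from) // facts behind that insight
        .map((factEdge) => ({
          fact: byId.get(factEdge.from),
          polarity: factEdge.polarity,
        })),
    }));
}
```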

[00:24:49] Tina Ličková: It makes sense.

I was struggling a little bit when we started talking; I thought, another framework. I'm sometimes really tired of all the frameworks in research: oh, it's the same thing that market research showed us a hundred years ago. But as I see it now, it's facts, where the experiments are listed, feeding into the insights, and then a final recommendation is made.

And I see that you are saying, okay, three experiments, and then the insight is built out of that. And I see the framework as very beneficial, especially for what's called continuous research. Because it actually goes back, and this is what I'm trying to do sometimes with my clients if I have a chance, or in the companies where I was working: doing research also on a meta level. Not just taking one study and being like, oh, we have this, but going back and forth and really saying, okay, these three studies tell us

what we have to do now, or this is the insight. And this is what I like about it: the sources are from different studies, not just one, and it's not just sucked out of the finger, sorry for the expression. Yeah.

[00:25:58] Daniel Pidcock: Yeah, and that's absolutely one of the big powers: if I've got some facts, I've got an idea of what I think that insight is.

I might be able to find other research from a different part of the company that suddenly tells us whether that's true or not. I've actually got an example here based on a real use case, if I can remember it. A long time ago now, I was speaking to one of our customers and they had a question. They'd done a survey and learned this information, and, I've changed the details, but it was: our customers have told us that they prefer green clothing above other colors. And they said, so is that a fact or an insight? And I said, I believe it's a fact, because it's a fact that the survey told us that; we might find other evidence elsewhere that tells us it isn't true. So we've got the survey here: 64 percent of respondents claim they prefer green clothing.

We may also go and look at what the national standard is. Is that unusually high? We can see it is. But okay, is that borne out by actual data? Luckily they had access to their sales data, and they were able to see that, yeah, actually 44 percent of the items they sold fall into the green spectrum, which is so much higher than any other type of clothing they're selling.

Okay, so we've got two bits of data here. That's starting to give us confidence that there's a thing going on, right? We don't know why yet, because it's all quantitative; we haven't done anything qualitative. We need to go and speak to some customers and find out why that is. In the meantime, do an A/B test on the homepage and swap out the picture.

Put in some lovely green clothing, because we might sell some more. I don't remember what the result of that was exactly, but I know they had a significant uplift as a result of that change. They still don't understand why this is. They can make assertions, perhaps:

our branding is particularly attractive to people who love the color green. Their logo wasn't green, so I don't know whether it could be that. Maybe there's something about their marketing, something about the demographics they naturally attract; we have no idea at this stage. But we can start making insights, and of course that's going to determine how we can test those insights as well.

I think it might be to do with our branding. Okay, let's create a recommendation that we can test around that. Maybe we do some color variants or go to a branding specialist company. I don't know, but it starts asking those questions and allows us to start thinking about: why is this? And also, what's the effect? Is this a problem because we're excluding all those people who have red clothes and blue clothes?

Or actually, have we got a niche here, something we can really focus in on, and become like the green clothing people? We can start asking more interesting questions as a result of this.

[00:28:36] Tina Ličková: It would also be interesting to know how many people had color blindness or color insensitivity, to know how people perceive the colors. But that's just me digging.

Okay.

[00:28:47] Daniel Pidcock: But yeah, suddenly there's a seam of gold there that we can start mining.

[00:28:51] Tina Ličková: Definitely.

[00:28:52] Daniel Pidcock: Whereas before it could have been: oh, that was interesting, we didn't expect to find that in the survey, moving on. By moving it into facts and calling out, okay, what is the cause of this and what's the effect of that?

It raises those questions in a really effective manner, to start turning it into something useful, potentially. Not always, of course; not everything you find is going to be useful, but yeah.

[00:29:12] Tina Ličková: One of the things I'm also interested in, and it's not enough to just say yes or no. Larissa, I'm also interested in your opinion about it, because it doesn't seem like a linear process when it comes to atomic research.

It's back and forth, and that could be chaotic. How is it for somebody new coming to atomic research to actually understand this little piece of chaos, which then gives structure but is still not linear?

[00:29:40] Larissa Schwedersky: Yeah, for me, it's very common to do research in a non-linear way.

So yeah, I think it makes more sense, because if you have a fixed structure, you have to think within it and you can't get out of it. So for me, it makes more sense when it's not linear.

[00:29:58] Daniel Pidcock: One of the things we've found is that there are lots of researchers who use this just as a synthesis process, as I mentioned earlier. But probably the organizations that get the most value out of this are the ones with lots of non-researchers doing research.

And there is a small learning curve that non-researchers have to get over to understand it, but it's quite a low wall to jump. What it gives them is a really good, flexible framework to work within. It makes it easy to differentiate what they've learned from what they believe, for example, gives them confidence in what they're learning, and lets them reuse and benefit from other people's knowledge as well, and bring things together really well.

And I suppose it also encourages retesting. I think that's a benefit of it being non-linear: it is a circle, and I think that's one of the positives it encourages. It's not a case of: we've learned, we made a decision, move on and forget about it. It's a case of: yeah, we need to test that and check that.

And we're always learning as we go. I think that's actually quite natural; it's quite human. And one of the mistakes I made in the early days, and it's probably very clear from the term, is that I talked about UX research. I assumed this was a user research problem. It isn't. It's a knowledge problem, and knowledge affects every part of an organization.

I think very much in digital terms, so: the developers on our team are operating with knowledge. When they start a project, they're researching. It might be the code, or approaches and different architectures, rather than user needs. But even being aware of why we're doing this is a really powerful thing; it lets them bring "this is possible" to the table and be involved in those discussions. One of the questions you asked in the preparation, I think, was: at what point does a researcher stop? Do they just do the facts, or do they go all the way through to recommendations? And the answer is, it depends on the organization.

Sometimes they are delivering just the facts, and other people are picking up insights or picking up recommendations. But it works best when everybody's collaborating, everyone's bringing knowledge together, and everyone's got the opportunity to go: I've got an idea of why that might be.

I’ve got an inkling, or I’ve got some evidence elsewhere. I’ve already solved that problem with a different project. And then conversations start happening, innovation starts happening, and it’s a beautiful thing.

[00:32:21] Tina Ličková: We are getting into the recommendation space, because we speak to the customers a lot, but the recommendation should definitely be, as you said,

something that everybody's working on. But how much, do you both think, should a researcher go into ideation, either in a facilitation role or even with their own ideas? Because a recommendation is also a different thing than an idea, and I don't want to hang too much on the terminology. That's where I'm going, probably, hopefully.

[00:32:52] Larissa Schwedersky: This is interesting, because it's very related to what Daniel already said. We don't think that researchers always have to create the facts, then synthesize the insights, and then make recommendations. It doesn't always need to be the case. But we also think that researchers can help with ideation as well.

I used to do it a lot at the company I worked at before Glean.ly. So, for example, sometimes it could be that researchers only provide the facts, and maybe designers or project managers synthesize insights and make recommendations. Or it might be that researchers deliver the facts and also create insights, and someone else makes recommendations.

Yeah, this is very open, and our advice is that any of those could be the case, but it shouldn't be any one person's sole responsibility. Anyone should be able to create facts, to deliver evidence. Anyone should be able to come up with ideas in the form of insights, for example. And anyone should be able to make recommendations as well.

So this is what I believe, and this is related to the research democratization that I talked about before. But another important thing to highlight is that there will probably be somebody who has the authority to approve these recommendations.

[00:34:16] Tina Ličková: My final question would be: where can people follow you both? Where can people find you, so they can maybe try it out? If you are organizing any workshops or trainings, how can people approach that and sign up? Because this is something that probably needs a little practice.

[00:34:36] Larissa Schwedersky: We are on LinkedIn, and you can follow us as Glean.ly, and as Larissa and Daniel Pidcock as well.

And we are always posting there, because once a week we do a public webinar to present atomic research and to talk about Glean.ly and its features. Our intention is to always do that. So just follow us on LinkedIn and you'll see the next ones, and you can just join.

[00:35:02] Daniel Pidcock: You could also do a search on Google or YouTube and you'll see some talks and some articles from us as well. The process of atomic research can probably be done with most tools that people have already, but we have built a specific tool for it; there were certain things we wanted to be able to do that just weren't possible using things like Airtable and such.

And if anyone fancies giving that a go, that's Glean.ly, and there's a 30-day free trial on there.

[00:35:31] Tina Ličková: 

Great. Thank you. Obrigada for your energy. Yeah, that is a wrap.

Thank you for listening to UX Research Geeks. If you liked this episode, don’t forget to share it with your friends, leave a review on your favorite podcast platform, and subscribe to stay updated when a new episode comes out. 

💡 This podcast was brought to you by UXtweak, an all-in-one UX research software.

