The Optimal Path

Mastering product decisions through risk and reversibility with Dalia El-Shimy | Wise

Episode Summary

Dalia El-Shimy, Director of UX Research at Wise, presents a framework for navigating product decision-making with confidence—from daily product decisions to the most complex and high-risk scenarios. Dalia shares how to classify decisions based on their level of risk and reversibility, along with questions and tactics to help determine the type of research or insights needed to better inform those decisions.

Episode Notes

About Dalia:

Dalia is an engineer-turned-academic-turned-user-researcher. She is the Director of UX Research at Wise and the former Head of UX Research at Miro, where she helped build the team and discipline from the ground up. She started her career as a human-computer interaction researcher, then joined Shopify, where she helped scale the UX Research practice from a few researchers to a team of 60+ strong and co-led the craft across the entire organization. When she’s not busy asking too many questions, she enjoys baking, eating, reading, and obsessing over all things David Bowie.


Connect with Dalia:

You can follow Dalia on LinkedIn or check out her website.

Follow Maze on Social Media:

To get notified when new episodes air, subscribe at maze.co/podcast.

See you next time!

Episode Transcription

Dalia El-Shimy:

Just because you're talking to users doesn't necessarily mean that you never have to conduct actual rigorous formal research. And just because you have someone on your team who can conduct formal user research doesn't mean that it absolves everybody else on the team from talking to users. And so if I bring this back to the framework of decision-making and you're looking at a decision that is high in risk and low in reversibility and you need to gather more information, you probably want to conduct a study or delve deeper with more formal user research.

Ash Oliver:

Today on The Optimal Path, we're discussing a framework for navigating product decisions with confidence and the types of questions and tactics to use in even the most complex and riskiest scenarios. I'm Ash Oliver, and this is The Optimal Path, a podcast about user research and product decision making brought to you by Maze. 

Our guest is Dalia El-Shimy. Dalia is an engineer turned academic turned user researcher. She started her career as a human computer interaction researcher before joining Shopify where she helped scale the UX research practice across the entire organization. She was formerly the head of UX research at Miro where she helped build the team and discipline from the ground up. Dalia is currently the director of UX Research at Wise. I'm excited to have you on the podcast, Dalia, thanks so much for being here.

Dalia El-Shimy:

Thanks, Ash. I'm really excited to be here myself.

Ash Oliver:

Today we're going to be talking about decision-making specifically how to know you're focusing on the most important decisions and how to use evidence to bolster more confidence when making decisions. And as we know, every day teams are faced with decisions that can make or break their future success. Yet despite the wealth of literature and discussion on decision making, many teams really struggle to know how to systematically and repeatedly make the best choices. I thought we could start by talking about why you think that is. What do you think poses the biggest obstacle to enhancing decision making confidence?

Dalia El-Shimy:

I would say decision making is hard for a lot of people. If I were to break it down to what I think is the fundamental thing that's happening, I think there's a bit of a fear of regret. People feel like as soon as they make a decision and they lock into a particular direction, if that direction or that decision turns out to be wrong, then there will be consequences. And it could be as simple as being in a restaurant and worrying about ordering the wrong thing and worrying that you're not going to have a great experience, even though that's not necessarily going to have huge consequences for your life, but there's that fear of regret.

All the way to, in a professional context where you're making a decision and you have an entire team that's going to be mobilized towards executing that decision and then worrying on the other end that you might've gotten it wrong and it could have some pretty bad consequences and you'll come to regret that decision. I think that that's a bit of a fundamental fear there that underlies sometimes our hesitation towards making decisions quickly and making them with confidence.

Ash Oliver:

Everything from those who might struggle with analysis paralysis all the way through the sheer fear that you describe around the outcome and the consequences. Let's ground it a little bit in terms of the decision making that's happening in teams. You gave a spectacular presentation at WebExpo where you made mention of Thomas Davenport. I'm wondering if you could maybe elaborate on what he calls decision disorder.

Dalia El-Shimy:

What Thomas Davenport did is he looked at a number of businesses, ones that were doing well, ones that were not doing so great, or that eventually came undone. And one of the factors that he identified as being a recurring theme in businesses that didn't end up doing so well is the inability to make decisions at scale, or this decision disorder. And where this comes from is that I think traditionally, back when companies were a lot more hierarchical, there was this notion that there were people at the top, executives, senior leaders, whatever it might look like, who were in charge of making decisions. And then these decisions were disseminated top down, and people just had to execute on these decisions and make sure the work happened. But I think as far back as the 80s, we started seeing a little bit of a shift in that with the rise of knowledge workers.

And at that point it became clear that the people who had a lot of the context needed to make decisions weren't necessarily the executives sitting at the top. They obviously have a lot of context, and they may have context that other workers at the company don't necessarily have. But the reality is that if you're working within a particular problem space and you're an expert in that problem space and you have the knowledge that's relevant to your particular discipline, you may have a type of knowledge that they don't necessarily have. And so increasingly people started realizing that sometimes decisions need to lie in the hands of the people who are closer to the problem space. And that sounds all well and nice, but the problem is that people who are individual contributors working on teams day to day, maybe even managers, don't have the same level of, shall we say, experience with making important decisions the way that executives, founders, senior leaders do.

We start to see this tension between, well, it's not just senior leaders who need to make top-down decisions, coupled with the fact that making decisions is really difficult for people who aren't trained in it. And one of the things Davenport believes is that learning how to make good decisions is a skill that we can all learn and train ourselves in, and that companies should treat it as any other skill they want to upskill their workers in, as opposed to expecting people to just magically have the ability to make difficult decisions. And even though we have this realization that people need to get better at making decisions, I don't think I have seen it put into practice systematically at a lot of different companies, where people are really focusing on how to upskill everybody at making really good decisions as opposed to just senior leaders, or maybe even PMs.

Ash Oliver:

It reminds me of the leadership saying about hiring smart people and enabling them to do great work. But to your point, I think it comes down to this confluence of those at the top who may have more of a vantage point for making these kinds of critical decisions and the people who are closer to the context, and the merging of those two. But we all make countless decisions every day, from really generic to very unique decisions, simple to complex. What characteristics do you think define an important decision?

Dalia El-Shimy:

I think for me, there's a couple of different parameters that I tend to think about. And those aren't necessarily things that I've invented myself, but rather just things I've paid attention to over the course of my career and trying to understand what makes certain decisions different from others. And to put that a little bit into context of where my interest in thinking about decision-making and paying attention to how decisions are made came from is that being a researcher, the traditional wisdom around the role of a researcher is that we don't necessarily make decisions, but we get to inform decision-making. And over the years, I feel like I have sat in countless meetings where I have felt almost an urge or a desire to make the decision, but knew that I wasn't necessarily the decision-maker in the room and have watched the decision-maker in the room hesitate to make the decision and would just sit there and wonder why, what's the hesitation?

The decision is clear, let's just make the decision, let's move forward. And so I've reflected a lot on that and tried to understand why it is that sometimes the decision or the answer can seem really clear to me or somebody else in the room, and yet someone else who's in charge of making the decision is hesitant to do it or doesn't want to make the decision. And one of the things, which maybe some of our listeners have already heard about, is what Bezos, the former CEO of Amazon, talks about in terms of type one and type two decisions. Type two decisions are what he describes as reversible, and he uses the analogy of walking through a door and just being able to turn around and walk back out. And type one decisions are decisions that you can't really reverse; once you've made the decision, you're locked into that direction.

And once I started putting that sort of characteristic in my mind, I realized that it becomes a really interesting way to think about why there are times when I'm really comfortable with making a decision and why at other times I would say, okay, maybe we need a bit more time or we need to think about this a little more deeply. And then the other thing that informed my thinking was a PM that I used to work with at Shopify, Brandon Chu, who has written a lot about decision making. And he says that how important a decision is should dictate how much time you spend actually making that decision. He has a lot more writing around that, but one of the things that I feel he really gets at there is the level of consequence that a decision may have, the level of risk that may be attached to a particular decision.

And so I started thinking about these two axes, if you will: reversibility, so low reversibility, high reversibility, and then risk, low risk, high risk, as parameters to take into consideration whenever you're looking at a decision, and using those to classify the type of decision you're looking at. And once you have more clarity on what type of decision you're actually looking to make, I think that can help inform how much time you want to spend on it, how you should go about making the decision, what you need to put in place to make sure you've made the right decision, and so on.

Ash Oliver:

Love the background on what's informed this. I'd love to spend a little bit more time really looking at the framework in more detail because I think this would be really helpful. Can you go through the axes again and the two by two, and specifically how does this show up in your day-to-day? Do you have a mental image of this framework that is just running in the background, or do you really spend the time to plot out what decision is in front of you on this matrix?

Dalia El-Shimy:

The matrix itself, again, if you imagine two axes, one around reversibility and one around risk, gives you essentially four quadrants: high risk and high reversibility, high risk and low reversibility, low risk and high reversibility, and low risk and low reversibility. I know that's a lot of different terms, but if we break them down one by one, you start to get a little bit more clarity. For example, if you have a decision that is low risk and highly reversible, those tend to be the kinds of decisions where I'm always itching to just make the decision. People will come to me or come to somebody on my team and ask for research, "We want to test this thing or we want to gather more feedback on this thing before we ship it."

And this is where, if I look at the situation, I'm like, "Okay, well, you just want to change some copy and you can put that out." The risk might not necessarily be that high. It's an easily reversible decision, especially if the difference between the old copy and the new copy is not so big, so I think we should just be able to go ahead and ship that. And I think that scares teams a lot, because to a certain extent, and not necessarily in a bad way, we've been indoctrinated to believe that gathering research and gathering data is a really important thing to do before we make decisions. But if we're going to treat all decisions the same and we're going to stop and gather more information on every single decision we need to make, I don't think we'd ever ship anything.

I also think it does a disservice to the type of expertise and confidence that we should be developing in our own practices as time goes on. A content designer should be expert enough to make a call on changing the copy on a call to action or on part of a page without necessarily feeling the need to go ahead and do a whole test on that every single time. And if we do live in a world where we have them convinced that they need to test every single change, then I think we're eroding their confidence in their own practice, essentially. I think it's really important when these situations come up to just be comfortable with making that leap and going ahead and making the decision. And I would contrast that, for example, with the situation where you have high reversibility and high risk. So going back again to the example of wanting to change the copy on a CTA, that change can still be highly reversible if it's just a matter of going into the front end and changing the copy.

But if, for example, the variation between the previous term and the new one is really big, and maybe it's a really critical call to action, a lot of people need to interact with it, and if they don't find it, then they don't get to start on some sort of critical flow, then I think the risk here ends up being much higher. And so what you want to do here, rather than taking a ton of time to preempt the risk ahead of time, given that the change is reversible, is set up tripwires. Look, for example, at particular metrics, at particular numbers or interactions or whatever you can keep track of after you've made the change, in order to see if anything's gone sideways, or if you're seeing any effects that you maybe had not anticipated, or if your numbers are dropping below a certain threshold, and this will give you an indication that maybe you made the wrong decision.

Again, if the decision is reversible, what this approach does is it encourages you to still try to move fast, to still make the decision, but at the same time put the right parameters in place so you can keep track of things going wrong and make sure that the decision did not have any detrimental effects. Then there are decisions that are low risk but also low in reversibility. Decisions that tend to fall in that bucket for me are often along the lines of a feature that isn't super widely used, that maybe a small proportion of your users are relying on, but you need to sunset it for whatever reason. A lot of times we'll see engineering teams looking to sunset features because they're trying to clean up the code, or they're trying to refactor the code, or they're trying to migrate in a way that is not very reversible, like they're about to invest a lot of time in some technical work that would be very difficult to reverse.

But you want to make sure that if that feature is sunsetted or removed, you're not necessarily going to annoy that small group of users who are highly reliant on it. And this is where I think preparing some workarounds becomes really important. Are there other ways those users can achieve a similar outcome using other features in your system? Can you communicate the change to them ahead of time? Can you help them prepare for anything they need to do if they won't have access to that feature down the line? Again, it's not something that I think requires a ton of testing and research, but rather, what do you need to do to mitigate any adverse effects, even if they're small, of removing that feature? Because once you make that decision, you're not necessarily going to be able to easily revert it, but at least you're mitigating some of the potential consequences.

And then finally, the most tricky quadrant to find yourself in is when you have a decision that's high in risk, but also low in reversibility, so the consequences of getting that decision wrong might be pretty high, but also once you've made the decision, it's going to be really hard for you to walk back that decision. And this is where I think you need to spend the most time, and this is going back to what Brandon Chu says about deciding how important a decision is should essentially influence how much time you're willing to spend on that decision. Those are the decisions that I think are really worth spending more time on to make sure you get them right.
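The four quadrants and the tactic attached to each can be summarized in a small sketch. This is a hypothetical illustration of the framework as discussed, not anything from the episode itself; all names are invented:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A decision classified along the two axes of the matrix."""
    description: str
    high_risk: bool
    reversible: bool

def recommend(decision: Decision) -> str:
    """Map a decision's quadrant to the tactic described in the episode."""
    if not decision.high_risk and decision.reversible:
        # Low risk, high reversibility: don't over-research, just decide.
        return "Just decide and ship; trust the team's expertise."
    if decision.high_risk and decision.reversible:
        # High risk, high reversibility: move fast, but watch for fallout.
        return "Ship, but set up tripwires (metrics and thresholds) to catch problems."
    if not decision.high_risk and not decision.reversible:
        # Low risk, low reversibility: mitigate impact before committing.
        return "Decide, but prepare workarounds and communicate the change ahead of time."
    # High risk, low reversibility: the quadrant worth the most time.
    return "Slow down: gather more information (research, analytics, competitive analysis)."

copy_tweak = Decision("Minor copy change on a page", high_risk=False, reversible=True)
print(recommend(copy_tweak))
```

The point of encoding it this way is only that each quadrant maps to a distinct default action, which is what makes the matrix usable as a quick team exercise on a whiteboard.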

Ash Oliver:

I absolutely love this matrix, and it goes back to one of the points you made previously around the fear and the consequences. I feel like this is a real tool for discerning whether fear is present in every one of those quadrants simply because there's a decision to make, versus what level of fear really should be tolerated for each type of decision. Especially with the actions you have for high reversibility and high risk, preparing tripwires, or gathering more info when you're in high risk and low reversibility, I think that really helps you act on what you can do to build more of that confidence. How do you use this in your day-to-day, though?

Dalia El-Shimy:

For me personally, I would say that I have gut reactions to decisions, but I want to unpack a little bit what it means to have a gut reaction. I don't think having a gut reaction is some sort of magical thing where you're just going with however you feel and letting that dictate how you approach your work. A gut reaction to me is really a quick, physical, knee-jerk reaction to something your brain knows to be true but hasn't quite arrived at why it's true yet. It's the physical reaction that preempts the logical reaction that you're going to have. And so I will have immediate responses to conversations I'm hearing, decisions I'm hearing, and feel like I know in my gut, and then I will take time to parse why I feel this way.

Why do I think we should pause with this decision? Why do I think we need to move forward with this decision? The more I think about it, it almost always comes down to pattern matching: I've seen this situation before, and I've seen it over and over again in different contexts, so I'm able to recognize some of the patterns that will tell me, okay, I think we should go ahead with this, versus, we should gather more information. But when I'm working with researchers on my team for whom this way of thinking may be new or still developing, essentially that gut feeling of being able to discern immediately what's going on with the decision that needs to be made, I will share the matrix with them as is, and I will encourage them to actually sit with their teams and work out together where they think some of the decisions they're trying to make fall.

And the context where this usually shows up for me is a researcher will come to me and say, "Hey, someone on my team is asking me to do this research and I don't necessarily agree, but I don't know how to articulate that to the team without coming across like I'm not being a team player." Or the reverse of that where I'll have someone saying, "Hey, my team really wants to move forward with this decision, but I think that we need to hit the brakes a little bit and pause and dig a little bit deeper before we decide how we want to move forward." And this is where I literally share with them the framework, put it on some sort of digital whiteboard and say, "Hey, take this back to your team, talk through this framework with them, talk about where you think the decisions might fall."

And then that leads to a discussion where people start to understand a little bit where the researcher is coming from and then they start to think about how they're thinking about the decision and articulate it a little bit differently and then they can move forward from there. But I think when you've done this often enough, it becomes a pattern that you can recognize when you're in the rooms and when you're in discussions with people.

Ash Oliver:

I love the utility. Thinking about research, and about the low-reversibility, high-risk quadrant specifically, is this where specialized researchers, in your mind, should really focus their time and efforts?

Dalia El-Shimy:

There's a lot of different parts to this question. When it comes to research, I keep calling this the worst kept secret and I'm now sharing it on a podcast, but this may sound counterintuitive coming from someone whose entire career has pretty much been in research: I default to a place of no. It's like, I don't think we need to do the research, please convince me why we need to do the research. And the reason I come from this place is that I don't want people to rely on research as a crutch. I don't want them to believe that we need to research every little thing before we make a decision. Because, going back to my point earlier, I think that erodes people's confidence, I think it slows people down, and I think it does a disservice to research as a discipline.

I'm much more interested in starting from a place of: I don't think we need this research, help me make the case for why we believe this research is important. And then once we've determined that the research is worth doing, it becomes a matter of where we're going to find the capacity or the bandwidth to do this research, and who's going to do it. I think it's fair to say that a lot of research teams across the industry right now are understaffed; people are stretched super thin. Couple that with the rise of research democratization and more non-researchers taking on their own research, and regardless of where you stand in this debate, the reality is that people are going to do it. Non-researchers are going to find ways to do their own research, so then it becomes a matter of, okay, how do we decide what work a specialized or dedicated or skilled researcher, however you want to call it, versus a non-researcher can take on?

And to me, when I think about that, I think about what's the complexity of the research that needs to be done and coupled with who is uniquely positioned to be able to do this research as well as what's the risk of getting this research wrong? And if I think that the research is high enough in complexity, it would be difficult for a non-researcher to take it on and there's perhaps enough nuance in there or possibility for it to go wrong and people not realizing that the research has gone wrong, then I'd rather put a researcher on it. When it comes to this sort of quadrant of high risk and low reversibility where we have to gather more information, it's not an immediate, yes this goes to a researcher straight away. It's more a matter of, okay, we need to do something more here.

And then it becomes a question of who is the best person to take this on. And I'll also add that this idea of gathering more information is not strictly a user research activity. It can be, but gathering more information can also mean working with analytics. Can we invest more time in understanding the problem quantitatively? It could be a matter of doing social media listening or looking at what our customer support contacts are telling us. It could be a competitive analysis as well, if it's a problem that is perhaps solved elsewhere, and it's a matter of looking at how others are doing it. It's simply a matter of pumping the brakes and saying, let's just figure out what we need to learn in order to be able to make this decision, and whether user research is a conduit for that. And if so, who's the best person to take it on? But there are probably other activities that we can also do in order to gather more information and feel more confident making the decision beyond user research.

Ash Oliver:

I think that's an important distinction, and it's why I asked, so that it doesn't just become a situation where research lives in that quadrant. It sounds like gathering more info isn't a default yes to research, but a default to finding the evidence, and that can come from lots of other places. Have you found that sometimes, when your teams are asked to put forth this evidence and you're suggesting that new research doesn't need to be done, some of the evidence that needs to be provided is more on the emotional side, addressing some of the fear? Not to say that should all fall on the researcher's shoulders, but what do you think about that? Have you ever been in those scenarios?

Dalia El-Shimy:

I think you're probably right in saying that there's an emotional aspect to it, especially going back to what we were talking about earlier in terms of people having a fear of regret; there's a lot of emotionality that goes into making decisions, for sure. In my day-to-day, though, what I tend to leverage in order to get through it is less an appeal to that emotion or a way to make people feel more comfortable. And I'm not saying that's the wrong approach. I think it's absolutely needed sometimes, and maybe I should reflect on when I could be doing that more often, but I tend to default more to the logic and rationale side of things. Can I frame a solid argument to help them understand why the research may or may not be needed? Can I take the previous evidence that we have?

Can I use it to reach some sort of logical conclusion, or demonstrate to them that, look, when I put all of these different pieces of evidence together, this is what the argument looks like, this is what the narrative looks like, and this is the conclusion that we reach, and therefore we don't necessarily need the research? And if people disagree with it, then it's an invitation to poke holes in your own arguments, which is also a healthy exercise to go through. I could be wrong about it, and if I try to frame what I have in what I think is the clearest rationale and people still find fault with that rationale, then I think it's healthy for me to reflect on it that way.

And I think sometimes there's just a point at which we have to disagree and commit, and I know that can be uncomfortable for a lot of people, but I'm generally not a super confrontational person and I think that there are ways to disagree and commit where it's not a standoff, we can all still be on the same page even if we have different views of ideally how a situation could have gone.

Ash Oliver:

It's reminding me of some of the aspects that you talked about in the democratization effort and some of the conflicts that can sometimes arise when we have non-specialized researchers also talking to customers or users or informing their decisions through their own collection of evidence and the difference when there's maybe not a need to do net new research, but there's the level of rigor that's involved. When you're thinking about all of these streams of insights and information and evidence that can be pouring into teams, how do you help balance or integrate the insights that are coming from some of those informal user conversations into the formal decision-making process?

Dalia El-Shimy:

I think it's a matter of how confident you can be in what these different streams of information are saying. How much weight do you ascribe to something that is the result of rigorous qualitative research versus someone saying, "I've talked to a few users"? Can you weight those two sources equally? And that's something I've had to increasingly think about, because there's been a growing discourse in the industry for some time now around this idea of talking to users. I listen to a lot of podcasts and read a lot of LinkedIn posts and a lot of different blogs where especially non-researchers, especially PMs sometimes, are saying, "Hey, talk to users, talk to users, talk to users, talk to customers, call up your customers." People are starting to use those terms interchangeably with the idea of conducting research.

And I think it's worthwhile just hitting pause and thinking about how those two activities are different. The way that I think about it is that one is not a substitute for the other. For example, just because you're talking to users doesn't necessarily mean that you never have to conduct actual rigorous formal research. And just because you have someone on your team who can conduct formal user research doesn't mean that it absolves everybody else on the team from talking to users. If both of those things need to coexist and aren't substitutes for one another, then how do we differentiate between them? The way that I've talked about it with our product teams at Wise is that there are differences and similarities, and it's important to understand what each of those things can help you with. Both conducting user research, especially qualitative user research where you are in direct contact with users, and having informal conversations with users can be really good ways to hear from users directly, to hear from them in their own words, to listen to them.

And when you talk to users, you might be able to pick up on some signals that could be interesting to you, which you might then want to flesh out later on with additional research. But talking to users won't necessarily produce fully fledged insights. There are a number of reasons for that, primarily that you don't need representative sampling in order to just talk to users, you don't apply enough rigor to make sure that your methods and your findings are replicable, and you don't necessarily go through a process of analysis and synthesis. All of these things are really important parts of designing a more formal user research study. With that, what I tell teams is: if you are just interested in connecting with users, you want to hear from them directly in their own words, you want to get out of your own headspace and listen to different perspectives, you want to start building empathy.

You want to maybe build relationships with certain customers that may be more valuable to you than others. Then by all means, going out and talking to those users is important and it's something you should absolutely do. But if you are looking to investigate specific, well-defined questions, which usually requires a certain amount of rigor in order to get at the answer, then this is where conducting research comes in. You have to do that; you can't substitute it with just talking to users. And so if I bring this back to the framework of decision-making: if you're looking at a decision that is high in risk and low in reversibility and you need to gather more information, and the best way of getting at that information is qualitative, direct connection with users, you probably want to conduct a study as opposed to just talking with users. If all you have at that point are conversations with users, maybe use those to flesh out additional signals or knowledge gaps, things that you want to then assess or delve into deeper with more formal user research.

Ash Oliver:

I think that's so pointed, and it makes me think about the difference between data and learnings and insights, where talking to customers may be part of that data gathering, but it's not necessarily equivalent to insights. I want to go back to another point that you made earlier in regards to research informing decisions but researchers not necessarily being the ones to make the call. I think that's such a prevalent thing in the industry, and a lot of advice is encouraging researchers to lean in more to a consultant role, advising on the decision to be made, not just providing the results of a research study. What strategies have you found successful in building consensus among stakeholders when user research is trying to drive a specific decision or recommendation?

Dalia El-Shimy:

I think one thing that's important to clarify is that when I say making the decision versus informing the decision, I don't hold either of those in higher regard than the other. It's not like I think that disciplines that get to make the decision are inherently more powerful or more responsible than the disciplines that inform the decision-making. I think both are equally important. The disciplines that tend to make the decisions, which I usually think of as product, design, and engineering, do so because they are the ones who tend to work more on the execution side of things and make sure that the work is actually materializing and shipping every day. It makes sense to me that they have to make a lot of decisions as part of that, whereas other disciplines, primarily user research, market research, and data analytics, play an equally important role. But I think sometimes it's healthy to have that bit of separation, where we can provide a lot of the context that helps the day-to-day decision-makers but leave the decision in their realm.

And trust them to be able to make the best decisions based on the actual execution that they need to be doing day-to-day. And I also want to clarify that when I say people in those disciplines, like research or data or what have you, are informing the decision, that doesn't mean you shouldn't have an opinion about what the decision actually is. In fact, I often encourage people to have a strong opinion about what the decision actually is, and this might be a bit controversial, or as my research team at Miro would call it, a spicy take: I'm not a fan of "how might we"s. I'm not a fan of sitting on the sidelines and posing a question and being like, "How do we solve this thing?" I think sometimes we're almost shirking the responsibility a little bit by saying, "Oh, I'm just going to frame this insight as a question and let you work it out."

I think it puts us in a much stronger position if we're able to actually suggest what we believe should happen, but also frame that to the team as: this is what we think needs to be done, but we would love to still continue having the discussion and come together to an understanding of how we might actually execute on this thing. I also tend to approach this with the perspective that I'm not necessarily right just because I have a view on what the decision needs to be. And I want to operate in a high enough trust environment that even if engineering or product or design or whoever make a decision that's different, even though I wish the decision was one that reflected what I believed, I trust that they're making a decision that takes into account a lot of other constraints that I may not necessarily be thinking about. I need to come in and advocate as much as possible for what I believe is going to be the best thing for the user.

But the engineering decision-maker, even though they might want to make the decision that's best for the user, also has to take into account technical constraints. Product managers are taking into account potentially other dependencies, roadmaps, timelines, and so on. So I think it's important for me to come in with the perspective that represents the user, but also respect that that perspective is going to need to be balanced with a number of others as well. I also think that if we are not doing our part as researchers to come in strong with an opinion informed by our users and what they need, then our voice is going to be lost. Just because I'm coming in strong with the decision doesn't mean that I'm expecting them to do exactly what I say, but I am expecting that voice to be heard and taken into account equally with all of the other things they need to consider.

Ash Oliver:

So well said. In your presentation you talked about how good decision-making is circular and really needs a feedback loop as we gather information, analyze it, and work through the thinking. I think what you've described is that there's this myth that decision-making is linear. When we're coming out the other end and we've made the decision, there are so many times where we can apply hindsight. We can very clearly see what was the right decision and what maybe was the wrong decision. But I'm wondering if there are ways in which you've measured the impact of decisions that are made using this matrix and framework.

Dalia El-Shimy:

That's a good question. When you say measuring impact, I start to think about measuring the impact of research, and how far we've come in that conversation, and how I've changed my views on it in a lot of ways. I think impact in some cases can be really easy to measure, and in some cases it isn't. And I think it's okay that in some cases impact is going to be difficult to measure; we don't necessarily need to try to force that. When it comes to decision-making, yes, there can be situations for sure where you can look at metrics or clearly measurable things that changed as a result of the decision you made and are able to point to that. But I think it can be equally valuable to reflect on your decisions in other ways. I'm a big fan of retros, and I don't mean just retros that you run at the end of a sprint to think about what you're going to do differently in the next sprint.

I'm actually a fan of doing retros at the end of a project: wrapping up and looking back over a longer period of time at how it was executed, the decisions that were made along the way, what we learned, and what we would do differently moving forward. But I'm also a fan of day-to-day, ongoing reflection. Someone on my team, for example, recently ran a workshop that they didn't necessarily feel super happy about the outcomes of. So the next day they did their own personal mini retro, and then we chatted through it as a way of thinking about: okay, this is how it went down, these were the decisions they made, what do we need to communicate to the team at this point? What would we do differently moving forward to make sure that whatever workshops we run have the intended outcomes?

And I think similarly with myself, now that I'm in a position where a lot of my time is about leading a discipline as opposed to being involved 100% of my time in product decisions, I also tend to reflect on my own decisions. A lot of decisions when it comes to hiring, performance reviews, how I deal with people on my team, conversations I have with them: those are also critical decisions that any leader makes day to day. And I do spend the time, even if it's just with myself in my own brain, reflecting on the trajectories of people that I've worked with and thinking about things that went well, things that didn't go well, and how I would want to deal with certain situations differently moving forward.

What I'm trying to get at is that it's not just decisions related to product. It could be decisions related to management, it could be decisions related to your own growth. It could be decisions related to activities that you run day to day. And you can go anywhere from really formally trying to look at, did this move the needle in any measurable way? All the way to just the self-reflection, what did I do this time and what would I do differently moving forward? And how do I feel about the decision that I've made?

Ash Oliver:

I think we can get swept up in these types of things, so having that moment of reflection, whether it be in something like a retro or something like a journaling exercise, is so important. Before we transition into the last segment of the episode where I ask you some personal questions, I'd love to ask: what haven't I asked you in regards to this topic that you would like to close on?

Dalia El-Shimy:

That's a good question. I would say my TLDR is that decision-making can be a lot of fun. And I say this as a very risk-averse person. I'm the kind of person who freaks out in the ice cream line: am I going to make the wrong decision? Am I going to make the wrong decision?

Ash Oliver:

Dalia, that does not match who I know you to be. That is so surprising.

Dalia El-Shimy:

It happened to me yesterday. I was at the same ice cream store down my street that I always go to, and I'm like, "Am I going to get the flavor that I know is good, or am I going to take a risk?" And I'm having this moment of, oh my God, what if I make the wrong decision, when literally I can just walk back another day and get a different flavor? But once I started thinking more about decision-making, I started realizing that it's not as scary as I used to think. And I started realizing that I'm not as indecisive as I used to think, even though all of my experiences at ice cream stores and restaurant ordering had made me believe that I have a lot of hesitations around decision-making. I think my guidance is just to pay attention, to be aware of how you respond to situations in which you need to make decisions and what you are taking into account as you're making the decision.

How confident are you in making the decision? Are you avoiding making a decision? What would make you feel more comfortable making the decision? I think we can talk about frameworks and we can talk about gut feelings and we can talk about experience and we can talk about all of that. But the first step is really just being aware and just paying attention. And then if you start to see that there are patterns in the way you're making decisions and things that maybe you want to change about those patterns, then at least at that point you're actively thinking about it. But I think without paying attention, it's hard to know where to start in the first place.

Ash Oliver:

Absolutely great advice. All right, I want to transition into our series of hat trick questions. As a Canadian, hopefully you get that reference. These are a series of questions that we ask every guest just to get to know you a little bit more personally. My first question for you is: what's one thing you've done in your career that's helped you succeed that you think few other people do?

Dalia El-Shimy:

I don't necessarily know if this is something that few other people do, but sometimes I feel like I hear a lot of advice to the contrary. The first one is saying yes to a lot of things. I think especially when I was younger and starting out in my career, every opportunity felt like a good opportunity. If someone asked me to do something, I wanted to give it a shot, try it out, and see what I could do. And I still apply that thinking to a certain extent these days, where someone will ask me, "Hey, can you do me a favor, or do you want to do this thing?" My default is to want to say yes. I'm getting a bit better at learning when it's okay to say, "Hey, can I get back to you on this?"

Or, "Let me think about it," to take some time to reflect on whether it's actually going to be a good use of my time. But I have found that coming at things from a place of yes, I want to try new things, and yes, I want to take on new opportunities, has definitely helped me out a lot in my career. And the other one, which I think is somewhat related, is that I try to do my best to make time for just about anybody: taking the time to talk with people, to get to know them, to build the relationship. Even if I don't necessarily, or they don't necessarily, have an agenda upfront, I've been really surprised at how these things come around. And having strong relationships with people is something that has helped me tremendously in my career.

I know that I also come from a privileged place, or not a privileged place exactly, but I don't have children, for example, so it's not like my day and my routine and my structure are super rigid in a way where I'm responsible for other human beings and have to make time for them. I recognize that that's not necessarily something everybody can do. But if you find time here and there to say, hey, I'm going to earmark some time for those informal connections, for doing coffee chats with people, for saying yes to things that don't necessarily have a super clear connection to what I'm trying to achieve every day, I think that over time tends to pay off.

Ash Oliver:

My next question for you is what is the industry-related book that you've given or recommended the most and why?

Dalia El-Shimy:

I wish I had a more exciting answer to this. I don't know, maybe the listeners will find it exciting, but if I had to think of the book I've recommended more than any other, it would probably be Gamestorming, which is a book about workshop design. It's probably the book that I've gone back to the most in my career because it's essentially a reference book. I tend to do a lot of facilitation and run a lot of workshops, and I've read through the book back to back. If you haven't read it, it's a book that gets at the idea that workshops have a very specific design, where you open the world, you explore the world, and you close the world. Once you adhere to that structure, you can figure out what are good activities for opening the world, what are good activities for exploring the world, and what are good activities for closing the world.

And so often, if a team approaches me needing me to facilitate something, or if I decide to run a workshop, I will literally flip through the book, and I have a bunch of earmarked pages for my favorite exercises. I've found it really handy too when I talk to other people who are like, "I want to run a workshop, but I don't know where to get started or what to do," and their brains kind of melt, because a lot of people think that workshop facilitation is, again, one of those magical things that some people are just really good at. But workshops have a very predictable, repeatable structure. I'd say over time I've relied on this book more than anything else.

Ash Oliver:

I have a very well-worn copy of this book, so it's definitely still an exciting recommendation. My last question for you, Dalia, is: what is an unusual habit or an unconventional thing that you love?

Dalia El-Shimy:

I have to think about this one a bit, because I'm like, I do a lot of weird stuff, but which of them are actually worth sharing? And I think where I've landed is that I don't believe in over-optimizing everything. I think we're moving away from that world a little bit, but there was a time in tech where it was like, highly successful people wake up at exactly this time, and then meditate for 3.75 minutes, and then do yoga, and then drink a smoothie, and then go for a run, and then do journaling, and then do gratitude, and all that sort of stuff. And if it works for you, that's great. All the more power to you. What I've learned about myself is that just because I'm not doing that doesn't necessarily mean that I'm living some sort of suboptimal life.

And so as a result, I'm a big fan of wasting time. I'm a big fan of doing unstructured things. It takes me at least two hours to get ready in the morning, and those two hours are just not... I'm eating my breakfast and reading the news and taking my time and maybe watching a little bit of Real Housewives while I do my skincare. None of it would fall into this trend or this category of a highly optimized use of my time. And I've learned to stop feeling bad about that. I've actually learned that it's a good thing for me. It works for me. It puts me in a headspace where I'm ready to leave the house. And most recently, I saw a colleague participate in a panel on burnout, and he was the only person on the panel who'd never experienced burnout.

They included him in the conversation to try and understand what his secret was, and he was like, "I take two hours to get ready in the morning." I've not really had bad burnout myself. I'm aware that two data points do not an insight make, but I definitely resonated with that. That little bit of unstructured time, where I don't have to feel guilty about it and I don't have to feel bad about it, where it's mine and I do what I need to do in order to feel ready to go out into the world and take on whatever it has to throw at me: that's my potentially unconventional or unusual habit.

Ash Oliver:

It's beautiful. I love that. This has been tremendous. Thank you so very much for being here and sharing your thoughts. This has really been fantastic.

Dalia El-Shimy:

Thanks Ash, and thanks for having me and giving me an opportunity to share my thoughts. It was really great to have this conversation with you.

Ash Oliver:

Thanks for listening to The Optimal Path, brought to you by Maze, the user research platform that makes insights available at the speed of product development. If you like what you heard today, you can find resources and companion links in the show notes. If you'd like to stay connected, you can subscribe to the podcast newsletter by visiting maze.co/podcast and send us a note with any thoughts or feedback to podcast@maze.design. And until next time.