Product Managers Should Take More (Calculated) Risks
Background
At the end of the day, product managers are measured on their ability to make good decisions over and over again, the sum of which leads to positive outcomes for the customer and the business. This is precisely why I’ve chosen to focus so much of my writing and framework on judgment and decision-making ability. It’s something you can improve with exposure and training, and it is central to your success in the role.
One of the key elements of decision making is knowing how to factor in risk. More specifically, excellent product managers know how to make calculated bets with appropriate levels of risk. Usually this topic comes up in the context of “big strategic bets” but it can honestly factor in at any level of decision making for a product manager. Jeff Bezos famously took on this topic in the wake of the Fire Phone failure in a letter to Amazon shareholders:
“As a company grows, everything needs to scale, including the size of your failed experiments. If the size of your failures isn’t growing, you’re not going to be inventing at a size that can actually move the needle. Amazon will be experimenting at the right scale for a company of our size if we occasionally have multibillion-dollar failures. Of course, we won’t undertake such experiments cavalierly. We will work hard to make them good bets, but not all good bets will ultimately pay out. This kind of large-scale risk taking is part of the service we as a large company can provide to our customers and to society. The good news for shareowners is that a single big winning bet can more than cover the cost of many losers.”
There are a few key takeaways from this quote:
The size of the bets needs to scale with the business to ensure you have a chance of actually moving the financial metrics enough to make a difference
Expect some expensive flops (and the implication we should be measuring our decision making ability over a long time horizon and several bets)
Well-made decisions will still occasionally result in financial failure, though you may still learn something valuable (Fire Phone tech was a seed for Alexa)
I think it’s fair to say that there are many examples of the opposite of this behavior in most companies, particularly larger ones. In his book Decision Leadership, Don Moore points out that large companies, despite their larger pool of resources, often adopt a defensive posture and routinely pass up large (potential) returns because of risk aversion. It’s not that they make bad bets. They aren’t making any large good bets, just a bunch of small ones (of questionable quality), with small upside risk and, importantly to them, small downside risk.
More specific to product management, I see this come up all the time in different ways:
Over-emphasis on small A/B tests and experimentation - these are optimization tools and are not typically the path to major breakthroughs or new product opportunities
Large market leading incumbents refusing to introduce products or pricing models that may disrupt their own business, thus leaving the door open for someone else
If you do some basic napkin math, you can see why this happens as companies get larger. A startup with $5M ARR and 5 engineers mathematically cannot take enough small, incremental revenue bets in a year to drive the growth that would satisfy any VC. Everything has to be a big swing at some level for a startup. When you are a $100M company with 70 engineers, assuming you can actually find enough valuable projects to do, you could in theory increment your way to reasonable growth numbers at relatively low risk. Early-phase companies have no choice but to take risk - it’s everywhere, including the possibility of the company folding. As you get larger you can maintain a reasonable amount of growth through incrementalism against a large revenue base without needing to take on “extra risk.” This misses the point, and is what Jeff Bezos was communicating in his letter. Regardless of the size of the potential downside, you should take good bets with a positive expected payoff, and measure performance over a long enough timescale to rule out luck (either good or bad).
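The napkin math above can be made concrete with a quick sketch. All of the numbers here are hypothetical placeholders, including the assumption that a "small bet" occupies one engineer for a quarter and returns roughly $100k/yr:

```python
# Napkin math (hypothetical numbers): how far can small bets alone go?
SMALL_BET_RETURN = 100_000  # assumed annual return per successful small bet

# Startup: $5M ARR, 5 engineers, VC-grade growth of ~100%/yr = $5M of new
# revenue needed. Even if every engineer-quarter produced a winning small
# bet, the team maxes out well short of the target.
startup_capacity = 5 * 4                      # engineer-quarters per year
startup_max = startup_capacity * SMALL_BET_RETURN
print(startup_max)   # 2000000, short of the $5M needed

# $100M company, 70 engineers, 20% growth target = $20M of new revenue.
# Here incrementalism can plausibly cover the gap.
big_co_capacity = 70 * 4
big_co_max = big_co_capacity * SMALL_BET_RETURN
print(big_co_max)    # 28000000, comfortably above the $20M needed
```

The comparison is deliberately crude (it ignores failure rates and project variety), but it shows why incrementalism is only arithmetic-feasible at scale.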
Of course there are some scenarios where a company of any size may not financially be able to withstand the downside loss of a bet, even if it is well made and has a positive expected return. In those cases it may not make sense to proceed. However, I would caution against assuming that other types of spending toward the growth goals, such as standing up new sales channels, marketing to new audiences, or geographic expansion, are any less risky. Everything “new” carries some level of risk, so be sure that your company is not inappropriately flagging product development as inherently more or less risky than other activities.
Remember Expected Value (EV) from College?
You probably learned about it in at least one class, likely stats or economics, and then, unless you went into an engineering or finance field, you probably forgot about it. Guess what, it’s baaack. The fundamental framework for assessing whether you should take a bet (be it in a casino or on a software team) is expected value. Product managers just don’t use it nearly enough when weighing big decisions. The basic framework is pretty simple:
Expected Value = probability of success * value of success
This is the version you see all the time in books, especially when applied to things like rolling dice or picking cards from a deck. However, it doesn’t work as-is for business decisions: you need to factor in the costs of taking the bet. Specifically:
Real and/or opportunity costs of the software developers doing this for some period of time
Real one-time costs associated with building for the first time
OPEX costs such as licensing that would be “new” for this project and will be there for its lifetime
So with a simple update:
Expected Value = [Probability of Success * Value of Success] - [Costs]
You need to make sure your timelines are equivalent for both returns and costs. For small and medium-sized projects that will take a few months of a small number of people’s time, it’s easy to simplify to: “Does the return in the first 12 months pay for the investment and more?” Bigger projects that require more resources, and that may take more time to build and establish in the market, are unlikely to pay back in year 1 and will likely require longer time horizons. For extremely large projects or complex decisions you can read more about decision nodes and EVM here.
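As a minimal sketch of the formula above, here is the calculation for a hypothetical feature (all figures invented for illustration):

```python
def expected_value(returns, p_success, costs):
    """EV = (probability of success * value of success) - costs."""
    return returns * p_success - costs

# Hypothetical bet: a feature projected to return $500k in its first
# 12 months, with a 60% chance of success. Costs: ~$100k of fully loaded
# engineering time plus $20k of new annual licensing over the same period.
ev = expected_value(returns=500_000, p_success=0.60, costs=100_000 + 20_000)
print(ev)  # 180000.0
```

Note that returns and costs are both stated over the same 12-month window, per the timeline-matching point above.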
Wait a minute, this doesn’t sound very agile.
Think of agile and working iteratively as a de-risking strategy. Even if you will work iteratively you still have to dedicate a team for some period of time to see any results. If in the first couple of small tests you’re not seeing the right positive signs you can reassess before you commit all of the upfront cost you had planned for to claim the returns. You still lose financially, but less, and perhaps you learned something.
The iPhone was a big bet, but it’s not clear how much (if any) “iterative de-risking” happened prior to launch. Steve Jobs had many successes bringing something transformative to market, but he was also an anomaly. He believed singularly in the potential of the device and famously would repeat some version of the quote “if you ask people what they want, they will tell you they want a faster horse, not a car.” Most CEOs and leaders aren’t that sure. :-)
When I worked at Boeing early in my career they used decision trees similar to the one in the EVM article, because in large part, you still have to just build the new plane and see if it sells. You really are placing a multi-billion dollar bet with multi-billion dollar payouts over 30 years and you may not know until year 5-7 if you are even close to the mark.
In software there are occasionally large build-outs or investments that may go on for a long time before you expect to see a return. Let the EV of the big-picture opportunity guide your decision whether to invest at all, but you can still be more incremental than some other industries. You can take a big project with big returns over a long horizon and break it into smaller chunks with smaller costs and paybacks. Still follow the direction the single big equation is pointing (“it’s worth pursuing”), but gate your investment level so that if you end up in the bad part of the distribution you don’t keep going.
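A quick Monte Carlo sketch can show why gating helps. All numbers here are hypothetical: a $4M bet with a 30% chance of a $20M payoff (analytic EV of $2M), versus a gated version that spends $1M on a first phase whose go/no-go signal is assumed to be right 80% of the time:

```python
import random

random.seed(0)

def simulate(gated, trials=100_000):
    """Average net outcome per trial for a full-commit vs gated bet."""
    total = 0.0
    for _ in range(trials):
        success = random.random() < 0.30          # does the bet pay off?
        if not gated:
            # Full commit: spend $4M regardless, collect $20M on success.
            total += (20_000_000 if success else 0) - 4_000_000
        else:
            cost = 1_000_000                      # phase 1 is always spent
            signal_correct = random.random() < 0.80
            signal = success if signal_correct else not success
            if signal:                            # commit the remaining $3M
                cost += 3_000_000
                total += 20_000_000 if success else 0
            total -= cost
    return total / trials

print(f"full commit EV: ${simulate(False)/1e6:.2f}M")
print(f"gated EV:       ${simulate(True)/1e6:.2f}M")
```

Under these assumptions the gated version comes out ahead: you sometimes walk away from a winner, but you avoid most of the spend on losers. The equation still says “worth pursuing,” and gating just trims the downside tail.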
Where am I supposed to get probabilities from?
People make up sales targets, usage metrics, and all kinds of single point forecasts all the time. They are based on intuition but are effectively made up. Sometimes we trick ourselves into “could be no greater than” or “won’t be less than” logic where if a business case closes given some seemingly obvious threshold then it makes sense to go forward. The reality is that point estimates are almost always wrong, but we still use them all over the place. If you are going to represent your intuition (or that of a group), why not do it with a probability distribution that shows the likelihood of many different possible outcomes? How often in reality is the answer 0 or exactly the target?
In his book Perfectly Confident, Don Moore points out the pitfalls in relying solely on single point estimates, and suggests why probability distributions are a better way to go:
“The” number in a traditional forecast is always wrong and puts forth a false sense of accuracy
You neglect the full range of possible outcomes.
The second one is, I think, frankly the most important for product management situations. If there are roughly equal chances across a relatively wide range of sales outcomes, and only a very small chance of exceeding that band, does that change your approach? Does it impact sales and marketing? What’s driving the variation? Simply showing how wide the band of uncertainty is communicates how confident you are in the outcomes. This is all valuable information that usually stays unspoken in the heads of the people doing the planning and gets reduced to a single number like $10M. Now when you calculate EV, use the weighted probability given to you by the distribution and show the range of the estimates - you’ll communicate a lot more information.
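As a sketch of what this looks like in practice: instead of committing to a single $10M target, represent the group’s intuition as a simple distribution and report both the expectation and the band. The triangular parameters below (worst case $2M, most likely $8M, best case $15M) are hypothetical:

```python
import random
import statistics

random.seed(42)

# Sample a triangular distribution of sales outcomes: signature is
# random.triangular(low, high, mode).
samples = sorted(
    random.triangular(2_000_000, 15_000_000, 8_000_000)
    for _ in range(100_000)
)

mean = statistics.mean(samples)           # the EV-style weighted average
p10, p90 = samples[10_000], samples[90_000]  # an 80% band of outcomes
print(f"expected: ${mean/1e6:.1f}M, "
      f"80% band: ${p10/1e6:.1f}M to ${p90/1e6:.1f}M")
```

A single sentence like “we expect ~$8.3M, with 80% of outcomes between roughly $4.8M and $12M” carries far more planning information than “the target is $10M.”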
Ironically, you will see this all the time in conversations with your go-to-market functions and the finance team. The financial plan typically has a point estimate for leads, conversion %s, deals, average deal size, etc. Of course, no one cares as much if those are off as long as the primary revenue and cost numbers are right. What does marketing do to fill a funnel? They run lots of different strategies to bring in leads. They spend money making bets on campaigns and messages, fine-tuning all the time and moving investments around to optimize performance. It’s easy to miss, but marketing is using its budget to develop a portfolio of bets with an EV that adds up to the leads target. Sales and Customer Success look at their pipelines the same way: we know only a percentage of people at various pipeline phases have historically converted, so we need to fill up the pipe such that the EV of the pipe is at or above our revenue targets.
Product people need to start thinking the same way. If you have a goal (monetary, usage metric, whatever), start thinking in terms of the portfolio of bets that maximizes the EV possible for your set of resources. In non-textbook speak: how do I get the most likely chance of the biggest returns out of my team? This is very different from where most people are, which is either:
“Is this positive ROI” - a good start, but missing a crucial part of the story
Saying in isolation “this is risky” or “the downside is huge” without looking at the full EV equation
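The portfolio view can be sketched in a few lines. The bet names, returns, probabilities, and costs below are all hypothetical placeholders, not recommendations:

```python
# Roadmap as a portfolio of bets: (name, 12-month returns, probability
# of success, cost). All figures are invented for illustration.
bets = [
    ("new pricing tier", 2_000_000, 0.5, 400_000),
    ("enterprise SSO",   1_500_000, 0.7, 300_000),
    ("mobile app",       4_000_000, 0.3, 900_000),
]

def portfolio_ev(bets):
    """Sum of (returns * probability of success) - cost across all bets."""
    return sum(returns * p - cost for _, returns, p, cost in bets)

goal = 2_000_000
ev = portfolio_ev(bets)
print(f"portfolio EV: ${ev/1e6:.2f}M against a ${goal/1e6:.1f}M goal")
```

If the portfolio EV falls short of the goal, that is the signal to find bigger or better bets rather than to quietly hope the point estimates come true.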
Here’s a quick article on multiple different approaches to calculating EV in different scenarios.
Groups Can Be Smarter than Individuals
Another way to improve your estimates is to involve more people, but to do so in a structured way. One of the more popular formal ways of doing this is a Delphi survey. At a high level, several people give their opinions based on some provided data and context, and are then asked to revise their answers based on everyone else’s answers from the first round. The key is that people are asked for their estimates and revisions independently, not in a group setting. In this way you get the benefits of several people’s intuition without all of the biases introduced by putting people in a room together. It’s not perfect, but if you have a way to quickly survey a few folks about the distribution of possible outcomes, you’ll probably do better than by yourself. If you have time for the second round of revisions, you’ll improve a bit more.
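The independent-estimates step might look like the sketch below. The roles and numbers are hypothetical; each person submits a (low, most likely, high) forecast in $M without seeing anyone else’s, and the group summary is what you would share back before the revision round:

```python
import statistics

# Round 1: independent (low, most likely, high) forecasts in $M.
round_one = {
    "pm":        (1.0, 4.0, 8.0),
    "sales":     (2.0, 5.0, 7.0),
    "finance":   (0.5, 3.0, 6.0),
    "marketing": (1.5, 4.5, 9.0),
}

# Summarize the group's view to share back for the second round.
likely = [mid for (_, mid, _) in round_one.values()]
summary = {
    "median_likely": statistics.median(likely),
    "group_low":  min(low for (low, _, _) in round_one.values()),
    "group_high": max(high for (_, _, high) in round_one.values()),
}
print(summary)
```

Using the median (rather than the mean) keeps one wildly optimistic or pessimistic estimate from dragging the summary around, which is part of the appeal of the approach.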
Sometimes the Process Does Matter as Much as the Outcome
There is of course the need to create the right kind of company culture to give people the safety and confidence to make good bets that may not always pay off. I’d say writing a letter to shareholders saying “expect an occasional billion dollar flop” is probably some of the best leadership by example I’ve seen in that space, but that is uncommon at most companies. People focus on “results,” which is code for “did we hit the revenue target?”
This is why it’s so critical for leadership to pay attention to decision quality, and yes, the process by which decisions are being made. The world is complicated. Shit happens. COVID happens. You need to be able to evaluate whether the success or failure of your team is based on a thoughtful process or just luck and randomness. Don Moore writes about the importance of distinguishing luck from skill in Perfectly Confident:
“If being results oriented means you reward successful results and punish failures, you will wind up rewarding luck, incentivizing caution, penalizing the unlucky, and discouraging well-intentioned risk taking. The reason is that when luck plays a role—as it does in the success or failure of any organization, project, or product—then the best people and best ideas are not necessarily always successful. Sometimes people succeed or fail due to circumstances beyond their control. In other words, you should not punish people for unlucky outcomes on smart bets with positive expected value.”
Adopting a mindset of taking calculated risks will also help you avoid burnout, even in a culture that is very accepting of risk-taking. I’ve talked about resilience as an important behavior of strong product managers: dealing with the ups and downs of the role and the stress, all while remaining effective. This does not mean burying it and never talking about it. Even if no one else is giving you a hard time about something that didn’t break your way, it’s very likely that your interior monologue will be very critical. In The Scout Mindset, Julia Galef says:
“You want to get into a mental state where if the bad outcome comes to pass, you will only nod your head and say ‘I knew this card was in the deck, and I knew the odds, and I would make the same bets again, given the same opportunities.’”
The more you train yourself to think in probabilities and seek positive-EV outcomes, the more you will stay focused on the future and on improving future performance, and the less time you will spend dwelling on past decisions that didn’t go your way. Following best practices for decision making frees you from regret as much as possible when things don’t work out, and lets you focus on what can be learned from the events and applied to the future. Not only will this make you a better product manager, but likely a happier one too.
Summary
Effective product managers try to maximize the expected value of what their team can produce, rather than minimize downside risk. They improve their decision quality by looking at forecasts and outcomes as ranges of probability, not single point estimates, and by bringing the collective intelligence of their peers into the decision. Most importantly, good product leaders focus on decision quality, and on whether their teams are making good bets, rather than rewarding luck and punishing failure without considering outside factors.