the playbook part 4: product management
sunk thought’s - the playbook
“the playbook” is a mini-book delivered in 8 posts (a foreword and seven chapters). It’s as short as it is insightful for all sorts of roles, be it engineer, product manager, people manager, or executive.
Product Management
Applying the Scientific Method to Your Business
Identifying and Solving Business Problems
When you found a company, you usually begin with a business problem and a possible solution. By the time you have funding and are moving along, you’ve acquired a ton of domain knowledge about this business problem from speaking to people (potential customers, partners, employees, VC investors, and experts) about their views on the problem and its possible solutions.
Your vast domain knowledge allows you to steer the org for a long while, but as your scale and speed of operations increase, you become the bottleneck. You need more capacity to discover problems and test solutions. Like Batman needs a Robin, you need additional “problem finders” to help you fight your entrepreneurial fight. This is who your Product Managers are, and why people (usually PMs) say that “PMs are the CEOs of Product Offerings.”
Customer Discovery
One of the ways you can discover problems worth solving is through “Customer Discovery.”
In the absence of application data to analyze, people are the only resource you can tap into. So, discovery is vital, especially when you’re exploring problems you’re not already solving in production. This isn’t to say that data can replace customer discovery, or that you don’t need data if you devote yourself to discovery. It’s just one of the tools in your PM’s arsenal.
There are several basic methods of discovering customer insights from humans:
Anecdotal Data Collection - a good source of early questions to test.
Feedback from Customer Success & Sales
Voice of Customer Tools
Social Media Sentiment
Surveys - collect basic information from a statistically valid segment of the population. In addition to testing basic sentiments as a whole this is great for identifying candidates for the next phases of discovery. These can be done online, on the phone, in person, etc.
Interviews - In depth discovery with an individual. Best on video or in person, as seeing one another creates an atmosphere of intimacy. Dive into their motivations, pain, needs, interests. Ask them about potential problems and solutions you could solve.
Onsite Observation - The next level in effectiveness, onsites allow you to observe customers in their natural environment. Giving you a chance to walk in their shoes and identify their pain in the wild.
User Testing - Once you’ve theorized on solutions you can invite targeted customers to try them out. It’s best to do this early and often. If you have seven features on your roadmap and two are done, test them with some customers in person. What’s the worst that could happen? Let’s say they respond negatively (the worst case, right?) ... all because you’re missing two important features! You just got an opening to ask them more about their pain points, and valuable information about your ideal scope.
Should those two things just happen to be on your roadmap, you just identified your true MVP feature set! And it only requires two of the five remaining things you thought you needed to get there.
If you didn’t plan to build them, now you know your MVP might not require any of your five remaining features, but requires two new ones. This saves you from building five features you don’t need, and helps you identify two others to build, all in time to launch on schedule, if not early.
Feedback Cycle Time
I’m going to make a quick aside here. Businesses spend a lot of time early on (and throughout their lives) failing. Failing and failing and hopefully eventually succeeding.
Because of this, speed of failure is the most useful “lever” you can engage in your product cycle. We do not create solutions. We identify problems. We then make a hypothesis (not a solution) which we test and measure.
“Punk Rock” products win by testing minimum feature sets and gathering feedback as often as possible. By accelerating the process of failure. The faster you can fail, the less energy each failure expends. The faster you fail, the faster you learn, the faster you succeed.
The “Jobs To Be Done” Framework
One of the important things to understand about a product is what people are trying to achieve with their usage of it. As product and engineering teams it can be tempting to simply believe that if we “improve” a product as it was designed, we’ll move the needle on our important metrics. The problem is that, as Peter Drucker observed, “The customer rarely buys what the company thinks it is selling.”
The classic example of “Jobs to be Done,” told by Bob Moesta, is about McDonald’s and their quest to improve their milkshake sales. Simple enough, right? And, of course, McDonald’s is a huge organization with abundant data, savvy product teams, and user testing resources at their disposal. They commissioned studies and brought in folks who fit the profile of milkshake drinkers to learn how to improve the shakes. They gathered great feedback and diligently improved their shakes based on it, but all these improvements made a negligible impact on sales despite the careful investment in customer research and marketing.
This is when they looked at the problem from a “Jobs to be Done” perspective. The idea is that whenever someone spends money on a product, they are hiring that product to perform a job for them. McDonald’s needed to understand what people hire milkshakes to do in order to find the performance metrics that truly matter to their customers, and the improvements that would actually move the needle.
In the preliminary research one surprising result stuck out: that nearly half of their milkshakes were sold before 8am. What job were these people hiring a milkshake for so early in the morning?
As they spoke to these morning customers they found that they all had a specific job they were hiring a milkshake for. They had long, boring commutes ahead of them, and a milkshake gave them something to do while driving. The car’s cup holder meant a milkshake was more convenient than a sandwich, bagel, or doughnut, keeping their hands free. The viscosity of the drink meant it lasted a long while, whereas a banana might only last a minute or two. And the banana left them hungry by 10am, but a milkshake didn’t.
Understanding this, McDonald’s could then improve their product for these folks by addressing the metrics that were important: making the shakes thicker, making it easier to grab one and hit the road without waiting in line, adding healthier breakfast drink options like yogurt smoothies. When tested, these improvements actually increased sales more than 6x by improving the effectiveness of the milkshake in ways these customers appreciated.
To understand your products and customers you have to understand the functional use of your product, not just the “problems” you designed it to solve. It might turn out your competition isn’t just other milkshakes; it’s actually bagels, Snickers bars, and bananas. If you don’t understand that, you’re failing to understand what market your product is in and how to measure it effectively.
Data! Data! Data!!
None of the preceding is intended to downplay the importance of data. Instead, it’s to drive home the importance of gaining a functional perspective to allow you to interpret your data in meaningful ways and gain insight into what metrics are actually vital to improving your offerings.
Product Managers live for data, but without perspective it can become all too easy to drown in all the available information. With all the data gathering and analysis tools available to us today it’s easy to become overwhelmed and find yourself unable to decipher which KPIs are truly effective uses of your efforts and attention.
Data, after all, is still all about people.
So, what are some data informed ways we can measure our products and how people use them? How can we build a strong data culture and develop actionable customer insights?
Much like your software engineering stack, your company needs a data stack. At scale it might look like:
Infrastructure - data warehouse systems and tools. Folks working on this layer abstract away the details of AWS, Hadoop, Apache Hive, etc.
ETL (Extract, Transform, Load) - extracts raw data, cleans it, and makes it available for analysis
Analysis - studies your data to understand business behaviors and infer correlative metrics which might act as causal drivers to top line outcomes
Data Products - machine learning, recommendation algorithms, and feedback loops that sit on top of your product and drive user behavior
Experimentation - allows you to detangle correlation/causation and measure if your work is actually driving results
Visualization - enables those who are not data scientists but need data to more quickly make decisions within your organization
These layers require several people and tools within your organization to wrangle insights (Data Scientists, Data Analysts, Data Infrastructure Engineers, Visualization Experts, etc.). Not all companies have all these roles at a given scale, but Product Managers must work closely with those they do have to understand the problem space and infer business opportunities.
Similarly to how other teams grow, many of your data resources often function best at scale when embedded within your cross functional product and development teams.
When not yet at scale, it’s vital for PMs to be able to handle as much of this stack as dictated by your available resources (usually at the analysis, experimentation, and visualization layers). They won’t be as effective at it as their data science counterparts, but they can help bridge the gap until those resources become available.
This basic understanding of data is vital to helping PMs analyze their intuitions and customer insights to derive a useful hypothesis, quantify that hypothesis’s risks and rewards, prioritize their ideas, and test their effectiveness in the real world.
Prioritizing New Ideas vs. Ideas in Backlog
One of the most vital roles of a Product Manager is prioritizing the efforts of your company and aligning teams around a product mindset. In a software company, we are not just trying to write code and ship features, we are working to build businesses after all.
So, how do you qualify new ideas, and once qualified, how do you prioritize them into your existing product roadmap?
Qualifying new ideas is the key here. Here’s a simple checklist many PMs use to evaluate new ideas:
Is there...
a data based reason
or, another business reason
...that supports this problem, testing plan, or feature?
Is this idea feasible from an engineering, design, and operational standpoint?
Does this idea pass simple/dirty user tests like:
Low-fi mockups
Happy path demos
Customer interviews
If our new idea gets past these three points then we can start figuring out where it belongs on our roadmap. If it doesn’t, then it’s not ready for primetime.
To prioritize work there are several variables which will most often guide your process. They are:
The cost of production and your available design and engineering cycles
The potential business value of this idea vs those currently in the backlog
The dependencies that other projects have within your backlogged ideas and the potential business value to these projects
The collective enthusiasm (internal and external) for this new idea
Once you understand this landscape you can ideally place bets on the ideas that have a higher likelihood of success and potential business returns relative to the cost of development, ideas which outperform the opportunity cost of not pursuing other avenues, ideas which offer greater ease of team alignment, etc.
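One hypothetical way to make those trade-offs concrete is a weighted scoring model such as RICE (reach × impact × confidence ÷ effort). A minimal sketch in Python, where every idea name and number below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: int        # users affected per quarter (estimate)
    impact: float     # expected effect per user: 0.25=low, 1=medium, 2=high
    confidence: float # how sure we are the estimates hold, 0.0-1.0
    effort: float     # person-weeks of design + engineering

def rice_score(idea: Idea) -> float:
    """Higher score = better expected return relative to cost."""
    return idea.reach * idea.impact * idea.confidence / idea.effort

# Hypothetical backlog -- every number here is a judgment call, not data.
backlog = [
    Idea("one-click signup", reach=5000, impact=1.0,  confidence=0.8, effort=4),
    Idea("dark mode",        reach=2000, impact=0.25, confidence=0.9, effort=2),
    Idea("team dashboards",  reach=800,  impact=2.0,  confidence=0.5, effort=8),
]

for idea in sorted(backlog, key=rice_score, reverse=True):
    print(f"{idea.name}: {rice_score(idea):.0f}")
```

Scores like these are only as good as the estimates behind them; they are a conversation starter for prioritization, not a substitute for judgment.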
Align, Build, Measure, and Learn
This is the point when very quickly a Product Manager stops feeling like “the CEO of a product,” because as a PM you don’t have the same authority a CEO carries. Where a CEO can more easily “make” people follow them, a PM must instead build alignment around their product roadmap. The point is to gain buy-in around an idea with those who will ultimately be charged with implementing it.
Once you have buy-in, you can then leverage the expertise of your builders to help create the best possible incarnation of this idea under the time and cost constraints of your experiment.
Once built, we’re still not finished, though. Now, we must test our assumptions, gather insights, and measure how well our implementation of this idea has performed.
It’s important while building out an experiment to have specific goals (KPI against which this idea’s effectiveness will be tested) as well as a timeline based on your product’s usage that will allow your tests to gather a statistically valid amount of attention.
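As a rough sketch of how to size that timeline, the standard normal-approximation formula for comparing two proportions tells you how many users each variant needs before a given lift is detectable. The z-values below are hardcoded for the common 95%-confidence, 80%-power case, and the daily traffic number is purely hypothetical:

```python
import math

def sample_size_per_variant(baseline: float, mde: float) -> int:
    """Approximate users needed per variant to detect a lift.

    baseline: current conversion rate (e.g. 0.10 for 10%)
    mde: minimum detectable effect, absolute (e.g. 0.02 for +2 points)
    Uses the normal-approximation formula for two proportions with
    z-values hardcoded for alpha=0.05 (two-sided) and 80% power.
    """
    z_alpha, z_beta = 1.96, 0.84
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detecting a 2-point lift on a 10% baseline:
n = sample_size_per_variant(0.10, 0.02)
# With a hypothetical 1,500 eligible users/day split across 2 variants:
days = math.ceil(2 * n / 1500)
print(n, days)
```

If the resulting runtime is longer than you can afford, you either accept a larger minimum detectable effect or find a higher-traffic surface to test on.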
Split Testing (A/B Testing)
One important method for testing how well your new idea performs is split testing. Split testing allows you to define cohorts which receive different versions of your product at the same time and measure their effects.
Because most of your metrics change organically over time based on various factors (seasonality, day of the week, and the entire portfolio of new offerings) split testing offers you the opportunity to properly quantify the impact of your new idea vs the control state over the same period of time.
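One common way to define those cohorts (an illustrative sketch, not any particular vendor’s implementation) is to assign users deterministically by hashing their ID together with the experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment cohort.

    Hashing user_id together with the experiment name gives a stable,
    roughly even split: the same user always sees the same variant,
    and each experiment buckets users independently of the others.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "new-onboarding"))
```

Determinism matters: a user who flips between variants mid-experiment contaminates both cohorts.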
Split testing is also useful when there are multiple possible solutions you want to test against one another.
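Once the cohorts have run, a two-proportion z-test is a standard way to check whether the difference you measured is likely real rather than noise. A minimal sketch, with invented conversion numbers:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented results: control converted 200 of 4,000, variant 260 of 4,000.
z = two_proportion_z(200, 4000, 260, 4000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 95% level
```

In practice you would reach for a stats library rather than hand-rolling this, but the arithmetic is worth understanding: the smaller your cohorts, the larger a lift has to be before it clears the bar.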
Inevitably there will come a time when nothing you test seems to move your numbers significantly. You push and test and push again, but no matter what you do your metrics remain basically flat. I myself spent about four months of my life A/B testing one feature of Yammer’s product, failing to make an impact, and growing frustrated with every test.
This is a moment to harken back to the “Jobs to be Done” framework, and ask yourself if you truly understand what your users actually need from you to improve your outcomes.
If you keep trying what your intuition tells you will work, and it doesn’t, it’s time to seek a new perspective before you further spin your wheels ineffectively.
Pivot or Persevere?
When you find yourself unable to gain traction you have to decide to either “pivot” or “persevere.” Eric Ries describes a pivot as “A change in strategy without a change in vision.”
If you’re on a trip somewhere and you reach a dead end, you don’t stop trying to get where you’re headed, you look for a way around this dead end to your destination.
It’s important to have product cycles developed with this question built into the schedule. Be it every week, month, six weeks, or whatever -- you need a set time to quantify your results, gather what you’ve learned, and then decide if it is time to pivot or if you should continue to go forward with this experiment.
Again, speed of testing and defining minimum deliverables become vital to success here. The faster you can test, the faster you can fail, and the more things you can try along the way to help you succeed before you crash and burn.
You want to leave enough time to properly test an idea, but you don’t want to drive your car into the same dead end repeatedly without trying another route. This, once again, becomes a function of your culture and values telling you “when to hold ’em, when to fold ’em, when to walk away, and when to run.”
Now that we understand the basics of using the scientific method in our product methodology, it’s time to talk about how your software products get built and how to improve your engineering efforts in “Engineering Ain’t Easy” (part 5 of this series).