Innovation has a big problem: far too many small ideas masquerade as big ones. As an innovation consultant, addressing the lack of genuinely big ideas, combined with the proliferation of small ones, is probably the most common challenge I'm asked to help with. It also reflects a similar complaint I often heard from leadership at my alma mater, P&G. The good news is that big innovations come from solving big problems. But to turn this problem into an opportunity, we need to understand it, and more importantly, find specific ways to solve it.
So why do we have so many small ideas?
I believe this is anchored in two places. The first is that innovation-friendly culture has become collateral damage to the search for efficiency and the pressure of short-term financial goals. The second is that our own psychology is working against us. Our natural cognitive biases drive us to believe our ideas are bigger than they really are, and this is compounded by the way we typically research innovation, which often reaffirms these overly optimistic assessments.
The Tyranny of Efficiency:
Big innovations take time, resources, and a tolerance for failure, all of which get squeezed when a company is trying to cut costs. True disruptive innovation often starts small and grows over time. Big, game-changing ideas often require nurturing and patience. Consider some of the category-creating products that came out of P&G, such as Pampers, Tide, Swiffer, or Febreze. These required significant resources, not only to develop and tune the product, but also to manufacture it effectively and communicate the concept. Some really great ideas, such as the iPhone or iPad, do a pretty good job of communicating themselves, but these are the exception.
Many innovations start slower, and require unexpected killer applications or creative communication approaches to cement their place in the market. The slow burn and ultimate repositioning of Febreze into a billion-dollar brand is a well-documented case, but there are many others. However, in a culture of austerity and impatience for bottom-line growth, slower-burn innovations constantly risk being weeded out before they can deliver on their potential. This is very hard to quantify, as we cannot measure the value of what we didn't do, but we do get a hint as we watch the size of initiatives shrink while their number grows.
A culture of austerity also drives strategies that demand big ideas delivered on fast timelines, with small budgets. Clearly there is value in trimming some of the fat, but the pendulum can swing too far. If a strategy demands five disruptive innovations a year, but provides insufficient time and budget, it risks getting what it asks for. But this is often achieved via the reframing and relabeling of smaller ideas as ‘disruptive’ or ‘breakthrough’, rather than by the real thing.
The Psychology of Innovation Relabeling:
When it comes to accurately evaluating the size of our ideas, both our own cognitive biases and the way we evaluate ideas are stacked against us.
A plethora of cognitive biases all contribute to a tendency to overestimate the value of our ideas or pet projects. The self-affirmation bias means that we naturally overestimate our own value, skills, and opinions, while the confirmation bias makes us more receptive to information that supports our vision than to data that negates it. The choice-supportive bias, the tendency to remember one's choices as better than they actually were, can reinforce our belief in our innovation, while the overconfidence effect, pro-innovation bias, and illusory superiority are all pretty self-evident. These biases are not dishonest; they are just a part of being human, and can be quite useful. They often fuel the passion, grit, and self-belief that can drive a good idea through obstacles and propel it from idea to execution.
But that self-belief also causes us to fall in love with our ideas and overestimate how big they are. And love is often blind. For example, it is easy to assume our ideas are much more intuitive than they really are. Innovation needs to be readily understandable.
If people don’t ‘get it’, they probably won’t buy it.
However, it is almost impossible for anybody deeply engaged in an innovation to evaluate this objectively. They have usually spent so much time with it that even a complex or unintuitive idea will seem obvious to them. Dean Kamen probably genuinely thought the Segway was obvious, but it took the rest of the world years to work out its value and its killer, if somewhat limited, applications in areas such as law enforcement.
The Measurement Effect:
This difficulty in objectively evaluating our own innovations is of course why we get third-party input, and in particular, why we place research. But unfortunately, much of the research we place tends to reinforce rather than correct the bias to overestimate the size of an idea. This is because the act of measurement itself increases the amount of time and thought that goes into a panelist's assessment of an idea. Simply being part of research, and being asked for an opinion, means that most people will pay more attention to, and think more about, a product than they would in the real world.
In retail testing, panelists also typically spend more time looking at a shelf, and hence are more likely to find something new. The net effect is that they are more likely to find a product, more likely to understand it, and hence more likely to believe they would buy it than they actually would in the real world.
Fixing the Hole:
So how do we fix this? I believe there are three things we can do.
- First, create an innovation-friendly culture
- Second, weed and feed ideas more objectively
- Third, design research that evaluates what people will do, rather than what they think they will do
Creating an Innovation Culture:
I don't want to overly rehash this much-debated topic. But I do want to reiterate how important culture is, and how important it is to differentiate it from strategy. To be innovative, we need both a supportive strategy and a supportive culture, but I passionately believe we need more focus on culture, if only because it typically gets less attention. Strategy is focused on big, carefully considered decisions, and often changes, especially in the face of new management and changing external conditions. I equate it to the System 2 thinking described in Daniel Kahneman's Nobel Prize-winning work on behavioral economics.
Culture is a slower burn. It is much more like Kahneman's System 1, in that it operates on smaller decisions and behaviors, and is much slower to change. However, its cumulative effects can be huge. It influences whether people take personal responsibility or defer up the chain of command, and how much time they put against big, risky ideas versus the small but safe ones that pad their annual performance review.
A strong innovation culture encourages risk tolerance, a willingness to explore big ideas, and the ability to fail productively.
A culture of austerity encourages safe, incremental behaviors, the avoidance of failure, or sometimes the avoidance of making any decision at all.
If waste is frowned upon, it is all too easy to become mired in endless rounds of creating CYA data that in reality often has limited predictive value in a new, poorly understood market. It can take a long time to build a genuine, deep culture of innovation, but just a few years of downsizing, austerity, overwork, and fear can kill it. So the first step is to look very hard at reward systems, tolerance of failure, and the messages coming from senior management. These need to do much more than pay lip service to innovation; they must transparently reward productive risk-takers.
Managing the Psychology of Innovation:
This falls into two parts. The first is self-awareness: managing our own biases. We need to be aware of our natural tendency to overvalue our ideas, and we need to actively seek out third-party input, devil's advocates, and honest critique. This is of course much easier in an innovation-friendly culture where productive failure is tolerated.
The second is designing research that challenges our assumptions, and tests the appeal of our innovations in situations as realistic as possible. I've talked a lot about this elsewhere, but in short, we need to observe rather than ask, and get out of the lab to replicate real contexts. We need to design tests that disguise the questions we want to answer, so that panelists don't over-think our offerings, and we need measurements to be minimally invasive.
If we stick people into fMRI machines, attach electrode arrays to their heads, or simply ask them to sit in a room for an hour to discuss toilet paper, they are going to think about it far more than they would in the real world. Going back to Professor Kahneman, we need to get away from eliciting a System 2 answer to what is, more often than not, a predominantly System 1 behavior. If we don't, we will nearly always overestimate how big our ideas are.
We have a huge opportunity to evaluate our ideas more accurately. If we do, we can innovate how we innovate, organically weed out more small ideas, and focus more on the big ones. But this requires more focus on creating, and not killing, innovation-friendly cultures. And we need to be very careful about creating too much competition for resources, or tying career success too closely to project success.
Failure to reward productive risk-taking is a recipe for mediocrity. Some competition is good, but an overly competitive market almost guarantees some degree of hype and overestimation, something that can easily spiral out of control. Finally, we have to keep ourselves honest by designing research that challenges the size of our ideas, and that tests them under conditions as realistic as possible.
Have you come across biases in your own innovation work? If so, how did you address them? Let us know in the comments below (we read all comments).