What Your Innovation Process Should Look Like

Published by HBR.org


Companies and government agencies often make the mistake of viewing innovation as a set of unconstrained activities with no discipline. In reality, for innovation to contribute to a company or government agency, it needs to be designed as a process from start to deployment.
When organizations lack a formal innovation pipeline process, project approvals tend to be based on who has the best demo or slides, or who lobbies the hardest. There is no burden on those who propose a new idea or technology to talk to customers, build minimum viable products, test hypotheses, or understand the barriers to deployment. The organization counts on well-intentioned, smart people sitting on a committee to decide which ideas are worth pursuing.
What organizations need instead is a self-regulating, evidence-based innovation pipeline. Rather than having a committee vet ideas, they need a process that operates with speed and urgency, and that helps innovators and other stakeholders curate and prioritize problems, ideas, and technologies.
This prioritization process has to start before any new idea reaches engineering. That way, the innovations that do reach engineering will already be backed by substantial evidence about validated customer needs, processes, legal and security issues, and integration challenges. Most importantly, minimum viable products and working prototypes will have been tested.
A canonical Lean Innovation process inside a company or government agency would look something like this:
Innovation sourcing: Over a period of days, a group generates a list of problems, ideas, and technologies that might be worth investing in.
Curation: For a few days or even a week, innovators get out of their own offices and talk to colleagues and customers. As the head of the U.S. Army’s Rapid Equipping Force, one of us built a curation process to help deploy technology solutions rapidly. It included both an internal and an external survey, the goal of which was to find other places in the business where a given problem might exist in a slightly different form, to identify related internal projects already in existence, and to find commercially available solutions to problems. It also sought to identify legal, security, and support issues.
This process also helped identify who the customers for possible solutions would be, who the internal stakeholders would be, and even what initial minimum viable products might look like.
This phase also includes building initial MVPs. Some ideas drop out when the team recognizes that they may be technically, financially, or legally infeasible, or when it discovers that other groups have already built a similar product.
Prioritization: Once a list of innovation ideas has been refined by curation, it needs to be prioritized. One of the quickest ways to sort innovation ideas is to use the McKinsey Three Horizons Model. Horizon 1 ideas provide continuous innovation to a company’s existing business model and core capabilities. Horizon 2 ideas extend a company’s existing business model and core capabilities to new customers, markets, or targets. Horizon 3 is the creation of new capabilities to take advantage of or respond to disruptive opportunities or disruption. We’d add a new category, Horizon 0, a graveyard for ideas that are not viable or feasible.
Once projects have been classified, the team prioritizes them, starting by asking: Is this project worth pursuing full time for another few months? This prioritization is not done by a committee of executives but by the innovation teams themselves.
Solution exploration and hypothesis testing: The ideas that pass through the prioritization filter enter an incubation process like I-Corps, the system adopted by all U.S. government federal research agencies to turn ideas into products. Over 1,000 teams of our country’s best scientists have gone through the program, which is taught in over 50 universities. (Segments of the U.S. Department of Defense and Intelligence community have also adopted this model as the Hacking for Defense process.)
This six- to ten-week process delivers evidence for defensible, data-based decisions. For each idea, the innovation team fills out a business model canvas (or, for the government, a mission model canvas). Everything on that canvas is a hypothesis, and the I-Corps model is designed to test each one. This includes not only the obvious question (is there product/market or solution/mission fit?) but also the other “gotchas” that innovators always seem to forget. The framework has the team talking not just to potential customers but also to regulators and to the people responsible for legal, policy, finance, and support. It also requires that they think through compatibility, scalability, and deployment long before the project gets presented to engineering. The team then faces another major milestone: showing compelling evidence that the project deserves to become a new mainstream capability and be inserted into engineering. Alternatively, the team might decide that it should be spun out into its own organization, or that it should be killed.
Incubation: Once hypothesis testing is complete, many projects will still need a period of incubation as the teams championing the projects gather additional data about the application, further build the MVP, and get used to working together. Incubation requires dedicated leadership oversight from the Horizon 1 organization to ensure the fledgling project does not die of malnutrition (a lack of access to resources) or become an orphan (no parent to guide it).
Integration and refactoring: At this point, if the innovation is Horizon 1 or 2, it’s time to integrate it into the existing organization. (Horizon 3 innovations are more likely to be set up as their own entities, or at least as their own divisions.) Trying to integrate new, unbudgeted, and unscheduled innovation projects into an engineering organization that has line-item budgets for people and resources results in chaos and frustration. In addition, innovation projects carry both technical and organizational debt.
Technical debt is the software or hardware built quickly to validate hypotheses and find early customers. This quick-and-dirty development can become unwieldy, difficult to maintain, and incapable of scaling. Organizational debt is all the people and culture compromises made to “just get it done” in the early stages of an innovation project. The answer to both is refactoring. In engineering, refactoring means restructuring existing code without changing its behavior, making it stable and understandable; the engineering team pays down technical debt by going into the existing code and restructuring it this way. Fixing organizational debt means “refactoring” the team: the innovation team that built the prototype may not be the right team to take it to scale, and is often more valuable starting the next innovation initiative.
This refactoring stage requires that engineering build a small, dedicated refactoring team that’s focused on moving these validated prototypes into production. In addition, to solve the problem that innovation is always unscheduled and unbudgeted, this group has a dedicated annual budget.
By now, most organizations have concluded that they face the threat of disruption. Some have even started to realize that because technological advantage degrades every year, standing still means falling behind. Hence the interest in innovation, complete with hip innovation labs and fancy coffee machines. But done right, innovation requires a rigorous process. It starts by generating ideas, but the hard work is in prioritizing, categorizing, gathering data, testing, and refactoring.
