
This article originally appeared on CIO.com. Chris Boyd co-authored the piece.

As technology departments shift from traditional project management frameworks to treating IT as a product, the change is triggering a broader rethink about how technology initiatives are funded.

Under the existing “plan, build, run” model, a business unit starts by sending project requirements to IT. The IT team then estimates the project costs, works with the business to agree on a budget, and gets to work.

This setup has several flaws that hamper agility and cause headaches for all involved. Cost estimates often occur before the scope of the project is truly evaluated and understood, and any variations in the plan are subject to an arduous change control process. What’s more, funding for these projects usually is locked in for the fiscal year, regardless of shifting enterprise priorities or changing market dynamics.  

To achieve the benefits of a product-centric operating model, the funding model must shift as well. Rather than funding a project for a specific amount of time based on estimated requirements, teams instead are funded on an annual basis (also known as “perpetual funding”). This provides IT product teams with stable funding that can be reallocated as the needs of the business change. It also allows teams to spend time reducing technical debt or improving internal processes as they see fit, improving productivity and quality in the long run. 

“We have to adapt with governance, with spending models, with prioritization,” Intuit CIO Atticus Tysen said during a 2019 panel discussion. “The days of fixing the budget at the beginning of the year and then diligently forging ahead and delivering it with business cases are over. That’s very out of date.”

Business unit leaders may be skeptical at first glance: why pay upfront for more services than we know we need right now? A closer look reveals that this model often delivers more value to the business per dollar spent. For example:

Smart first steps

Shifting away from old ways and adopting a new funding model can seem like a daunting task, but you can get started by taking the following first steps:

Establish the baseline

First, establish the baseline against which you will measure the funding shift’s effectiveness. A technology leader must consider all the dimensions of service that will improve when making the shift. Two areas of improvement with high business impact are service quality and price. To establish the baseline for service quality, measure things like cycle time, defects, net promoter score, and the critical business metrics that are heavily influenced by IT solutions.

The price baseline is a little more difficult to establish. The most straightforward way we have found to do this is to look at the projects completed in the last fiscal year and tally the resources it took to complete them. Start with a breakdown of team members’ total compensation (salary plus benefits), add overhead (cost of hardware/software per employee, licenses, etc.), and then communicate that in terms of business value delivered. For example: “Project A cost $1.2M using 6 FTE and improved sales associates’ productivity by 10%.” When phrased this way, your audience will have a clear picture of what was delivered and how much it cost. This clear baseline of cost per business outcome delivered will serve as a helpful comparison when you shift to perpetual funding and need to demonstrate the impact.
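The same arithmetic can be scripted so the baseline is easy to refresh each cycle. Below is a minimal, illustrative Python sketch; the salary, benefits, and overhead figures are hypothetical and simply reproduce the “$1.2M using 6 FTE” framing above.

```python
# Illustrative sketch: roll up a fully loaded project cost and express it as
# cost per business outcome. All figures and names are hypothetical.

from dataclasses import dataclass

@dataclass
class TeamMember:
    salary: float          # annual base salary
    benefits_rate: float   # benefits as a fraction of salary, e.g. 0.25
    overhead: float        # hardware, software licenses, facilities, etc.

    @property
    def fully_loaded_cost(self) -> float:
        return self.salary * (1 + self.benefits_rate) + self.overhead

def project_baseline(members: list[TeamMember], outcome: str) -> str:
    total = sum(m.fully_loaded_cost for m in members)
    return f"Project cost ${total / 1e6:.1f}M using {len(members)} FTE and {outcome}"

# A hypothetical 6-person team, roughly matching the "$1.2M using 6 FTE" framing
team = [TeamMember(salary=150_000, benefits_rate=0.25, overhead=12_500) for _ in range(6)]
print(project_baseline(team, "improved sales associates' productivity by 10%"))
```

Rerunning the same calculation after the move to perpetual funding gives a like-for-like cost-per-outcome comparison.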

Pilot the shift with mature teams

The shift to a new funding model will be highly visible to all business leaders. To create the greatest chance of success, focus on selecting the right teams to trial the shift. The best candidates for early adoption are high-performing teams that know their roles in the product operating model, have strong credibility with business unit stakeholders, and experience continuous demand.

In our work with large organizations piloting this shift, e-commerce teams often fit the mold because they have a clear business stakeholder and have developed the skills and relationships needed to succeed in a product-based model. Customer success teams with direct influence on the growth and longevity of recurring revenue streams are also strong candidates as their solutions (such as customer portals and knowledge bases) directly influence the degree to which a customer adopts, expands, and renews a subscription product.

Teach your leaders the basics of team-based estimation

Estimation in the product-based funding model is different than in the project model. Under the new model, teams are funded annually (or another agreed-upon funding cycle) by business units. As funding shifts to an annual basis, so should cost estimation. Rather than scoping the price of a project and then building a temporary team to execute it (and then disbanding after execution), leaders should determine the size and price of the team that will be needed to support anticipated demand for the year, and then direct that team to initiate an ongoing dialogue with the business to continuously prioritize targeted business outcomes. 

When completing a team-based cost estimation, it is important to include the same cost elements (salary, benefits, hardware, licenses, etc.) that were used to establish your baseline so that you are comparing apples to apples when demonstrating the ROI of product-based funding. Where you will see a difference in the team-based model is in the resource capacity needed to meet demand. In a product model, a cross-functional team is perpetually dedicated to a business domain, so there is often no ramp-up time needed to acquire business and technical knowledge.

Since the teams have been perpetually dedicated to the domain, they are encouraged to take a longitudinal view of the technology estate and are able to quickly identify and make use of reusable components such as APIs and microservices, significantly improving time to market. For these reasons, among others, teams in the product-based operating model with perpetual funding can achieve more business value for less cost.

Pilot teams should work closely with the BU leadership providing the funding. Stakeholders should work together to generate a list of quantitative and qualitative business outcomes for the year (or other funding cycle) that also satisfy any requirements of existing funding processes operating on a project-by-project basis.

Talk with finance early and often

If you don’t already have a great relationship with finance, start working on it now. Your partnership with finance at the corporate and BU levels will be critical to executing your pilot and paving the way to wider enterprise adoption of team-based funding models. Ideally, leaders should engage with finance before, during, and after the team-based funding pilot so that everyone is in lockstep throughout. This alignment can help bolster adoption in other areas of the enterprise.

Each finance department has unique processes, cultures, and relationships with IT, so while you will need to tailor your approach, you should broach the following topics:

Evaluating and sustaining success

Measure success and demonstrate value

You will need to achieve success in the pilot to bolster adoption in other areas of the business. Your success needs to be communicated in terms that resonate with the business. As your pilot comes to an end, gather your baseline data and match it up with the results of your pilot. Put together a “roadshow deck” to show a side-by-side comparison of costs, resources, and business outcomes (Business KPIs, quality metrics, cycle times, NPS, etc.) before and after the shift to team-based funding.

Depending on your organization, it may be prudent to include other observations such as the number of change control meetings required under each funding model, indicators of team morale, and other qualitative benefits such as flexibility. Have conversations with other areas of the business that may benefit from team-based funding (start off with 1-on-1 meetings) and offer to bring in your partners from finance and the product teams as the discussion evolves. The most important part of your story is that the team-based funding model delivers more business impact at a lower cost than the old model.

Results governance

Establish light and flexible governance mechanisms to monitor the performance of teams operating in the team-based model. The purpose of these mechanisms is to validate that the increased level of autonomy is leading to high-priority business outcomes, not to review progress on design specs or other paper-based milestones. A $40B global manufacturing client adopting the team-based funding model established quarterly portfolio reviews in which BU leadership and the CIO review the teams’ results and the planned roadmap for the subsequent quarter. BU leadership is then given the opportunity to reallocate investment based on changing business needs or to recommend the teams proceed as planned.

Change management

It is important to communicate that this process requires constant buy-in from business units. While funds will be allocated annually, demand will need to be analyzed and projected on at least a quarterly basis, and funds should be reallocated accordingly. In cases where investments need to be altered in the middle of a fiscal year, it is important to note that the unit of growth in this model is a new cross-functional team focused on a targeted set of business outcomes. The idea is to create several high-performing, longstanding, cross-functional teams that have the resources needed to achieve targeted business outcomes, rather than throw additional contracted developers at teams as new scope is introduced. 

Making the shift from project-based funding to product team-based funding is a major cultural and operational change that requires patience and a willingness to iterate over time. When executed successfully, CIOs often have closer relationships with their business partners, as well as less expensive, more efficient ways to deliver higher-quality products.

This article originally appeared on CIO.com. Steven Norton co-authored the piece.

You have heard the hype: Data is the “new oil” that will power next-generation business models and unlock untold efficiencies. For some companies, this vision is realized only in PowerPoint slides. At Western Digital, it is becoming a reality. Led by Steve Phillpott, Chief Information Officer and head of the Digital Analytics Office (DAO), Western Digital is future-proofing its data and analytics capabilities through a flexible platform that collects and processes data in a way that enables a diverse set of stakeholders to realize business value.

As a hard disk drive (HDD) manufacturer and data storage company, Western Digital already has tech-savvy stakeholders with an insatiable appetite for leveraging data to drive improvement across product development, manufacturing, and global logistics. The nature of the company’s products requires engineers to model the most efficient designs for new data storage devices while also managing margins amid competitive market pressures.

Over the past few years, as Western Digital worked to combine three companies into one, which required ensuring both data quality and interoperability, Steve and his team had a material call to action to develop a data strategy that could:

To achieve these business outcomes, the Western Digital team focused on:

The course of this analytics journey has already shown major returns by enabling the business to improve collaboration and customer satisfaction, accelerate time to insight, improve manufacturing yields, and ultimately save costs.

Driving cultural change management and education

Effective CIOs have to harness organizational enthusiasm to explore the art of the possible while also managing expectations and instilling confidence that the CIO’s recommended course of action is the best one. With any technology trend, the top of the hype cycle brings promise of revolutionary transformation, but the practical course for many organizations is more evolutionary in nature. “Not everything is a machine learning use case,” said Steve, who started by identifying the problems the company was trying to solve before focusing on the solution.

Steve and his team then went on a roadshow to share the company’s current data and analytics capabilities and future opportunities. The team shared the presentation with audiences of varying technical aptitude to explain the ways in which the company could more effectively leverage data and analytics.

Steve recognized that while the appetite to strategically leverage data was strong, there simply were not enough in-house data scientists to achieve the company’s goals. There was also an added challenge of competing with silos of analytics capabilities across various functional groups. Steve’s team would ask, “could we respond as quickly as the functional analytics teams could?”

To successfully transform Western Digital’s analytics capabilities, Steve had to develop an ecosystem of partners, build out and enable the needed skill sets, and provide scalable tools to unlock the citizen data scientist. He also had to show his tech-savvy business partners that he could accelerate the value to the business units and not become a bureaucratic bottleneck. By implementing the following playbook, Steve noted, “we proved we can often respond faster than the functional analytics teams because we can assemble solutions more dynamically with the analytics capability building blocks.”

Achieving quick wins through incremental value while driving solutions to scale

Steve and his team live by the mantra that “success breeds opportunity.” Rather than ask for tens of millions of dollars and inflate expectations, the team in IT, called the High-Performance Computing group, pursued a quick win to establish credibility. After identifying hundreds of data sources, the team prioritized the use cases that hit the sweet spot of being solvable while clearly demonstrating incremental value.

For example, the team developed a machine learning application called DefectNet to detect test fail patterns on the media surface of HDDs. Initial test results showed promise of detecting and classifying images by spatial patterns on the media surface. Process engineers then could trace patterns relating to upstream equipment in the manufacturing facility. From the initial idea prototype, the solution was grown incrementally to scale, expanding into use cases in metrology anomaly detection. Now every media surface in production goes through the application for classification, and the solution serves as a platform that is used for image classification applications across multiple factories. 
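Western Digital has not published DefectNet’s internals, but the pattern described (classifying images of the media surface by spatial defect pattern) maps onto a standard convolutional image classifier. The sketch below is a generic illustration of that approach in Keras; the architecture, input size, and class names are assumptions, not the actual DefectNet implementation.

```python
# Illustrative CNN defect-pattern classifier in the spirit of the DefectNet
# use case. Architecture, input size, and class names are assumptions made
# for illustration; this is not Western Digital's actual implementation.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical defect categories, e.g. scratch, ring, cluster, edge, none

def build_defect_classifier(input_shape=(128, 128, 1)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_defect_classifier()
# model.fit(surface_images, defect_labels, validation_split=0.2, epochs=10)
```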

A similar measured approach was taken while developing a digital twin for simulating material movement and dispatching in the factory. An initial solution focused on mimicking material moves within Western Digital’s wafer manufacturing operations. The incremental value realized from smart dispatching created support and momentum to grow the solution through a series of learning cycles. Once again, a narrowly focused prototype became a platform solution that now supports multiple factories. One advantage of this approach: deployment to a new factory reuses 80% of the already developed assets, leaving only 20% site-specific customization.

Developing a DAO hybrid operating model

After earning credibility that his team could help the organization, Steve established the Digital Analytics Office (DAO), whose mission statement is to “accelerate analytics at scale for faster value realization.” Composed of data scientists, data engineers, business analysts, and subject matter experts, this group sought to provide federated analytics capabilities to the enterprise. The DAO works with business groups, which also have their own data scientists, on specific challenges that are often related to getting analytics capabilities into production, scaling those capabilities, and ensuring they are sustainable.

The DAO works across functions to identify where disparate analytics solutions are being developed for common goals using different methodologies and achieving varying outcomes. Standardizing on an enterprise-supported methodology and machine learning platform gives business teams faster time to insight and higher value.

To gain further traction, the DAO organized a hackathon that included 90 engineers broken into 23 teams that had three days to mock up a solution for a specific use case. A judging body then graded the presentations, ranked the highest value use cases, and approved funding for the most promising projects. 

In addition to using hackathons to generate new demand, business partners can also bring a new idea to the DAO. Those ideas are presented to the analytics steering committee to determine business value, priority and approval for new initiatives. A new initiative then iterates in a “rapid learning cycle” over a series of sprints to demonstrate value back to the steering committee, and a decision is made to sustain or expand funding. This allows Western Digital to place smart bets, focusing on “singles rather than home runs” to maintain momentum.

Building out the data science skill set

“Be prepared and warned: the constraint will be the data scientists, not the technology,” said Steve, who recognized early in Western Digital’s journey that he needed to turn the question of building skills on its head.

The ideal data scientist is driven by curiosity and can ask “what if” questions that look beyond a single dimension or plane of data. They can understand and build algorithms and have subject matter expertise in the business process, so they know where to look for breadcrumbs of insight. Steve found that these unicorns represented only 10% of data scientists in the company, while the other 90% had to be paired with subject matter experts to combine the theoretical expertise with the business process knowledge to solve problems.

While pairing people together was not impossible, it was inefficient. In response, rather than ask how to train or hire more data scientists, Steve asked, “How do we build self-service machine learning capabilities that require only the equivalent of a SQL-like skill set?” Western Digital began exploring Google’s and Amazon’s AutoML capabilities, in which machine learning is used to generate additional machine learning models. The vision is to abstract away the more sophisticated skills involved in developing algorithms so that business process experts can be trained to conduct data science exploration themselves.
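The internals of Google’s and Amazon’s AutoML services are beyond the scope of this article, but the core idea, letting the tooling search over candidate models and settings so the analyst only frames the question, can be illustrated with scikit-learn’s built-in search utilities. The sketch below is a simplified analogy, not the platform Western Digital uses.

```python
# Simplified analogy for "self-service" model building: the analyst supplies a
# table and a target; a search over candidate models and settings replaces
# hand-crafted algorithm selection. scikit-learn stands in here for a
# commercial AutoML service.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # placeholder for a business dataset

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# The search space is the "menu" the analyst never has to hand-tune.
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier()], "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipeline, search_space, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```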

Designing and future-proofing technology

Many organizations take the misguided step of formulating a data strategy solely around technology. The limitation of that approach is that companies risk over-engineering solutions with a slow time to value, and by the time products are in market, the solution may be obsolete. Steve recognized this risk and guided his team to develop a technology architecture that provides the core building blocks without locking in on a single tool. This fit-for-purpose approach allows Western Digital to future-proof its data and analytics capabilities with a flexible platform. The three core building blocks of this architecture are:

  • Collecting data with big data platforms
  • Processing data with analytics platforms; governing data
  • Accelerating value realization with data embedded in business capabilities

Designing and future-proofing technology: Collecting data

The first step is to be able to collect, store, and make data accessible in a way that is tailored to each company’s business model. Western Digital, for example, has significant manufacturing operations that require sub-second latency for on-premises data processing at the edge, while other capabilities can rely on cloud-based storage for the core business. Across both ends of that spectrum, Western Digital ingests 80-100 trillion data points into its analytics environment every day, with more and more analytical compute power pushed to the edge. The company also optimizes where it stores data, decoupling the data from the technology stack, based on the frequency with which the data must be analyzed. If the data is needed only a few times a year, the best low-cost option is to store it in the cloud. Western Digital’s common data repository spans processes across all production environments and is structured so that it can be accessed by various types of processing capabilities.
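The placement logic described above, routing data based on latency requirements and how often it must be analyzed, can be thought of as a simple policy rule. The following sketch is a hypothetical illustration of such a rule; the tier names and thresholds are invented and are not Western Digital’s actual placement criteria.

```python
# Hypothetical data-placement rule: route a dataset to a storage tier based on
# latency needs and how often it must be analyzed. Tier names and thresholds
# are invented for illustration; they are not Western Digital's criteria.

from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    accesses_per_year: int
    max_latency_ms: float  # how quickly results are needed

def choose_tier(ds: Dataset) -> str:
    if ds.max_latency_ms < 1_000:      # sub-second: process at the edge
        return "edge (on-premises)"
    if ds.accesses_per_year >= 52:     # analyzed weekly or more often
        return "cloud hot storage"
    return "cloud archival storage"    # touched only a few times a year

for ds in [Dataset("factory-sensor-stream", 10_000, 200),
           Dataset("weekly-logistics-rollup", 52, 5_000),
           Dataset("quarterly-yield-report", 4, 60_000)]:
    print(f"{ds.name} -> {choose_tier(ds)}")
```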

Further, as Western Digital’s use cases became more latency-dependent, it was evident that they required core cloud-based big data capabilities closer to where the data was created. Western Digital wanted to enable its user community by providing a self-service architecture. To do this, the team developed and deployed a PaaS (Platform as a Service) called the Big Data Platform Edge Architecture in Western Digital’s factories, built with cloud-native technologies and DevOps best practices.

    Future-proofing technology: Process & govern data

    With the data primed for analysis, Western Digital offers a suite of tools that allow its organizations to extract, govern, and maintain master data. From open source Hadoop to multi-parallel processing, NoSQL and TensorFlow, data processing capabilities are tailored to the complexity of the use case and the volume, velocity, and variety of data.

    While these technologies will evolve over time, the company will continually need to sustain data governance and quality. At Western Digital, everyone is accountable for data quality. To foster that culture, the IT team established a data governance group that identifies, educates and guides data stewards in the execution of data quality delivery. With clear ownership of data assets, the trust and value of data sets is scalable.

    Beyond ensuring ownership of data quality, the data governance group also manages platform decisions, such as how to structure the data warehouse, so that the multiple stakeholders are set up for success.

    Future-proofing technology: Realize value

    Data applied in context transforms numbers and characters into information, knowledge, insight, and ultimately action. In order to realize the value of data in the context of business processes – either looking backward, in real time, or into the future – Western Digital developed four layers of increasingly advanced capabilities:

    By codifying the analytical service offerings in this way, business partners can use the right tool for the right job. Rather than tell people exactly what tool to use, the DAO focuses on enabling the fit-for-purpose toolset under the guiding principle that whatever is built should have a clear, secure, and scalable path to launch with the potential for re-use.

The platform’s reusability tremendously accelerates time to scale and business impact.

Throughout this transformation, Steve Phillpott and the DAO have helped Western Digital evolve its mindset as to how the company can leverage data analytics as a source of competitive advantage. The combination of a federated operating model, new data science tools, and a commitment to data quality and governance has allowed the company to define its own future, focused on solving key business problems no matter how technology trends change.

    Situation

    A client sought to develop a comprehensive understanding of its enterprise architecture and how it could be used to support business strategy.

    A business division within a large US-based employer services provider realized that it had an incomplete understanding of its existing enterprise architecture and was not up-to-date on the firm’s overall architecture standards. The group wanted to create a next-generation enterprise architecture that would support overall business strategy and help drive desired outcomes.

    Approach

    Metis Strategy established a current-state understanding of the company’s enterprise architecture, developed a desired future-state vision, and crafted a strategy for implementation.

    To help the client develop a future-state vision for its enterprise architecture, Metis Strategy undertook the following activities:

    Outcome

    Metis Strategy presented a strategy roadmap to help business leaders move the company toward its future-state enterprise architecture vision and give stakeholders a holistic view of the firm’s EA.

    Metis Strategy presented the client with documented recommendations for EA strategy and implementation, including:

    From detailed homework review to back office automation, progress in artificial intelligence will continue to explode in the year ahead. In 2018, Metis Strategy interviewed nearly 40 CIOs, CDOs and CTOs of companies with over $1 billion in revenue as part of our Technovation podcast and column. When asked to identify the emerging technologies that are of growing interest or are making their way onto their 2019 roadmap, 75 percent of the technology leaders highlighted artificial intelligence, while 40 percent said blockchain and 13 percent cited the Internet of Things.

    AI, an umbrella term for technologies that enable machines to accomplish tasks that previously required human intelligence, could rapidly upend the competitive landscape across industries. While many companies continue to explore AI business cases, seek executive support, and mature their foundational IT and data capabilities, a growing number of enterprises are deploying the technology at scale.

    1. Walmart deploys hundreds of bots to automate back office processes

Walmart, the world’s largest company by revenue, has deployed more than 500 bots into its internal environment to automate processes and drive efficiencies. Early use cases focused on automating processes such as accounts payable, accounts receivable, and compensation and benefits. More recently, robotic process automation (RPA) has been applied to Walmart’s Shared Services organization, where it automates ERP exception handling such as matching purchase orders to invoices.

As expectations rise for technology to unlock business value, Enterprise CIO Clay Johnson is looking to scale AI across the company. Having recently adopted a product model and end-to-end ownership, the company is well positioned to apply machine learning to everything from merchandising operations, which coordinates supplier interactions and affects in-store displays across more than 5,000 US stores, to improving the productivity of the world’s largest private workforce.

For more insight from Clay, listen to the full interview on the Technovation podcast.

    2. Western Digital saves CapEx by using AI to optimize test equipment

One of the biggest expenses in hard drive manufacturing can be test equipment, so for $19 billion Western Digital, optimizing the test environment can save hundreds of millions of dollars in CapEx. Given the foresight with which the company has developed its AI and big data strategy, it’s no surprise that among its most advanced AI use cases is optimizing that test environment. “We’re using advanced machine learning and convolutional neural networks to improve our wafer yield management,” said CIO Steve Phillpott. “And we’re using those same algorithms to start identifying and optimizing our test processes, which can help us save hundreds of millions of dollars in capital.”

    With a global workforce of 68,000, Western Digital has built a big data and analytics platform that supports a variety of workloads, architectures, and technologies to deliver value to business users of all skill levels. While entry-level analysts can leverage the platform to visualize data in Tableau or perform ad-hoc queries in RStudio, data scientists can make use of advanced techniques to monitor and optimize manufacturing and operations capabilities.

    As Western Digital finds increasingly advanced AI use cases in 2019, its flexible platform ensures that the organization continues realizing value while its analytics capabilities mature.

For more insight from Steve, listen to the full interview on the Technovation podcast.

    3. Bank of America and Harvard team up on responsible AI development

As companies race to develop and deploy increasingly powerful AI systems, there’s a growing recognition of the responsibility companies have to mitigate unintended consequences. Technology leaders have noted that engineers often don’t have the capacity to fully imagine the implications of the technology they develop. That’s one reason why Bank of America (BoA) Chief Operations and Technology Officer Cathy Bessant partnered with Harvard Kennedy School to create the Council on the Responsible Use of AI.

While BoA’s most visible application of AI may be Erica, its virtual banking assistant, the Fortune 25 company is increasingly exploring how AI can be applied to fraud detection and anti-money laundering. Cathy recognizes that the bank must maintain transparency into its decision-making models and ensure that outcomes are unbiased. Further, as employees begin to question how AI might impact their jobs, Cathy is thinking proactively about how to guide career transformation and development in the age of AI. To explore these critical questions, the Council on the Responsible Use of AI will convene leaders from government, business, academia, and civil society, including Bessant, to discuss emerging legal, moral, and policy implications of AI.

    “If you’re a company where your business strategy can be described by the two words, ‘responsible growth,’ then the concept of responsible AI is not a stretch,” says Cathy. “In fact, it is the tough soul of who we are.”

For more insight from Cathy, listen to the full interview on the Technovation podcast.

    4. 7-Eleven leverages chatbots and voice to innovate on the user experience

7-Eleven defined convenience for a generation, but today, the most convenient storefront is the one in consumers’ pockets. In a 2018 interview, Gurmeet Singh described how the company uses new technologies to reduce friction for customers and improve their overall experience.

    7-Eleven thinks about technology in two broad categories: proven technologies that are ready to scale, and emerging technologies. For emerging technologies, the company has adopted a fast follower approach, which Gurmeet describes as “watch closely and actively experiment.” In addition to operating several global R&D labs, Gurmeet has tasked the company’s CTO with testing new technologies and conducting proof-of-concept tests. Already, 7-Eleven has deployed a Facebook Messenger chatbot that allows users to sign up for the 7Rewards® loyalty program, find a store location, learn about the latest discount offers, and more. The bot, which was developed through a partnership with the tech firm Conversable, is part of Gurmeet’s strategy to redefine the customer experience through technology.

In 2019, 7-Eleven’s technology organization will leverage open-source AI libraries such as TensorFlow to explore how AI can streamline back-office processes such as merchandising and operations. They’ll also look to apply voice interfaces to redefine the customer experience.

For more insight from Gurmeet, listen to the full interview on the Technovation podcast.

    5. At 174-year-old Pearson, AI is at the heart of the latest product innovations

Albert Hitchcock is the CIO turned COO and CTO of 174-year-old education company Pearson, where he oversees not just IT and digital transformation but also product development, procurement, supply chain, customer service, and more. Given his broad purview, Hitchcock is well positioned to apply AI across the business. “AI is not five years out. It’s real and it’s happening today,” he said. “We’re looking at how we transform all spokes of our business using AI, from how we transform customer call centers using chatbots to how we bring AI, learning design, pedagogy, and insights into brain functions to create a personalized learning experience.”

Machine learning is at the heart of many of Pearson’s most recent product innovations, from authentic assessments and automated essay scoring to adaptive learning and intelligent tutoring. To accelerate the infusion of AI into current and future products and services, the company has hired Intel veteran Milena Marinova as its first SVP of AI Products and Solutions. While Marinova’s initial focus is updating Pearson’s math homework tool to provide more detailed feedback, the vision is to create omniscient virtual tutors personalized for every student. “[Education] is different for every human and therefore you can potentially accelerate learning and delivery, improve outcomes, and help everyone progress in their lives of learning,” notes Hitchcock. “AI is at the center of that thinking.”

For more insight from Albert, listen to the full interview on the Technovation podcast.

    “Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of 10.” –Jeff Bezos

    Leading organizations like Amazon, Walmart, Uber, Netflix, Google X, Intuit and Instagram have all vigorously embraced the philosophy that rapid experimentation is the most efficient and effective path to meeting customer needs. In an interview with Metis Strategy’s Peter High, entrepreneur Peter Diamandis explains that the most nimble and innovative companies like Uber and Google X “are running over 1,000 experiments per year and are creating a culture that allows for rapid experimentation and constant failure and iteration.”

    Traditional strategic planning taught us to study all the pieces on the chess board, develop a multi-year roadmap, and then launch carefully sculpted new products or services. Executives believed that there was only one chance to “get it right,” which often left organizations allowing perfect to be the enemy of the good.

    However, in the digital era, decision velocity is more important than perfect planning.

    Accelerating decision velocity through experimentation

    The most successful organizations cede the hubris of believing they will always be able to perfectly predict customer or user demands, and instead let data—not opinions—guide decision making. The data that informs decision making is derived from a series of experiments that test a hypothesis against a small but representative sample of a broader population.

    The experiment should examine three questions

    And then lead to one of three conclusions:

Often, experiments fall into the second category, in which case organizations demonstrate enough viability to iterate on the idea and further hone the product-market fit. The key is to gain this insight early and course-correct as necessary. It is easy to correct being two degrees off course every ten feet, but being two degrees off course over a mile will cause you to miss your target considerably (+/- 0.35 feet vs. +/- 184 feet).
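The numbers in parentheses follow from basic trigonometry: the lateral miss equals the distance traveled times the tangent of the heading error. A quick check, assuming a mile of 5,280 feet:

```python
# Lateral miss from a two-degree heading error, at two distances.
import math

def lateral_miss(distance_ft: float, error_deg: float = 2.0) -> float:
    return distance_ft * math.tan(math.radians(error_deg))

print(round(lateral_miss(10), 2))    # ~0.35 feet over ten feet
print(round(lateral_miss(5280), 0))  # ~184 feet over a mile
```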

One simple example: Macy’s was evaluating whether to build a feature that would allow customers to search for a product based on a picture taken with their smartphone. Other competitors had developed something similar, but before Macy’s invested significant sums of money, the retailer wanted to know if the idea was viable.

    To test the idea, Macy’s placed a “Visual Product Search” icon on its homepage and monitored the click-through behavior. While Macys.com did not yet have the capability to allow for visual search, tens of thousands of customers clicked through, and Macy’s was able to capture emails of those that wanted to be notified when the feature was ready.

    This was enough to begin pursuing the idea further. Yasir Anwar, the former CTO at Macy’s, said teams are “given empowerment to go and test what is best for our customers, to go and run multiple experiments, to test with our customers, (and) come back with the results.”

To accelerate decision velocity, we recommend that all companies develop a framework to create a “Business Experimentation Lab” similar to those at Amazon and Walmart. This Business Experimentation Framework (BEF) should outline how people with the right mindset, enabled by technology (though sometimes technology is not necessary), can leverage iterative processes to make better-informed, yet faster, decisions. Doing so frees organizations from entrenched, bureaucratic practices and provides mechanisms for rapidly determining, from a list of possibilities, the best option for improving customer experiences.

    A Business Experimentation Framework is crucial to:

    Business experimentation through A/B testing at Walmart

While nearly every department can introduce some flavor of experimentation into its operating model, a core component and example in eCommerce is A/B testing, or split testing. A/B testing is a way to compare two versions of a single variable and determine which approach is more effective.
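Behind each experiment is usually a simple statistical question: did variant B’s conversion rate differ from variant A’s by more than chance alone would explain? The sketch below shows a generic two-proportion z-test of the sort commonly used to call A/B tests; the traffic and conversion numbers are made up, and the approach is illustrative rather than a description of Walmart’s specific methodology.

```python
# Generic two-proportion z-test for an A/B test. Traffic and conversion
# numbers are hypothetical; this is not Walmart's specific methodology.
import math

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

z, p = ab_test(conv_a=1200, n_a=50_000, conv_b=1320, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real difference
```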

At a recent meetup at Walmart’s Bay Area office, eCommerce product and test managers discussed the investments, processes, and roles required to sustain A/B testing velocity while ensuring clean, accurate, and controllable experiments. Walmart began its journey toward mass A/B testing with a top-down decree—“What we launch is what we test”—and is now able to run roughly 25 experiments at any given time, having grown the number of tests each year from 70 in 2016 to 253 in 2017.

    To enable A/B testing at this velocity and quality, Walmart developed a Test Proposal process that organizes A/B tests and provides metrics for test governance, so teams can quickly make decisions at the end of a test. A Test Proposal defines:

    To facilitate the lasting adoption of a Business Experimentation Framework, organizations must staff critical roles like test managers, development engineers, and test analysts. Walmart, for instance, has created the following roles to enable the launch and analysis of 250 tests per year:

    Creating an experimentation-oriented organization

    Institutionalizing a bias for experimentation is not easy. We have seen several barriers to adopting a Business Experimentation Framework, such as:

    Typically, enthusiasm for experimentation gains momentum with one beachhead department. That department develops a test-approval process that is supported by the tools and data necessary to test, analyze, learn, and make accurate go/no-go decisions.

    Here is a blueprint for introducing a test-first culture:

If done well, establishing a Business Experimentation Framework will allow organizations to figure out what matters to most customers, within a limited amount of time, for a limited cost, and with a risk-reward tradeoff that will ultimately work in their favor.

    As Bezos said, “We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold. Big winners pay for so many experiments.”

    12/5/17

    By Chris Davis and Brandon Metzger for CIO.com

    Technology is transforming our world at an unprecedented rate. New technologies like virtual assistants and augmented reality are changing consumer expectations faster than ever. The impact of cybersecurity breaches is intensifying. And digital enablers are allowing upstarts to steal market share from incumbents in a matter of months or years, rather than decades.

While it is tempting to believe that these disruptive times will eventually stabilize, our analysis suggests that the rate of technological progress will only accelerate. If this year indeed represents both the fastest rate of change we have ever experienced, and the slowest rate of change we ever will experience—as many experts have posited—then this raises a critical question for executives in all industries:

    How do I understand the consequences of accelerating technological change, and position my company to capitalize on the opportunities presented by emerging paradigms?

    To accomplish this, companies can develop innovation systems that consist of a variety of methods and processes, ranging from strategic foresight to a portfolio of corporate innovation programs. One such program — innovation labs — is gaining steam in corporate America, with some of the biggest and best-known companies opening new outposts focused on developing and scaling breakthrough technologies, processes, and business models.

    Through Metis Strategy’s work with Fortune 500 companies and rapidly growing businesses alike, we have identified seven critical factors to consider when creating such a corporate innovation lab.

    1. Define the charter

The charter is a concise description of the innovation lab’s objectives and its method for achieving them. But a charter is not just lofty PR: many of the best innovation labs use their charter as a guiding light that provides a deeper sense of purpose and direction. Just as important, the charter should also clarify what the lab is not focused on.

    Consider the differences between the charters of Lowe’s Innovation Lab and of Bayer’s U.S. Innovation Center and Science Hub:

    While Lowe’s focuses on identifying and utilizing new technologies to enhance the retail experience, Bayer’s priority is forming partnerships to accelerate drug discovery. Given their differences, it should be no surprise that these innovation labs utilize different metrics, governance models, funding sources, and innovation ecosystems to accomplish their objectives.

    2. Identify innovation metrics

    Large companies thrive when business conditions are certain and their targets are clear. While execution metrics can measure the performance of existing business models, they are less capable of accurately quantifying progress at innovation labs, where the work is sometimes less precise, longer term, or more conceptual. Kyle Nel, Executive Director of Lowe’s Innovation Lab, has noted that “it does not make sense to apply mature metrics to something in its nascent form.”

    Innovation labs can develop a portfolio of innovation metrics to measure not only the results of the innovation effort, but also the preconditions and innovation process itself.

With this focus on measuring both process and progress, innovation metrics help labs assess their innovation maturity, and they can also bolster the support of executive sponsors, especially in the early days. For example, Harvard Business Review notes that “revenue generated by new products,” an output metric, is the metric most commonly used by senior innovation executives. By establishing a portfolio of innovation metrics that also includes input and development metrics, the conversation can shift from focusing solely on results to also tracking the maturation of the innovation capability. This ability to develop unique innovation metrics has helped Nel push back when Lowe’s executives expect significant revenue growth from new and disruptive products.

    3. Employ a process for innovation

    Innovation is as much a cultural attitude as it is a business process. A generic approach to innovation may begin by defining the customer and uncovering their unmet need, formulating a hypothesis on what product or service the company can offer to meet that need, and validating the hypothesis by using customer feedback to rapidly experiment and iterate. Further, to foster the right mindset, innovation labs should:

    That said, many of the best labs develop unique processes influenced by their charter. Consider Lowe’s Innovation Lab (LIL), which uses a narrative-driven approach to identify and articulate opportunities. First, LIL conducts market research, compiles trend data, and collects customer feedback on unmet needs and pain points. Next, LIL shares this information with science fiction writers who create strategic documents in the form of comic books, which follow characters through a narrative arc that illustrates a new solution to the character’s problem. Then Lowe’s executives use the comic books to make prioritization decisions, and, finally, LIL works with its partners to create the solutions introduced in the comics.

Another example of an organization employing a unique process is X, Alphabet’s “moonshot factory,” which is charged with creating world-changing companies that could eventually become the next Google. X adheres to a three-part formula for identifying opportunities: (1) it must address a huge problem, (2) it must propose a radical solution, and (3) it must employ a relatively feasible technology.

    Using this formula, X has spun out numerous subsidiaries under the Alphabet umbrella. One of those companies is Waymo, the autonomous vehicle pioneer that Morgan Stanley recently suggested could be worth $70 billion.

    4. Who and how to recruit

    If companies believe an innovation lab will help them more effectively navigate the waters of disruption, it is essential that they recruit for passion and cognitive diversity, rather than just skill. Labs often include a wide range of technical and non-technical roles, from data scientists and designers to experts in anthropology and psychology. Breadth and depth of both skill set and mindset are essential components of a successful innovation lab that creatively explores new technologies and business models.

    Ideal job candidates should be innate risk-seekers, strong questioners and connectors, and comfortable with failure and restarts. Deloitte Center for the Edge Co-Chair John Hagel described people who have these traits as personifying the “passion of the explorer.”

    Organizations searching for these passionate explorers will find advantages and disadvantages in looking both internally and externally. Internal employees may more deeply understand the customer, but they also may have difficulty looking at problems from a different perspective. External hires may bring new viewpoints and skills, but recruitment may prove challenging.

    Companies can use several tactics to attract talent. Buzzfeed’s Open Lab for Journalism, Technology and the Arts, for example, targets specific individuals and groups based on their past projects. Recruitment efforts have been successful, in part, because Buzzfeed offers company resources that support their creative freedoms. Alternatively, companies can be deliberate in how they share their innovation initiatives with the public. For example, Airbus has a blog that reports news from the company’s A3 innovation lab, Airbus Ventures, and from other teams across its innovation ecosystem. This type of focused communication both targets and attracts an audience of individuals who are the most knowledgeable and interested in innovation currently taking place within the industry, and, in so doing, Airbus can create an informal pool of potential new hires.

    5. Establish a funding source and budget

    The process for establishing a funding source will differ depending on the company. For example, Allstate CIO Suren Gupta has described how a formal Innovation Council evaluates ideas and allocates funding. At other companies, if the innovation ties closely to a particular business unit, then funding may come from that group’s budget.

    Though the specifics will vary, a generic process for establishing funding may include

    The actual size of the budget depends on whether a lab is building the technology itself, partnering with other organizations, or acquiring a company, product or talent. Amazon and Google have spent millions of dollars developing parcel delivery drones. Meanwhile, companies like UPS and Daimler AG have opted to partner with—and make strategic investments in—established drone makers. This lowers both the risk and the cost of innovation while still allowing the company to develop new capabilities.

Regardless of how funding is established—or the size of the budget itself—it is critical to measure how much money is spent at each stage of the process: preparation (i.e., percentage of capital budget allocated to innovation projects), development (i.e., R&D spending at each phase of the innovation process), and results (i.e., percentage of sales from innovation projects). As with the portfolio approach to general innovation metrics, the use of financial metrics across the innovation lifecycle reduces the focus on ROI, which can cripple innovative projects in the early stages.

    6. Where to locate the lab

    Silicon Valley is the quintessential innovation ecosystem. The region’s unique characteristics undoubtedly make Silicon Valley the right innovation ecosystem for many labs—particularly those charged with discovering and/or acquiring startups, or gaining business and technical intelligence about emerging technologies.

    Other locations should not be overlooked, however. Cities such as New York City, Austin, and Chicago in the U.S.; London, Paris and Berlin in Europe; Tel Aviv in the Middle East; and Singapore, Shanghai and Tokyo in Asia all offer rapidly maturing innovation ecosystems, each with their own unique advantages and disadvantages.

To determine the ideal location for an innovation lab, consider which ecosystem characteristics best support the objectives defined in the charter.

For example, former ADP CTO Keith Fulton (now CIO of Bank Systems with Fiserv) has described how ADP’s innovation lab is focused on creating “best-in-class user experiences.” Accordingly, ADP opened its second lab in Midtown Manhattan, since proximity to top visual design and creative firms provides access to high concentrations of the right skill sets.

    7. Develop a strategy for successfully integrating innovation

    There is one final challenge, even for innovation labs that successfully deliver results in accordance with their charter: integrating the innovation with the core organization. From Kodak’s invention of the digital camera to Xerox pioneering the GUI, there is no shortage of companies that failed to capitalize on their innovations.

    To be sure, innovation integration is the culmination of an innovation lab successfully delivering on its charter, so the way in which the company captures the value of the innovation very much depends on decisions that were made along the way. We recommend that executive sponsors and innovation leaders discuss early and often what successful innovation integration looks like. Here are a few key questions to consider:

While there is no set template for innovation integration, a well-articulated vision of what success will look like should be a priority from the start, not an afterthought.

More than ever before, established companies are struggling to keep up with both the deployment of new technology by their competitors and consumers’ rapidly changing expectations. Careful consideration of these seven factors can empower companies to build an innovation lab that fosters energetic challenges to preconceived notions, creative experimentation with new technologies and business models, and thorough exploration of potential products and services that will enable the company to survive—and thrive—amidst the accelerating forces of disruption.