This article originally appeared on CIO.com. Steven Norton co-authored the piece.
You have heard the hype: Data is the “new oil” that will power next-generation business models and unlock untold efficiencies. For some companies, this vision is realized only in PowerPoint slides. At Western Digital, it is becoming a reality. Led by Steve Phillpott, Chief Information Officer and head of the Digital Analytics Office (DAO), Western Digital is future-proofing its data and analytics capabilities through a flexible platform that collects and processes data in a way that enables a diverse set of stakeholders to realize business value.
As a computer Hard Disk Drive (HDD) manufacturer and data storage company, Western Digital already has tech-savvy stakeholders with an insatiable appetite for leveraging data to drive improvement across product development, manufacturing and global logistics. The nature of the company’s products requires engineers to model out the most efficient designs for new data storage devices, while also managing margins amid competitive market pressures.
Over the past few years, as Western Digital worked to combine three companies into one, which required ensuring both data quality and interoperability, Steve and his team had a material call to action to develop a data strategy that could:
To achieve these business outcomes, the Western Digital team focused on:
The course of this analytics journey has already shown major returns by enabling the business to improve collaboration and customer satisfaction, accelerate time to insight, improve manufacturing yields, and ultimately save costs.
Driving cultural change management and education
Effective CIOs have to harness organizational enthusiasm to explore the art of the possible while also managing expectations and instilling confidence that the CIO’s recommended course of action is the best one. With any technology trend, the top of the hype cycle brings promise of revolutionary transformation, but the practical course for many organizations is more evolutionary in nature. “Not everything is a machine learning use case,” said Steve, who started by identifying the problems the company was trying to solve before focusing on the solution.
Steve and his team then went on a roadshow to share the company’s current data and analytics capabilities and future opportunities. The team shared the presentation with audiences of varying technical aptitude to explain the ways in which the company could more effectively leverage data and analytics.
Steve recognized that while the appetite to strategically leverage data was strong, there simply were not enough in-house data scientists to achieve the company’s goals. There was also the added challenge of competing with silos of analytics capabilities across various functional groups. Steve’s team would ask, “Could we respond as quickly as the functional analytics teams?”
To successfully transform Western Digital’s analytics capabilities, Steve had to develop an ecosystem of partners, build out and enable the needed skill sets, and provide scalable tools to unlock the citizen data scientist. He also had to show his tech-savvy business partners that he could accelerate the value to the business units and not become a bureaucratic bottleneck. By implementing the following playbook, Steve noted, “we proved we can often respond faster than the functional analytics teams because we can assemble solutions more dynamically with the analytics capability building blocks.”
Achieving quick wins through incremental value while driving solutions to scale
Steve and his team live by the mantra that “success breeds opportunity.” Rather than ask for tens of millions of dollars and inflate expectations, an IT team called the High-Performance Computing group pursued a quick win to establish credibility. After identifying hundreds of data sources, the team prioritized use cases that hit the sweet spot: problems that were solvable and clearly exhibited incremental value.
For example, the team developed a machine learning application called DefectNet to detect test fail patterns on the media surface of HDDs. Initial test results showed promise in detecting and classifying images by spatial patterns on the media surface. Process engineers could then trace patterns back to upstream equipment in the manufacturing facility. From the initial prototype, the solution grew incrementally to scale, expanding into use cases such as metrology anomaly detection. Now every media surface in production goes through the application for classification, and the solution serves as a platform for image classification applications across multiple factories.
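To make the pattern-classification idea concrete, here is a minimal sketch of the kind of image classifier such an application might wrap, assuming a small convolutional network in Keras. Western Digital has not published DefectNet’s internals, so the architecture, input shape, and class names below are illustrative assumptions.

```python
# Minimal sketch of a defect-pattern classifier in the spirit of DefectNet.
# The architecture, input shape, and classes are illustrative assumptions,
# not Western Digital's actual implementation.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_PATTERN_CLASSES = 5  # hypothetical: scratch, ring, cluster, edge, none

def build_defect_classifier(input_shape=(128, 128, 1)):
    """Small CNN mapping a media-surface test image to a spatial-pattern class."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_PATTERN_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_defect_classifier()
# model.fit(train_images, train_labels, epochs=10)  # hypothetical labeled data
```

Once a model like this classifies every surface image, tracing a pattern class back to upstream equipment becomes a join between the classifications and tool-history data.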
A similar measured approach was taken while developing a digital twin for simulating material movement and dispatching in the factory. An initial solution focused on mimicking material moves within Western Digital’s wafer manufacturing operations. The incremental value realized from smart dispatching created support and momentum to grow the solution through a series of learning cycles. Once again, a narrowly focused prototype became a platform solution that now supports multiple factories. One advantage of this approach: deployment to a new factory reuses 80% of the already developed assets, leaving only 20% for site-specific customization.
Developing a DAO hybrid operating model
After earning credibility that his team could help the organization, Steve established the Digital Analytics Office (DAO), whose mission statement is to “accelerate analytics at scale for faster value realization.” Composed of data scientists, data engineers, business analysts, and subject matter experts, this group sought to provide federated analytics capabilities to the enterprise. The DAO works with business groups, who also have their own data scientists, on specific challenges that are often related to getting analytics capabilities into production, scaling those capabilities, and ensuring they are sustainable.
The DAO works across functions to identify where disparate analytics solutions are being developed for common goals, using different methodologies and achieving varying outcomes. Standardizing on an enterprise-supported methodology and machine learning platform gives business teams faster time to insight and higher value.
To gain further traction, the DAO organized a hackathon that included 90 engineers broken into 23 teams that had three days to mock up a solution for a specific use case. A judging body then graded the presentations, ranked the highest value use cases, and approved funding for the most promising projects.
In addition to using hackathons to generate new demand, business partners can also bring a new idea to the DAO. Those ideas are presented to the analytics steering committee to determine business value, priority and approval for new initiatives. A new initiative then iterates in a “rapid learning cycle” over a series of sprints to demonstrate value back to the steering committee, and a decision is made to sustain or expand funding. This allows Western Digital to place smart bets, focusing on “singles rather than home runs” to maintain momentum.
Building out the data science skill set
“Be prepared and warned: the constraint will be the data scientists, not the technology,” said Steve, who recognized early in Western Digital’s journey that he needed to turn the question of building skills on its head.
The ideal data scientist is driven by curiosity and can ask “what if” questions that look beyond a single dimension or plane of data. They can understand and build algorithms and have subject matter expertise in the business process, so they know where to look for breadcrumbs of insight. Steve found that these unicorns represented only 10% of data scientists in the company, while the other 90% had to be paired with subject matter experts to combine the theoretical expertise with the business process knowledge to solve problems.
While pairing people together was not impossible, it was inefficient. In response, rather than ask how to train or hire more data scientists, Steve asked, “How do we build self-service machine learning capabilities that only require the equivalent of an SQL-like skill set?” Western Digital began exploring Google’s and Amazon’s AutoML capabilities, in which machine learning is used to generate machine learning models. The vision is to abstract away the more sophisticated skills involved in developing algorithms so that business process experts can be trained to conduct data science exploration themselves.
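To give a feel for what an “SQL-like skill set” interface looks like, the sketch below uses the open-source TPOT library, which automatically searches over preprocessing and model pipelines. It is a conceptual stand-in for the Google and Amazon AutoML services mentioned above, not Western Digital’s actual tooling.

```python
# Conceptual AutoML illustration with the open-source TPOT library: the user
# supplies data and a target; the search over pipelines is automated.
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # stand-in dataset for illustration
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

automl = TPOTClassifier(generations=5, population_size=20, random_state=42)
automl.fit(X_train, y_train)              # ML generating ML pipelines
print(automl.score(X_test, y_test))
automl.export("best_pipeline.py")         # emit the winning pipeline as code
```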
Designing and future-proofing technology
Many organizations take the misguided step of formulating a data strategy solely around technology. The limitation of that approach is that companies risk over-engineering solutions with a slow time to value; by the time products are in market, the solution may be obsolete. Steve recognized this risk and guided his team to develop a technology architecture that provides the core building blocks without locking in on a single tool. This fit-for-purpose approach allows Western Digital to future-proof its data and analytics capabilities with a flexible platform. The three core building blocks of this architecture are:
Designing and future-proofing technology: Collecting data
The first step is to be able to collect, store, and make data accessible in a way that is tailored to each company’s business model. Western Digital, for example, has significant manufacturing operations that require sub-second latency for on-premise data processing at the edge, while other capabilities can afford cloud-based storage for the core business. Across this spectrum, Western Digital ingests 80 to 100 trillion data points into its analytics environment daily, with ever more analytical compute power pushing to the edge. The company also optimizes where it stores data, decoupling the data and technology stack, based on the frequency with which the data must be analyzed. If the data is only needed a few times a year, the best low-cost option is to store it in the cloud. Western Digital’s common data repository spans processes across all production environments and is structured in a way that can be accessed by various types of processing capabilities.
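As a toy illustration of this tiering logic, the sketch below routes a data set based on latency needs and access frequency. The thresholds and tier names are assumptions for illustration, not Western Digital’s actual policy.

```python
# Toy sketch of frequency- and latency-based storage tiering.
# Thresholds and tier names are illustrative assumptions.
def choose_storage_tier(reads_per_year: int, max_latency_ms: float) -> str:
    """Route a data set to a storage tier based on its access pattern."""
    if max_latency_ms < 1000:      # sub-second processing must stay at the edge
        return "on-premise edge cluster"
    if reads_per_year <= 4:        # touched only a few times a year
        return "low-cost cloud archive"
    return "cloud warehouse"       # regularly analyzed core business data

print(choose_storage_tier(reads_per_year=2, max_latency_ms=60_000))
# -> low-cost cloud archive
```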
Further, as Western Digital’s use cases became more latency dependent, it was evident that they required core cloud-style big data capabilities closer to where the data was created. Western Digital wanted to enable its user community by providing a self-service architecture. To do this, the team developed and deployed a PaaS (Platform as a Service) called the Big Data Platform Edge Architecture in Western Digital’s factories, using cloud-native technologies and DevOps best practices.
Designing and future-proofing technology: Processing and governing data
With the data primed for analysis, Western Digital offers a suite of tools that allow its organizations to extract, govern, and maintain master data. From open source Hadoop to massively parallel processing, NoSQL, and TensorFlow, data processing capabilities are tailored to the complexity of the use case and the volume, velocity, and variety of data.
While these technologies will evolve over time, the company will continually need to sustain data governance and quality. At Western Digital, everyone is accountable for data quality. To foster that culture, the IT team established a data governance group that identifies, educates, and guides data stewards in delivering data quality. With clear ownership of data assets, the trust in and value of data sets can scale.
Beyond ensuring ownership of data quality, the data governance group also manages platform decisions, such as how to structure the data warehouse, so that the multiple stakeholders are set up for success.
Designing and future-proofing technology: Realizing value
Data applied in context transforms numbers and characters into information, knowledge, insight, and ultimately action. In order to realize the value of data in the context of business processes – either looking backward, in real time, or into the future – Western Digital developed four layers of increasingly advanced capabilities:
By codifying the analytical service offerings in this way, business partners can use the right tool for the right job. Rather than tell people exactly what tool to use, the DAO focuses on enabling the fit-for-purpose toolset under the guiding principle that whatever is built should have a clear, secure, and scalable path to launch with the potential for re-use.
This platform reusability dramatically accelerates time to scale and business impact.
Throughout this transformation, Steve Phillpott and the DAO have helped Western Digital evolve its thinking about how the company can leverage data analytics as a source of competitive advantage. The combination of a federated operating model, new data science tools, and a commitment to data quality and governance has allowed the company to define its own future, focused on solving key business problems no matter how technology trends change.
Price matters, a lot. In an era of hyper price transparency, the subtlest price discrepancies will drive consumers to purchase on channels with the lowest price. Consumers often make buying decisions in two steps: first, what they want to buy; second, where they will buy it. Especially for goods and services that are not substantially differentiated in terms of quality or features, the average consumer will naturally gravitate toward the lowest price. This has been felt especially acutely by retailers such as Best Buy, where consumers go to window shop but complete their purchases on lower-priced ecommerce alternatives (e.g., Amazon, eBay, Jet). Best Buy has since woken up to the fact that without differentiating the customer experience, it was unable to create the stickiness needed to convert foot traffic.
When selling a commodity, or a good or service with a comparable substitute, price parity is arguably the most important driver in decision making. The challenge, of course, is that the manufacturer of a good or the provider of a service doesn’t always own the end touch point with the consumer. Many companies rely on a network of distribution partners to help market and sell their products. While this approach allows companies to scale revenue without the risk of building a massive salesforce, it also means that the manufacturer or provider will not be able to control all the variables that influence consumers’ buying decisions.
To strike the right balance, many companies develop a distribution strategy that comprises two dimensions: direct and indirect sales. Direct distribution focuses on selling directly to customers, while indirect distribution depends on intermediaries to complete a transaction. A distribution strategy needs to be married with a robust approach to inventory management, which may mean different things to a manufacturer than to a service provider. Manufacturing firms typically have a robust Sales & Operations Planning process (referred to as S&OP), during which they forecast sales and ensure there is enough inventory produced and physically distributed to distribution centers or shelf space to meet consumer demand. Service providers tend to look at inventory as an expiring asset: once time has passed, you can no longer sell that service (e.g., once a plane takes off with an empty seat, or a tee time passes without a foursome teeing off).
Although hospitality was one of the first industries to create robust distribution channels and networks through Online Travel Agencies (OTAs) to capture additional business, one consequence of that arrangement is that customers were conditioned to view hotel rooms as a commodity where price was the primary decision factor. While OTAs allow reviews and minimal merchandising to differentiate hotels, consumers got lost in the noise and struggled to tell one chain from another.
Over the past five years, intermediaries successfully crafted a narrative that they had the consumer’s best interest at heart when negotiating with hotels, and that only the OTAs could be trusted to offer the lowest price. Some of this was true; you could find lower prices for last minute deals, and there was benefit to both the OTAs and hotel operators that did not want to see a bed go empty. However, as OTAs further influenced the customer experience, and ate into profits with a greater share of bookings, the hospitality, airline, and other industries recognized that they would have to take decisive action to remove price disparity as the primary reason a consumer would purchase products or services on any indirect channel.
One compelling example: Icelandair and El Al have begun experimenting with displaying sample prices of their competitors on their own websites, to show how competitive their direct prices are and to keep customers from “clicking” away to competitors and other price aggregators. With the explosive growth of options in the online distribution environment, there are two primary factors that companies should concentrate on: Price Integrity and Price Parity.
Price integrity is the concept of a customer being confident that they are purchasing a product of a certain value. While a customer may be willing to pay more or less, depending on the time and place of their purchase, there is a psychological range that they base their expectations on.
Price parity is the practice of maintaining a consistent rate for the same product across all distribution channels, including both owned and partnership channels. Nothing destroys trust more than being able to find a cheaper price on another website, or worse, when a company’s website is cheaper than its stores.
For industries that rely on both direct channels and distribution channels, there is a “co-opetition” relationship in which it is not uncommon for a firm to compete with its distribution partners for sales. On one hand, if a consumer wasn’t going to come to AlaskaAirlines.com, Alaska Airlines would be more than happy with a referral from KAYAK, or a booking through Expedia, to fill an empty seat. But if there was a chance that customer could have booked directly, Alaska Airlines would fight hard to win that booking.
Hospitality and travel companies are in the middle of an ongoing competition with their distribution partners (OTAs and metasearch engines, or METAs) for the future of guest bookings. According to Hitwise, hotel direct booking made up only ~30.56% of online booking market share in 2017, while OTAs continued to eat away at market share, growing 60 basis points from 2016 to 2017.
While OTAs and METAs have become an invaluable component of hospitality marketing and distribution campaigns, there are contractual violations that stress the trust necessary for healthy “co-opetition.” Some OTAs and METAs may display available prices that undercut contracted prices. Often these discounted prices are provided to the OTAs and METAs by wholesalers in violation of price-parity contracts, but the complex web of distribution relationships and the flash speed of online pricing engines make it difficult for hospitality companies to hold their distribution partners accountable.
Despite the challenges, companies must keep a vigilant eye on how inventory and experiences are displayed by distribution partners to ensure that consumers inclined to purchase on direct channels are not actively dissuaded from doing so. A successful distribution strategy must be aggressive, quickly implemented, and actively maintained; the following six critical steps can help:
Metric tracking allows you to better understand whether your chosen distribution partners are worth their distribution costs. For example, “NRevPAR” (Net Revenue per Available Room) is the industry standard in hospitality for calculating the revenue generated per available room, net of any discounts or commissions paid to intermediaries. By tracking NRevPAR, hoteliers can evaluate their current distribution partnerships across channels to ensure that distribution costs are in line with their expectations for each partner. A significant drop in a key metric is a telltale sign that it is time to either renegotiate with your current distributors or start looking for replacements.
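A worked example, with made-up figures, shows how the metric behaves:

```python
# NRevPAR (Net Revenue per Available Room), with illustrative numbers.
def nrevpar(room_revenue: float, distribution_costs: float,
            available_room_nights: int) -> float:
    """Revenue net of discounts and commissions, per available room night."""
    return (room_revenue - distribution_costs) / available_room_nights

# A 200-room hotel over a 30-day month: 6,000 available room nights.
print(nrevpar(room_revenue=540_000, distribution_costs=81_000,
              available_room_nights=200 * 30))
# -> 76.5, versus a RevPAR of 90.0 before distribution costs
```

If commissions rise while bookings stay flat, RevPAR holds steady but NRevPAR falls, which is exactly the signal that a partner may no longer be worth its distribution cost.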
It is imperative that you monitor how and where your inventory is displayed across your distribution partners’ platforms. You want to have the ability to confirm that your partners are playing by the rules as well as ensuring that your offering is not appearing unofficially on other public channels with rogue prices that undercut you and your partners. If a partner determines that your inventory is floating around the public space at prices that undercut their contracted prices, it won’t be long before you observe your inventory being pushed to the bottom of their display pages—if they don’t remove you altogether for being out of parity.
Andrew Sheivachman of Skift pointed out that global digital travel sales were projected to reach $189.6 billion in 2017, of which 40 percent was attributed to purchases made through mobile (a 4% gain over 2016). With such a rapid rise in the adoption of mobile booking and shopping, you cannot let your mobile channel development lag. You must work proactively with your distribution partners to refresh user interfaces and user experiences to optimize the mobile shopping experience. Rich content, descriptions, and high-quality photography also allow you to differentiate your product when it is sitting on a digital shelf with comparable products.
Dynamic yield pricing allows you to set prices relative to demand and other variables. Dynamic pricing is being employed across various industries to match supply and demand and to move expiring inventory: preventing waste in grocery stores, ensuring that there are enough drivers on the road for ride-sharing platforms, or driving loyalty by generating customer-specific fares for airlines. Within the hospitality industry, dynamic pricing allows inventory to be priced appropriately in response to the timing of a booking, local events, or any occasion that could cause fluctuating demand. Just make sure that your dynamic price is not undercut by a distribution partner, or cached by that partner and out of date when prices go back up.
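A minimal sketch of the idea, with invented multipliers and a simple occupancy signal standing in for a real yield engine:

```python
# Toy demand-based pricing rule; the multipliers and occupancy signal are
# illustrative assumptions, not a production revenue-management model.
def dynamic_price(base_rate: float, occupancy_forecast: float,
                  days_until_stay: int) -> float:
    """Scale a base room rate up with demand, down as expiry approaches."""
    demand_multiplier = 0.8 + 0.6 * occupancy_forecast   # 0.8x slow .. 1.4x full
    last_minute_discount = (0.85 if days_until_stay <= 2
                            and occupancy_forecast < 0.5 else 1.0)
    return round(base_rate * demand_multiplier * last_minute_discount, 2)

print(dynamic_price(base_rate=150, occupancy_forecast=0.9, days_until_stay=30))
# -> 201.0 (high-demand period)
print(dynamic_price(base_rate=150, occupancy_forecast=0.3, days_until_stay=1))
# -> 124.95 (expiring inventory discounted)
```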
While channels you directly manage (a website, a social presence, in-store) may not be the first point of interaction between you and your prospective consumer, you can still convert customers to complete their purchase through your owned direct channels as you get to know them and earn their attention. In 2015, over 34% of booking journeys that were initiated on OTAs were completed through supplier websites. Bolstering your available offers for customers through loyalty programs, subscription email campaigns, and social media can help drive customers from your distribution partners to your direct-booking channels.
Legacy backend systems may cost you millions of dollars in system outages and will almost certainly inhibit your ability to proactively adjust your distribution network. These legacy platforms cause transactional friction when a supplier’s prices are sent out to the systems of distribution partners, which in turn forces revenue managers to spend hours a day manually validating that prices and inventory are migrating accurately to various distribution channels and partners. Rate monitoring platforms are now available that allow revenue managers to monitor the behavior of their distribution partners through automation. These platforms also increase the transparency of your distribution partners’ networks. They can be used not only to monitor the integrity and parity of pricing for your own inventory, but also to quickly determine whether you are competitively priced across the globe. As in our earlier example of Icelandair and El Al, technology can also automatically alert revenue managers when their rates are being advertised by competitors (either accurately or inaccurately).
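Conceptually, the parity check these platforms automate boils down to something like the sketch below; the channel names, observed rates, and tolerance are illustrative.

```python
# Toy rate-parity check: flag channels advertising below the contracted
# direct rate. Channel names, rates, and tolerance are illustrative.
def find_parity_violations(direct_rate: float, channel_rates: dict,
                           tolerance: float = 0.01) -> list:
    """Return channels whose advertised rate undercuts the direct rate."""
    floor = direct_rate * (1 - tolerance)
    return [channel for channel, rate in channel_rates.items() if rate < floor]

observed = {"OTA-A": 189.00, "OTA-B": 174.50,
            "META-C": 189.00, "wholesaler-D": 159.00}
print(find_parity_violations(direct_rate=189.00, channel_rates=observed))
# -> ['OTA-B', 'wholesaler-D']
```

A real platform would continuously ingest feeds of advertised rates and alert a revenue manager the moment a violation appears.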
While your distribution partners can help you reach new customers and markets, you must ensure that their role as an intermediary does not equate to them “owning” the customer. It’s the incentive of your distribution partners to provide you revenue, but they are unlikely to share customer information that can be used to convert a customer into a loyal patron (i.e. personal email address, mailing addresses, etc.). Providing an amazing customer experience is the best way to overcome a consumer’s bias to make decisions based on price. If a company can pair a differentiated customer experience, with an enticing loyalty program that rewards purchasing goods or services through direct channels, there is still hope to maintain a balanced distribution strategy.
In early 2015, when The Manitowoc Company decided to split into two companies, the executive leadership called on the CIO, Subash Anbu, to lead the charge.
The transformation would be the most consequential in its 113-year history. Leaders from the company, then a diversified manufacturer of cranes and foodservice equipment, decided that the whole of the diversified organization was no longer greater than the sum of its parts. It would split into two publicly-traded companies: Manitowoc (MTW), a crane-manufacturing business, and Welbilt (WBT), which manufactures foodservice equipment.
The CIO was a natural choice to lead a change of this magnitude because his role allowed him to understand the interconnectedness of the company’s various business capabilities, which processes and technology were already centralized or decentralized, and where there may be opportunities for greater synergy in the future-state companies.
Subject matter expertise, however, would not have been enough to qualify a candidate; the leader had to be charismatic, and Subash was widely recognized for his servant-leadership mentality. That would prove essential to removing critical blockers across the organization.
It was also important that the CIO had long-standing credibility with the Board of Directors, who were the ultimate decision makers in this endeavor.
Subash embraced the daunting challenge, saying, “While change brings uncertainty, it also brings opportunities. Change is my friend, as it is the only constant.”
In some ways, splitting a company into two may be harder than a merger. When merging, you have the luxury of more time to operate independently and merge strategically.
When Western Digital acquired HGST in 2015 and SanDisk in 2016, CIO Steve Phillpott decided to move all three companies to a new enterprise resource planning (ERP) system rather than maintain multiple systems or force everyone onto the incumbent Western Digital solution. When splitting a company, there is greater urgency to define the target-state business model and technology landscape and execute accordingly.
This split introduced sweeping change for Manitowoc: every business function had to be duplicated, within a fixed four-quarter schedule, all while the company executed its 2015 business plans. All business capabilities would be impacted, especially Finance, Tax, Treasury, Investor Relations, Legal, Human Resources, and of course, Information Technology.
While the Manitowoc Company had experience with divesting its marine segment (it started as a shipbuilding company in 1902), the scope and scale of the split was unprecedented for the company.
Breaking apart something that has been functioning as a whole is an inherently risk-laden proposition. Subash and his team recognized that to mitigate risk, they would need to be both deliberate in planning and agile in execution, breaking big risks down into smaller ones and prioritizing speed over perfection.
As Subash led the split of the company into two, he encountered the following risks:
When splitting a public company, the deadlines and outcome are clear. How Subash and the team would execute the split of the company, however, remained largely undefined.
The enormity of the task could have created a paralysis, but the team quickly began working backwards: getting on the same page with the right people; identifying the big-rock milestones; identifying the risks; sketching out a plan to reach the big-rock milestones; breaking the plan into smaller rocks to mitigate risk; and keeping everyone informed as the plan unfolded with greater detail.
In the process, Subash learned five critical lessons that all executives should heed before splitting a company:
Splitting a company requires cross-functional collaboration and visibility at the strategic planning and execution level. Start by creating a Separation Management Office, consisting of senior functional leaders that will oversee the end-to-end split across HR & Organizational Design, Shared Services & Physical Location Structuring, IT, Financial Reporting, Treasury & Debt Financing, Tax & Legal Entity Restructuring, and Legal & Contracts. The Separation Management Office should report to a Steering Committee consisting of the Board of Directors, CEO, CFO, and other C-level leaders. When faced with difficult questions that require a decision to meet deadlines, the Steering Committee should serve as the ultimate escalation point and decision maker to break ties, even if it means a compromise.
A split will require dedicated, skilled resources that understand the cross-functional complexities involved. This project team will need people that understand the interconnectedness of technology architecture, data, and processes, balanced with teams that can execute many detailed tasks. When forming the team, it is important to orient everyone on the common objective to create unity; departmental silos will not succeed. Variable capacity will almost certainly be necessary for major activities, and you may be able to stabilize your efforts by turning to trusted systems integrators or consulting partners to help guide the transition.
Agile evangelists often frown upon working under the heat of a mandated date and scope, but a public split forces such constraints. Treat the constraints as your friend: Work backward to identify your critical operational and transactional deadlines. Ensure the cross-functional team is building in the necessary lead time, especially when financial regulations or audits are involved. Dedicate a budget, but be prepared to spend more than you anticipate, as there will always be surprises to which teams will have to adapt. As part of your project planning, create a risk management framework with your highest priority risks, impacts, and decision makers clearly outlined. When time is of the essence, contingency plans need to be in place to adapt quickly.
Any time a working system is disassembled, there will unquestionably be problems. The key is not to wait for a big bang at the end to see if what you have done has worked. Spending nine months planning and three months executing this split would have introduced new risks. Instead, Subash and his team built their plan and then iteratively built, tested, and improved in an agile delivery process. The team was able to identify isolated mistakes early and often, allowing them to proceed to the following phases with greater confidence—not with bated breath.
In a split, every employee, contractor, supplier, or customer will be impacted. Create a communication plan for the different personas: Steering Committee, operational leaders, functional groups, customers, partners and suppliers, and individual employee contributors. The Manitowoc Company had to communicate on everything from where people would sit, to who would be named as new organizational leaders. In the void of communication, fear and pessimism can creep in. To prevent this, the Separation Management Office launched “Subash’s Scoop,” a monthly newsletter on the separation progress. It brought helpful insight, with a flair of personality, to keep the organization aligned on its common goal.
The Manitowoc Company successfully split into two public companies—Manitowoc (MTW) and Welbilt (WBT)—in March 2016, hitting its publicly-declared target. In fact, many of the critical IT operational milestones were completed in January, well in advance of the go-live date.
Over the last two years, the stock prices for both companies have increased, validating the leadership evaluation that the whole was no longer greater than the sum of its parts.
If you’re not thinking like a software company, you’re already behind.
Software companies focus on codifying and then scaling everything they do. To do that, business subject-matter expertise and technical expertise must become one and the same, converging once-siloed disciplines.
In a recent interview with Metis Strategy, Cathy Bessant, Bank of America’s Chief Operations & Technology Officer, explained that convergence must apply to all companies, saying, “Technology has completely changed the notion of business integration. You cannot say the business is technology or technology enables the business—they are one and the same.”
Your company will not be able to compete at scale and speed if delivery teams have not gone beyond typical IT-business hand offs to true convergence. This convergence extends beyond obvious points of technology dependence, such as an eCommerce website or managing internal productivity tools; it is happening everywhere.
Still, many companies struggle with where to start on this transformation. Business function leaders often communicate high-level goals that are difficult for technology leaders to translate into concrete actions, and technology leaders often approach a problem by addressing the technology first, and the business outcome second. They end up six months into a “digital transformation” effort with a disparate collection of projects, but no cohesive sense of prioritization or interdependence to create a more tech-driven future.
The solution to bridge this gap between strategy and execution is for IT leaders to be better collaborators and communicators, and to understand the business and customer needs as well as their business partners do. But that is easier said than done.
Start by rooting your IT plans in a well-defined business capabilities map, and then transform the way that IT goes to market by driving cross-functional operating model convergence in the long term.
Business capabilities are an integrated set of processes, technologies, and deep expertise that are manifested as a functional capacity to capture or deliver value to the organization. They outline “what” a business does, as opposed to “how” a business does it. They are the definition of your organizational skills, best represented in a landscape map that allows you to evaluate the full spectrum of capabilities against each other.
Business capability maps are not just about technology; these tools are designed to improve an organization’s holistic ability to improve a business outcome, and in many cases, it is not the technology that is the constraint, but rather a process, skill, or policy issue.
Consider the process for onboarding a new employee. Strong onboarding capabilities make the experience seamless for the new hire. From the second an employee steps into the office, they might:
There are various people, process and technology components behind each of the steps in the employee’s journey. However, the employee does not—and should not—feel the transition between, in this case, HR, facilities, and IT.
If the desired outcome for this capability is to provide a seamless employee experience where the employee is productive in less than three days, the different functional areas should integrate their strategic plans to meet that objective. This is often challenging in an organization that thinks and acts in functional silos, but a capability-driven approach will bridge that gap.
Many organizations have never formally documented their business architecture and therefore struggle to understand business priorities. To bridge that gap, IT will generally dispatch enterprise architects or business relationship managers to form bonds with functional leaders, understand their current processes, and identify the pain points. As a result, they map the business capabilities. This exercise elevates technology leaders and their business partners to common ground, on which both can add value to the conversation: one around business process improvement, and the other around technology enablement.
We generally suggest no more than four levels of cascading capabilities, with the fourth level most resembling the associated process. Keep in mind that business capability maps are not organizational charts. By definition, they are anchored by the business outcome, with many functional areas converging to realize that outcome.
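As a toy illustration, a cascading capability map can be represented as nested data, with each key level corresponding to one capability level; the capability names here are invented for the onboarding example above.

```python
# Toy four-level capability map fragment; names are illustrative.
capability_map = {
    "Manage Human Capital": {                        # level 1: outcome area
        "Onboard Employees": {                       # level 2
            "Provision Workspace & Equipment": {     # level 3
                "Issue Laptop and Credentials": {},  # level 4: ~the process
                "Assign Desk and Badge": {},
            },
            "Enroll in Payroll & Benefits": {},
        },
    },
}

def depth(node: dict) -> int:
    """Count nested key levels; the suggested maximum is four."""
    return 0 if not node else 1 + max(depth(child) for child in node.values())

assert depth(capability_map) <= 4
```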
Once you define your capabilities, prioritize them to help provide strategic direction to the organization. Not all capabilities are of equal importance to your ability to compete, so you need to ensure you are not boiling the ocean. While there is more nuance in practice, for simplicity, capabilities fall on a scale of achieving competitive parity through sustaining competitive advantage, and it is important to evaluate which are the most important to your business’ success. This segmentation will not change tremendously year by year, unless there are major shifts in the competitive forces at play.
Competitive advantage: Capabilities that—currently, or in the future—are critical to creating or sustaining your market position in a fundamentally unique way. Customers will hire you because of these capabilities, your employees will love you for them, and your investors will celebrate the cost effectiveness that they bring. For example, you may be able to segment customers and tailor offerings in a way that economizes your marketing spend far better than a competitor. Or, if your competitor competes on price, you may compete on amazing customer service; thus, you might prioritize your capability for managing customer cases. To be clear, further segmentation is needed within the “Competitive Advantage” bucket; remember: not everything is created equal.
Competitive parity: Capabilities that maintain customer expectations and operational needs. You don’t lose (but also probably don’t gain) fans because of these capabilities. For example, your “process payroll” capability probably needs to stay at current levels, but it does not need to be the target of heavy investment and prioritization. This doesn’t mean you don’t invest in these areas. For example, Uber uses Stripe to instantly pay drivers, giving them cash in hand each day, but Lyft also offers this capability. Uber needs to continue to invest in this area to stay at parity, in case, say, Lyft started predicting revenue for drivers and giving them advances. Still, if the offerings are similar, they may not be a deciding factor in whether a driver goes with Uber or Lyft.
Once you segment and prioritize your capabilities, you should evaluate the current-state maturity of each capability, as well as the target future state. Evaluating maturity levels is as much art as science. As a result, defining maturity levels cannot be done in isolation, and often the conversation about why something is or is not mature is as valuable as whatever score you give yourself.
We recommend undertaking this exercise with cross-functional groups that have an understanding of the capability from different perspectives. We often evaluate capability maturity as a function of process definition, degree of automation, organizational reach, and the measurement of the business outcome. This evaluation will influence the prioritization of near-term investments and will not always coincide 1:1 with the segmentation mentioned above. For example, if you have low maturity in a “parity” capability, you would still want to invest in that capability to get it up to par.
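One simple way to operationalize such an assessment is to rate each of the four dimensions and combine the scores, as in this illustrative sketch (the 1-5 scale and equal weights are assumptions to tune per organization):

```python
# Illustrative maturity scorecard over the four dimensions named above.
DIMENSIONS = ("process_definition", "automation",
              "organizational_reach", "outcome_measurement")

def capability_maturity(scores: dict) -> float:
    """Average a 1-5 rating across the four maturity dimensions."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical assessment of a "sales forecasting" capability.
sales_forecasting = {"process_definition": 2, "automation": 1,
                     "organizational_reach": 3, "outcome_measurement": 2}
current = capability_maturity(sales_forecasting)
print(f"maturity {current:.1f}, gap to target {4.0 - current:.1f}")
# -> maturity 2.0, gap to target 2.0
```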
Enhancing a capability may require investments in people, processes, or technology. Therefore, a converged team of business function experts and technology leaders should jointly identify improvement activities. IT should lead in aligning the technology services (if your organization uses an ITSM approach) and the technical architecture needed to enable these capabilities—but all in the context of how the business process may change. Once you have aligned your technical architecture, IT can identify gaps and redundancies. For example, if you have multiple applications supporting your “expense management” capability, you might opt to undertake a cost-benefit analysis of maintaining all of them. Conversely, you might discover you have a prioritized business capability of sales forecasting without a technology architecture supporting or enabling it. You might identify this as an area where a new technology service is needed to provide data analytics to the sales operations team.
Once developed, capability maps can bridge the gap between strategy and execution by driving organizational alignment around where investments are needed.
For example, we recently helped a growing technology company through this journey. The IT organization had been viewed as an order-taker, and it often struggled to get budget consideration for more strategic projects that would add value to the business, but the CIO was intent on evolving the organization into a more strategic partner.
The CIO knew that the convergence of business process improvement and technology enablement was key, so the team worked closely with business function leaders to develop prioritized capability maps across the organization. Then they leveraged the capability maps to identify areas in greatest need of investment, and in turn forced trade-off decisions that resulted in a meaningful prioritization of focus areas that galvanized the team. The converged business and technology teams, oriented around shared business outcomes, had threaded the needle from strategy to execution.
In the end, one of the business partners said, “We have tried to do this many times over the past six years, and this is by far the best it has ever gone.” That is how IT goes to market differently, and wins.