Thank you to everyone who participated in the 15th Metis Strategy Digital Symposium. Check out some event highlights below, and stay tuned to the Metis Strategy YouTube channel and Technovation podcast in the coming weeks for full sessions.  

For businesses across the globe, 2023 was the year of generative AI. Since ChatGPT’s launch and meteoric rise last November, digital leaders have been experimenting with a range of new GenAI products and services as they search for the most effective, and least risky, ways to bring the technology to their organizations. As GenAI (and the hype around it) took off, it prompted important and complex conversations about the future of work and how to accelerate innovation while managing new and significant risks.

A little over a year in, leaders continue to experiment with new tools as a means to drive new value and improve the experience for customers and employees. They are also turning their focus back to the fundamentals, building strong data governance and data hygiene practices to ensure their organizations have the strategic and operational foundation needed to take advantage of their data. 

GenAI is still the shiniest thing out there, but as technology leaders look to 2024, they are focused on integrating it, and AI more generally, into their organization’s operating model and championing use cases that can produce tangible value at scale. 

Scaling AI’s ROI

If year one of generative AI was about deciphering its risks and enabling organizations to experiment safely, year two will be about finding ways to drive value at scale. With new use cases emerging regularly, technology leaders are figuring out how to prioritize the initiatives that show the most promise to the business. 

At BNSF Railway, CIO Muru Murugappan and his team use a value feasibility matrix to assess technical feasibility, timing, and complexity of new AI initiatives versus the expected payback. Some companies are also using business interest and executive sponsorship as criteria for deciding which initiatives to pursue. One overarching lesson: the more ambitious the project, the more challenges it is likely to face. 
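A prioritization exercise like this can be made concrete with a simple weighted scoring model. The sketch below is purely illustrative: the initiative names, criteria, and weights are hypothetical, not BNSF's actual matrix.

```python
# Hypothetical sketch of a value-feasibility scoring exercise.
# Initiatives, criteria, and weights are illustrative only.

initiatives = [
    {"name": "Support-ticket summarization",  "feasibility": 8, "value": 6, "sponsorship": 7},
    {"name": "Predictive maintenance copilot", "feasibility": 4, "value": 9, "sponsorship": 8},
    {"name": "Marketing copy generation",      "feasibility": 9, "value": 4, "sponsorship": 5},
]

# Relative importance of each criterion (must sum to 1.0).
WEIGHTS = {"feasibility": 0.35, "value": 0.45, "sponsorship": 0.20}

def score(initiative):
    """Weighted composite across the scoring criteria (1-10 scales)."""
    return sum(initiative[c] * w for c, w in WEIGHTS.items())

# Rank candidate AI initiatives from most to least promising.
for item in sorted(initiatives, key=score, reverse=True):
    print(f"{item['name']}: {score(item):.2f}")
```

In practice the weights themselves become a governance conversation: raising the weight on executive sponsorship, for example, will reorder the portfolio toward initiatives the business is already championing.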

Beyond delivering on generative AI’s opportunities, CIOs are now contending with a new set of costs. Speakers also noted that simply “turning on” a new AI tool does not guarantee value.

For many, the path to unlocking AI’s value means getting back to basics. “This is reinforcing the need for analytics fundamentals,” said Filippo Catalano, Reckitt’s Chief Information & Digitisation Officer. “If you don’t have good data practices, at best you’re going to use whatever others are using, but you will not be able to generate competitive advantage. Great data practices … become even more important.” 

Encouraging innovation while managing risk 

While new AI tools have helped organizations explore the art of the possible, they also have created a number of new risks, from more advanced cyberattacks to the negative impact of training algorithms on biased data. Just over one quarter of attendees cited data privacy as the largest AI-related risk to their organizations. The delicate balance for CIOs: managing the new risk landscape while empowering teams across the organization to experiment and innovate.

Martin Stanley, who leads the Cybersecurity and Infrastructure Security Agency R&D portfolio, is currently assigned to the Trustworthy and Responsible AI program at NIST. Among his team’s goals is promoting adoption of the NIST AI Risk Management Framework, which provides a construct for deploying AI responsibly and managing risk among a diverse set of stakeholders. The framework aims to address a few key concepts: building a language around AI risk that goes beyond simply monitoring potential vulnerabilities, creating a shared understanding of how to manage that risk across the enterprise, and fostering a trust-based, “risk-aware culture” that influences how people interact with the technology.

CIOs are working to build trust into every layer of the process. As Vishal Gupta of Lexmark noted, technology is only as good as people’s ability to adopt it and trust what it’s saying. “Otherwise, you really can’t do much with it.” At Lexmark, Gupta is taking a layered approach, creating trust in the underlying data via stronger governance and management practices; driving trust in AI and machine learning models by setting up an AI ethics board and rigorously vetting use cases; and continuously testing to validate AI’s ability to truly drive business outcomes. 

Taking a human-centered approach

New AI tools such as developer copilots have the potential to drive significant productivity gains and reshape how many of us do our jobs. As humans and technology continue to interact in new ways, CIOs are focused on optimizing the digital employee and customer experience while helping teams navigate a changing world of work. Indeed, 50% of MSDS attendees plan to apply AI and generative AI to impact employee experience and productivity, with 30% planning to use it to improve the customer experience. 

At TransUnion, CIO Munir Hafez and his team are taking a human-centered approach to the digital experience with a focus on ensuring tech equity and establishing policies that allow teams to safely experiment with new tools, among other initiatives. When investing in employee experience, “our goal was to create a consumer-grade experience that enables employees to be engaged and productive in an environment that is integrated, modern, frictionless, and connected anywhere,” Hafez said.

AI’s ability to deliver frictionless employee experiences and drive real productivity gains far beyond IT is likely to be a big focus for 2024. Many panelists noted how access to accurate, AI-enabled real-time data can help field managers make decisions more quickly, and how technologies like digital twins can streamline design processes and speed time to market.

A world of possibilities in 2024

Navigating an uncertain economic environment and rapid technological advances are top of mind for CIOs in the year ahead. The convergence of these two factors continues to underscore the importance of bringing a strategic, value-based lens to AI development and adoption.

The hype around generative AI may come back down to earth in 2024 as companies begin to understand its complexities in the enterprise. “I think there is going to be…a little bit of a trough that we’ll hit with GenAI,” said Graphic Packaging International CIO Vish Narendra. “The commercialization of that is going to take a little longer in the enterprise than people think it’s going to.” 

As technology becomes embedded across a broader range of products and services, the spotlight will be on CIOs to show the art of the possible, create future-ready workforces, and manage risk. Given their broad purview that spans horizontally across organizations, CIOs are well positioned to influence and shape enterprise strategy in the year ahead, setting their companies up for continued resilience and growth.

Organizational agility — the ability to continuously improve, iterate, and adapt to fast-changing technology developments and customer expectations — has long set corporate leaders apart from laggards. The pace of change and innovation has never been faster, and technology developments and digitalization set that pace as much as they demonstrate the impact of change for businesses and individuals alike.

If the pace of change (and the need to keep up to remain competitive) wasn’t already fast enough, the current environment, shaped by a global health crisis and the related economic uncertainty, has proven to be a digital change accelerator extraordinaire. A number of data points confirm the extraordinary pace and magnitude of change that is occurring. In a survey by Fortune, for example, 77% of CEOs said that their company’s “digital transformation was accelerated during the crisis.” And of the 100 CIOs, CTOs, and CDOs who attended the Metis Strategy Digital Symposium this summer, 72% said that the pace of their organization’s digital transformation had accelerated since the pandemic started.

The crisis is expected to further shape the competitive landscape and likely widen the gap between organizations on the path to a successful future and those fighting for survival in a post-Covid “new normal” world.

Of the factors that will determine success or failure, two feature prominently on essentially all executives’ agendas: digital readiness and organizational agility. Both are tied to an organization’s change management capabilities.

It may appear logical that organizations and their business and technology leaders would focus relentlessly on making sure that organizational change management (OCM) capabilities are mature and ready to be deployed at a moment’s notice, especially since change initiatives will remain an integral part of business operations and are widely expected to increase. However, despite the widely recognized need for more organizational agility, OCM is still an underdeveloped, underutilized, and underappreciated competence, even in organizations that are otherwise recognized as being high-performing and successful in their core competencies.

Most companies have some change management capabilities, but these efforts often start too late or are haphazard in their implementation. This can lead to frustration among employees and customers and may ultimately result in higher cost and/or risk. Digital transformation, for example, requires a great deal of change, and 70 percent of digital transformation efforts do not fulfill the promises made. Change management is sometimes viewed as a “soft” topic that is difficult to explore and even harder to influence in the pursuit of “hard” business results.

This presents a significant opportunity to improve operational performance and shape more favorable business outcomes by applying well-established change management approaches differently and adopting a more strategic and data-driven approach to change leadership.

Adapting John Kotter’s 8-Step model

Bestselling author, thought leader, and Harvard Business School professor John Kotter and his 8-Step Process for Leading Change are widely regarded as the authority on change management and leadership. The 8-Step Process, which ranges from “Create a Sense of Urgency” to “Institute Change,” provides a useful framework for a number of change initiatives. Metis Strategy has used it as a starting point for the change efforts we are involved in and built upon it to address individual situations.

As robust and proven as Kotter’s 8-Step Process is, it doesn’t guarantee success; its thoughtful execution and careful tailoring to the unique organizational context will make the difference.

In order to build upon the power of Kotter’s framework and to address the issues we have encountered in our OCM and organizational agility work with technology and business leaders across industries, we have identified five “Moments of Truth” in change management. These work in concert with Kotter’s eight steps and make the approach more powerful and more likely to produce the desired business outcomes. We will explore each Moment of Truth below:

Metis Strategy’s Change Management “Five Moments of Truth,” combined with John Kotter’s Eight-Step change management framework.

Moment 1: Recognize change 

The line between continuous evolution, which ideally is part of business-as-usual, and a significant change event is blurry. Where that line is crossed will depend on how much change is “normal” within an organizational context. As soon as an action or development falls outside the norm, leaders should communicate that change is taking place, even if the extent and impact of that change are not yet entirely known. Doing so allows organizations to begin managing the change and reduces the potential costs and risks of addressing it too late.

Moment 2: Acknowledge change management needs and opportunities

After significant change is identified, but before Kotter’s “burning platform” emerges or the sense of urgency is created, firms should determine the outcomes that the change process is expected to deliver. Companies should list the desired outcomes of the change initiative, as well as the consequences that may follow if the change is left to unfold without significant oversight. Think of this as the risk/return calculus or scenario analysis of change. If freely occurring change poses a risk of being costly or distracting to the organization, active management may be necessary.

If active management is necessary, firms should assess their change readiness and put an explicit plan in place. These plans can be brief, corresponding to the magnitude and implications of the change and the desired business outcomes. At this point, the needs for change management (such as risk avoidance or mitigation), as well as related opportunities (such as faster realization of benefits), will begin to emerge. These will help make the case for – and increase the likelihood of – successful change management. At this stage, leaders should also convene relevant stakeholders and gain alignment among them, and have initial conversations about change management activities and success metrics.

A high-level change readiness assessment will help firms understand the costs and benefits of OCM efforts in fundamental terms. It also provides an opportunity to solicit perspectives and perceptions from the teams or employees affected by the change. People respond to change, or even the prospect of change, in different ways, and it is important to acknowledge the different perspectives and reactions that play a role in change management, from enthusiasts to skeptics to the complacent.

Most importantly, leaders should identify key change agents and potential change inhibitors, as both of these groups should be engaged with care and diligence. The former can serve as advocates and accelerators of change, while the latter may need to be proactively engaged to limit the emergence of negative sentiments or inaccurate information that will be difficult to remedy after the fact. Ideally, a broad set of perspectives from all relevant levels and functional areas will be represented as the change management work kicks off in earnest. Now that the need for change management has been identified and the fundamentals are in place – with the opportunity to iteratively enhance or scale them as efforts progress – companies are ready to identify the burning platform and create the sense of urgency that will launch the change efforts in a more public and open manner.

Moment 3: Face reality [Create a favorable context for change management] 

As organizations prepare to share their change management plans widely (in the transition between Kotter’s “Create a Vision for Change” and “Communicate the Vision” steps), it is important to assess the organizational context relevant to the change at hand. Many change efforts fall short because they fail to consider organizational realities, such as the circumstances of an organization’s operations or the perceptions and feelings of its employees.

At this point in the change management process, the desire for and commitment to the change efforts should be clear. This may cause employees’ natural fears and anxieties to flare up. Unless the change management plan, specifically its communication and engagement efforts, reflects those specific thoughts and concerns, the change efforts risk meeting resistance from a growing number of people affected by the change.

In order to manage this critical juncture effectively, empathy and transparency become important tools in the change leader’s toolkit. Employees are looking for leaders and colleagues to listen and to understand. They want to be heard and see their concerns and expectations addressed. The more clearly and specifically senior leaders can relate to employees of all functions and demographics, the more likely it is that their message will resonate.

A practice that I have seen work well to express empathy and ensure that team members feel heard and understood is cross-functional communication at the executive level. For example, if the chief marketing officer can explain the move towards DevOps and a microservices architecture, the CIO references the newest design standards, or the Chief Data Officer shares the vision for the customer engagement campaign, employees will take note and recognize the shared commitment to the change objective.

In our experience, honesty and plain-spokenness go a long way. Simple language and basic concepts coupled with real-world examples will be more effective than a well-drafted and well-polished presentation.

A key recommendation relative to this step is to embrace the ideas of “servant leadership” and the role of clearing obstacles in the way of change, no matter what they are. Empowering employees, delegating responsibilities, and providing space for creativity will instill and strengthen trust that is likely to yield benefits well beyond an individual change effort.

Moment 4: Scale change 

As organizations implement their change management initiatives and realize a few “quick wins,” change leaders will transition to ensuring the sustainability and institutionalization of change, as Kotter outlines in the last two steps of his 8-Step Process. This is the time not only to scale the change management initiatives, but also to document and scale lessons learned so they can be applied to other change efforts, even those that are not the focus of the original project.

The opportunity to scale, repeat, and improve what has already worked is both difficult and valuable. If the change efforts can be broken down into individual components, the organization can iterate on each component in pursuit of different change objectives and business outcomes. If, for example, the original change effort focused on developing a Scrum product team, the organization could consider taking the dedicated or capability team concept to other parts of the business; explore whether other parts of the organization are ripe for a project-to-product operating model shift; improve demand and capacity planning practices; or apply minimum-viable-product (MVP) principles to general operations.

The scalability and repeatability of change will be both a source and an indicator of change management maturity and of the culture of change at an organization. In many cases, the success of change can be attributed to heroic efforts by individuals or teams, or is the result of extensive deployment of resources. While changes achieved under these circumstances are still commendable, and many will be deemed successful, they may not be repeatable or scalable, either because the motivation or resources cannot be replicated, or because other contributing factors are no longer at play (e.g., a crisis or emergency, a regulatory deadline).

Moment 5: Enhance the organization’s change management playbook 

Like all business capabilities, an organization’s change management competencies should not be static. They should be subject to continuous review and iterative improvement. As you learn which change management activities work and which ones don’t, you will develop the type of change management capabilities that best meet your organization’s needs. To the extent possible, it can be useful to create a data-driven after-action review. Identify and document the insights, and work with change management and functional leaders to turn the lessons learned into concrete improvements.

A well-written OCM playbook will enable an organization to leverage the advantages of similar or even repeatable change management efforts while also building capabilities that can successfully manage new or uncertain challenges. While there are universal OCM “best practices,” what is right for your organization will largely be driven by the prevailing organizational culture.

Two overarching and mutually reinforcing success factors: culture of change and a data-driven mindset

In addition to these five Moments of Truth, two critical ingredients of OCM and organizational agility must be called out. First, organizational culture will not only make or break any particular change effort; it will also determine the sustainability and repeatability of change.

Second, data and analytics are becoming increasingly powerful tools for enhancing the effectiveness of an organization’s change management capabilities and countering the false yet common perception that change management is “soft.” These tools should be used throughout the change effort and beyond.

A data-driven approach to change management includes:

  • Baselining the current (pre-change) state
  • Identifying measurable objectives and success factors (e.g., KPIs, success metrics, OKRs) for the overall change effort as well as interim steps and milestones
  • Creating and using a dashboard or other visualization tools to manage, monitor, and communicate the status and progress of the change initiatives
  • Using available data to identify likely change leaders and areas to pilot or commence the change efforts
  • Leveraging employee and/or customer sentiment and engagement analysis (e.g., social listening, social media, collaboration tool usage) to keep a pulse on the change efforts and to gauge improvement relative to the desired outcomes
  • Using data and analytics to model alternative change scenarios (possibly with the help of machine learning/AI tools)
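The first three items above can be sketched as a simple baseline-versus-target view. The metrics, values, and targets below are hypothetical, intended only to show the shape of a data-driven change dashboard:

```python
# Illustrative sketch: tracking change-adoption KPIs against a pre-change baseline.
# Metric names, baselines, and targets are hypothetical.

baseline = {"tool_adoption_pct": 12.0, "weekly_active_users": 240, "nps": 18}
current  = {"tool_adoption_pct": 47.0, "weekly_active_users": 610, "nps": 31}
targets  = {"tool_adoption_pct": 60.0, "weekly_active_users": 800, "nps": 40}

def progress(metric):
    """Fraction of the baseline-to-target gap closed so far."""
    gap = targets[metric] - baseline[metric]
    return (current[metric] - baseline[metric]) / gap

# A minimal text "dashboard" communicating status to stakeholders.
for m in baseline:
    print(f"{m}: {progress(m):.0%} of the way to target")
```

Even a view this simple reframes the conversation: instead of debating whether the change "feels" like it is working, leaders can discuss which gaps are closing and which interim milestones are at risk.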

These two critical success factors, culture and data, are intrinsically linked. An organizational culture that embraces change and develops a data-driven mindset is a rare but powerful combination that all organizations and leaders should aspire to attain.

Final thoughts: Success factors and measures of organizational agility

Throughout this article, we have emphasized the symbiotic relationship between organizational agility, organizational culture, and change management capabilities. In this setup, culture and agility will likely continue to shape change management capabilities more than the other way around. Ultimately, the general maturity of the organizational change management capability, as well as the success of each individual change management effort, should be judged by the organization’s universally accepted measures of business outcomes, business impact, and business value.

Chris Davis co-authored this article.

Companies continue to face implementation challenges as they rush to comply with data privacy regulations such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This is due largely to a mismatch between how they manage data and the stringent requirements set by the regulations.

Organizations can address the complexities of privacy regulations via a well-defined data governance framework, which leverages people, processes and technologies to establish standards for data access, management and use. Such a framework also enables companies to address elements of privacy, including identity and access management, consent management and policy definition.

As leaders implement data governance models with privacy in mind, they may face challenges, including lukewarm executive buy-in, lack of a cohesive data strategy, or diverging opinions about how data should be used and handled. To address these obstacles, leaders should consider the following actions:

  • Establish cross-functional data ownership and awareness
  • Streamline data policies and procedures
  • Upgrade technology and infrastructure 

Establish cross-functional data ownership and awareness 

While a Chief Data Officer or CIO may lead the implementation of a data governance framework or model, data governance should be a shared responsibility across a company. At a minimum, the IT department, privacy office, security organization, and various business divisions should be involved, as each has an important stake in data management. Bringing in a variety of stakeholders early allows firms to establish key data objectives and a broader data governance vision. This collaboration can take the form of a dedicated task force or may involve regular reporting on data governance and privacy objectives to the executive board.

Data privacy, similarly, is also a shared responsibility. All employees have a part to play in maintaining data privacy by following accepted standards for data collection, use and sharing. Indeed, implementing a successful data governance model with privacy in mind requires educating employees on governance concepts, roles and responsibilities, as well as data privacy concepts and regulations (e.g. the definition of “personal information” vs. “consumer information”).

After establishing a governance vision and driving employee awareness, organizations can define their desired data governance roles – such as data owners, data stewards, data architects and data consumers – and tailor the roles to their needs. Some companies may distinguish between data stewards and data owners, for example, with the former responsible for executing daily data operations and the latter responsible for data policy definition. For one client with a large and complex IT department, Metis Strategy established a governance hierarchy with an executive-level board, combined data steward/owner roles, and other positions (e.g. data quality custodians). This structure facilitated ease of communication and enabled the client to scale its data management practices. 

In the long term, firms should incorporate data governance and management skills into their talent strategy and workforce planning. Given the expertise required and the shortage of qualified people for some data-intensive roles, organizations can consider enlisting the help of talent-sourcing firms while focusing internal efforts on talent retention and upskilling. As companies’ strategic goals and regulatory requirements change, they should remain flexible in adjusting their data governance roles and ownership.

Streamline data policies and procedures

To respond adequately to consumer privacy-related requests for data, organizations should establish standardized procedures and policies across the data lifecycle. This will allow companies to understand what data they collect, use and share, and how those practices relate to consumers. 

For example, the CCPA provides consumers with the right to opt out of having their personal information sold to third parties. If a retailer needed to comply with such a request, it would need to be able to answer questions in the following categories:

  • Data classification: What data elements pertaining to the consumer does the company have, such as address, credit card information or product preferences? Has the company classified these data elements appropriately, if at all?
  • Data lineage: Where did the customer’s data originate and what happens to that data across its lifecycle? For example, does the company only share the customer’s data internally, or does it share the data with marketing and payment vendors to facilitate transactions or personalized ad campaigns?
  • Data collection and acceptable use: How does the company currently collect data from the consumer? Does the company have the appropriate consent from the consumer to collect and process their data? If the company shares the customer’s data with external parties, are there appropriate data sharing agreements with those parties in place? 
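Answering these questions quickly usually depends on having a machine-queryable data inventory. The sketch below is hypothetical: the data elements, classifications, and vendor names are illustrative, not any specific retailer's records.

```python
# Hypothetical sketch of a consumer data inventory supporting a CCPA opt-out request.
# Elements, classifications, and vendors are illustrative only.

data_inventory = [
    {"element": "email",       "classification": "personal",  "shared_with": ["marketing_vendor"]},
    {"element": "credit_card", "classification": "sensitive", "shared_with": ["payment_processor"]},
    {"element": "preferences", "classification": "personal",  "shared_with": []},
]

def third_party_disclosures(inventory):
    """Return the data elements shared externally: the starting point for honoring an opt-out."""
    return {row["element"]: row["shared_with"]
            for row in inventory if row["shared_with"]}

print(third_party_disclosures(data_inventory))
```

With classification and lineage captured per element, responding to a consumer request becomes a lookup rather than a fire drill across departments.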

Establishing policies and standards for the above can help organizations quickly determine the actions needed to respond to customer requests under privacy regulations. Companies should communicate policies widely and ensure that they are being followed, as failing to do so can propagate the use of inconsistent templates and practices. At one Metis Strategy client, for example, few stakeholders had sufficient awareness of data management and access standards, despite the fact that the client’s IT department had established extensive policies around them.

Upgrade technology and infrastructure

To successfully implement data governance frameworks and ensure privacy compliance, firms may also need to address challenges posed by legacy infrastructure and technical debt. For example, data often is stored in silos throughout an organization, making it difficult to appropriately identify the source of any data privacy issues and promptly respond to consumers or regulatory authorities.

Firms also need to evaluate the security and privacy risks posed by outsourced cloud services, such as cloud-based data lakes. Those using multiple cloud providers may want to streamline their data sharing agreements to create consistency across vendors.

Some technologies can help companies leverage customer data while mitigating privacy risks. In a Metis Strategy interview, Greg Sullivan, CIO of Carnival Corporation, noted that data virtualization enhanced his organization’s analytics capabilities, drove down operational and computing costs and reduced the company’s exposure to potential security and privacy gaps. 

Companies can also consider new privacy compliance technologies, which can enhance data governance through increased visibility and transparency. Data discovery tools use advanced analytics to identify data elements that could be deemed sensitive, for instance, while data flow mapping tools help companies understand how and where data moves both internally and externally. These tools can be used to help organizations determine the level of protection required for their most critical data elements and facilitate responses to consumer requests under GDPR and CCPA. 

Although legacy technology overhauls can be time-consuming and costly, firms that are decisive about doing so can reduce their privacy and security risks while avoiding other challenges related to technical debt.

Creating an adaptable model 

As the global data privacy landscape evolves, organizations should continuously adapt their data governance models. Rather than reacting to current and forthcoming privacy legislation, companies should proactively address their obligations by designing data governance roles, processes, policies, and technology with privacy in mind. Doing so can not only improve risk and reputation management, but also encourage greater transparency and data-driven decision-making across the organization.

To compete with the speed and agility of startups, organizations need efficient, disciplined financial management practices that detail how their money is spent and how each investment ties to specific business outcomes. This requires decision-making frameworks and management systems that use credible, timely information to empower leaders to quickly evaluate a situation and determine a course of action. The fastest-moving organizations are often either those with the most streamlined financial management practices or the most careless. Sound financial practices give leaders the critical information they need to act with confidence during uncertain times.

IT financial management is a journey. CIOs can mature throughout this process by managing costs, increasing cost transparency and partnering with the business to communicate the true value of transformation initiatives.

Establish a framework for service-based cost models

Many organizations still manage their budget based on traditional general ledger categories such as hardware, software, labor, and the like. The difficulty with this approach is that it provides a financial view that is not particularly helpful for IT. Business functions might track revenue by the accounts served or services provided. To improve cost transparency and promote accountability, IT leaders should do the same, tracking and managing costs based on the services provided, whether end-user, business or technology services.

Technology Business Management (TBM) is one of the most common service-based cost models we have encountered with our clients. The TBM framework allows organizations to track how costs and initiatives align to different cost pools, IT towers (e.g., compute, network, etc.), services, and business units. This helps drive cost accountability among IT teams by establishing baseline and ongoing costs for the services provided to the organization, while providing business owners with the true cost of IT services.
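As a rough illustration, this kind of roll-up can be sketched in a few lines of code. The ledger entries, tower names, services, and allocation weights below are hypothetical placeholders, not the official TBM taxonomy:

```python
# Minimal sketch of a TBM-style cost roll-up: general-ledger cost pools
# are tagged with IT towers, and tower costs are spread across services.
# All figures and weights are hypothetical illustrations.
from collections import defaultdict

# General-ledger entries tagged with the IT tower they support.
ledger = [
    {"cost_pool": "hardware", "tower": "compute", "amount": 400_000},
    {"cost_pool": "labor",    "tower": "compute", "amount": 250_000},
    {"cost_pool": "software", "tower": "network", "amount": 150_000},
]

# Each tower's cost is split across services (weights sum to 1 per tower).
service_weights = {
    "compute": {"app_hosting": 0.7, "service_desk": 0.3},
    "network": {"app_hosting": 0.5, "service_desk": 0.5},
}

def roll_up(ledger, service_weights):
    """Aggregate ledger entries by tower, then spread across services."""
    tower_cost = defaultdict(float)
    for entry in ledger:
        tower_cost[entry["tower"]] += entry["amount"]
    service_cost = defaultdict(float)
    for tower, total in tower_cost.items():
        for service, weight in service_weights[tower].items():
            service_cost[service] += total * weight
    return dict(service_cost)

print(roll_up(ledger, service_weights))
```

Running the sketch attributes $530,000 to application hosting and $270,000 to the service desk — a service-level view of cost that general-ledger categories alone cannot provide.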

With the help of Metis Strategy, an international financial services organization implemented a similar framework to gain more clarity about how its nearly $500 million budget aligned with business goals and created value for the company. We first analyzed the labor and managed service spend on key IT services such as application support, IT service desk, network and telecom, and other business functions. With this breakdown, the client was able to identify cost per employee based on location, job type, and which application or service the employee supported. This increased transparency ultimately allowed the organization to save or reallocate $15 million in costs.

Financial management maturity curve. Source: Metis Strategy

Increase transparency through showback or chargeback models

While service-based models provide greater cost transparency, they come with their own set of challenges. A common one is tracing shared infrastructure costs back to the business unit that consumed them. Often things like laptops or storage budgets are listed as run items that aren’t tied to specific business units. This often results in a large bucket of “run items” that no one outside of IT quite understands. Without the ability to see how these costs directly support business units, CIOs often face pressure to undertake arbitrary budget cuts.

To provide more clarity on how costs are allocated, adopt an allocation model across the entire financial portfolio. Based on their maturity, organizations typically use the following two allocation models:

  • Percentage-Based Allocation: Allocate shared costs evenly across all stakeholders or based on the size of the organization. This can be controversial but is often a good place to start.
  • Usage-Based Allocation: Fully define each service offering, implement the service in order to track usage by stakeholder groups, and allocate costs to each group based on usage. This is a common model with the advent of services like AWS, in which you “pay for what you use,” but organizations sometimes have difficulty segmenting internal user groups with legacy applications.
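To make the two models concrete, here is a minimal sketch under hypothetical figures (the business units, headcounts, and usage numbers are invented for illustration):

```python
# Sketch of the two allocation models for a shared infrastructure cost.
# Business-unit names, headcounts, and usage figures are hypothetical.

shared_cost = 120_000  # e.g., shared storage spend to allocate

business_units = {"Sales": 50, "Ops": 30, "Finance": 20}  # headcount
usage_gb = {"Sales": 700, "Ops": 200, "Finance": 100}     # metered usage

def percentage_based(cost, sizes):
    """Allocate in proportion to organization size (headcount here)."""
    total = sum(sizes.values())
    return {bu: cost * n / total for bu, n in sizes.items()}

def usage_based(cost, usage):
    """Allocate in proportion to metered consumption per stakeholder."""
    total = sum(usage.values())
    return {bu: cost * u / total for bu, u in usage.items()}

print(percentage_based(shared_cost, business_units))
print(usage_based(shared_cost, usage_gb))
```

Note how the same $120,000 lands very differently: usage-based allocation shifts cost toward the heaviest consumer, which is fairer but requires the ability to meter each service.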

After defining an allocation model, IT organizations should aim to influence business demand and accountability for IT services by educating business partners on the cost impact of their decisions. We recommend that IT start with a “showback” model that illustrates the cost allocation through a dashboard or report. This will give IT the data it needs to shape the demand for additional requests and have more productive conversations with colleagues: “What is the return on this investment? We can show the cost, but are you able to articulate the value?”

In many cases, a showback approach can create a sense of shared ownership for how a business decision may impact an IT budget (e.g., hiring 10 more people at $100 of IT cost each increases IT costs by $1,000). In other cases, where a single stakeholder is consuming a large volume of a service, or has a justifiable business need to control spending, a direct “chargeback” may be more appropriate. For example, if a business unit is driving a major sales campaign, it may need a burst of website capacity for a finite period of time. There is a clear return on the investment, but very little value in IT governing whether it is the right way to spend the money. The business unit should simply be charged directly for its consumption and be empowered to control its own destiny.

Shift the conversation away from costs and toward value

Once a well-defined financial management framework is established, IT can begin to shift conversations with business partners away from IT costs and toward IT-driven value. A showback or chargeback model will provide transparency on the total dollar spent and can also help illustrate the benefits and trade-offs of different initiatives.

Metis Strategy worked with a manufacturing company that went on this journey. The IT organization was responsible for running and maintaining the Manufacturing Execution Systems (MES) in the factories. Over time, the systems had become disjointed and expensive to maintain. However, upgrading them would be a multi-million-dollar project that would span two to three years. The CIO tried to make the case for an upgrade, but his proposal fell on deaf ears until he was able to articulate the hard and soft benefits of the upgrade to the business. Implementing a showback model allowed his team to build a robust business case that highlighted the potential for future savings by reducing data storage, maintenance and labor support costs. That financial information also allowed the CIO to show how the upgrade would create a more harmonious manufacturing environment and better access to data.

Integrate with your project portfolio management process

Financial management cannot happen in isolation from project and portfolio management (PPM) processes. Organizations need to align their portfolio to the company’s strategy, manage demand, and prioritize investments, all of which becomes easier when these key activities are supplemented with the right financial data. Instead of prioritizing a project portfolio based on arbitrary soft benefits, organizations with mature financial management practices can understand and quantify costs and negotiate a seat at the table by demonstrating value for the company.

Process first, technology second

There are many financial management solutions in the marketplace, but they will be of little use if they are codifying and scaling a broken process. Before adopting a solution to kickstart your financial management practices, it is important to start with the problem you are trying to solve and define the financial metrics that will help improve decision making. It is also critical to ensure your company can produce credible data. If the data collection, manipulation, publishing, and consumption processes are not ironed out first, organizations are likely to run into data quality issues, which can ultimately lead to a lack of trust, branding challenges, and a less successful implementation.

Dynamic companies need well-informed leaders who can quickly decide how to respond to a competitive threat, where to invest more money, or where to make tradeoffs. With IT budgets often among the top five cost centers in companies, a clearly defined IT financial management framework can provide greater cost transparency and help influence those decisions. An elevated financial management discipline will also strengthen relationships throughout the business by streamlining investment decisions and more clearly quantifying IT’s value.

This article originally appeared on CIO.com. Chris Boyd co-authored the piece.

As technology departments shift from traditional project management frameworks to treating IT as a product, the change is triggering a broader rethink about how technology initiatives are funded.

Under the existing “plan, build, run” model, a business unit starts by sending project requirements to IT. The IT team then estimates the project costs, works with the business to agree on a budget, and gets to work.

This setup has several flaws that hamper agility and cause headaches for all involved. Cost estimates often occur before the scope of the project is truly evaluated and understood, and any variations in the plan are subject to an arduous change control process. What’s more, funding for these projects usually is locked in for the fiscal year, regardless of shifting enterprise priorities or changing market dynamics.  

To achieve the benefits of a product-centric operating model, the funding model must shift as well. Rather than funding a project for a specific amount of time based on estimated requirements, teams instead are funded on an annual basis (also known as “perpetual funding”). This provides IT product teams with stable funding that can be reallocated as the needs of the business change. It also allows teams to spend time reducing technical debt or improving internal processes as they see fit, improving productivity and quality in the long run. 

“We have to adapt with governance, with spending models, with prioritization,” Intuit CIO Atticus Tysen said during a 2019 panel discussion. “The days of fixing the budget at the beginning of the year and then diligently forging ahead and delivering it with business cases are over. That’s very out of date.”

Business unit leaders may be skeptical at first glance: why pay upfront for more services than we know we need right now? A closer look reveals that this model often delivers more value to the business per dollar spent. For example:

  • Perpetual funding allows dedicated teams to have end-to-end accountability for the full product lifecycle, from introduction to sunset. This structure encourages teams to develop greater domain knowledge and to prioritize reusability and technical debt reduction, which can improve the technical estate and pay dividends on future iterations.
  • Rather than the business unit throwing business requirements “over the wall” to IT, business teams work directly with the product teams to co-author the product roadmap and validate the value, usability, feasibility and operational fit of an idea before it is prioritized in the backlog.
  • Perpetual funding promotes greater autonomy and empowerment since cross-functional teams are responsible for determining the best ways to invest on behalf of the business.
  • Progress is measured in terms of business outcomes achieved, and perpetual funding can be discontinued or reallocated by the business without a change control process.

Smart first steps

Shifting away from old ways and adopting a new funding model can seem like a daunting task, but you can get started by taking the following first steps:

Establish the baseline

First, establish the baseline against which you will measure the funding shift’s effectiveness. A technology leader must consider all the dimensions of service that will improve when making the shift. Two areas of improvement that have high business impact are service quality and price. To establish the baseline for service quality, it is important to measure things like cycle time, defects, net promoter score, and critical business metrics that are heavily influenced by IT solutions.

The price baseline is a little more difficult to establish. The most straightforward way we have found to do this is to look at the projects completed in the last fiscal year and tally the resources required to complete them. Start with a breakdown of team members’ total compensation (salary plus benefits), add overhead (cost of hardware/software per employee, licenses, etc.), and then communicate that in terms of business value delivered. For example, “project A cost $1.2M using 6 FTE and improved sales associate productivity by 10%.” When phrased this way, your audience will have a clear picture of what was delivered and how much it cost. This clear baseline of cost per business outcome delivered will serve as a helpful comparison when you shift to perpetual funding and need to demonstrate the impact.
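A hedged sketch of this baseline arithmetic, assuming hypothetical salary, benefits, and overhead figures that happen to reproduce the $1.2M example:

```python
# Fully loaded annual team cost: compensation plus per-employee overhead.
# The salary, benefits rate, and overhead figures are hypothetical
# placeholders chosen to match the article's $1.2M / 6 FTE example.

def annual_team_cost(headcount, avg_salary, benefits_rate, overhead_per_fte):
    """Total compensation (salary + benefits) plus per-FTE overhead."""
    compensation = headcount * avg_salary * (1 + benefits_rate)
    overhead = headcount * overhead_per_fte
    return compensation + overhead

# e.g., 6 FTEs at $160k salary, 20% benefits, $8k/yr hardware and licenses.
cost = annual_team_cost(6, 160_000, 0.20, 8_000)
print(f"Project A cost ${cost / 1e6:.2f}M using 6 FTE")
```

Pairing this number with the outcome it funded (“…and improved sales associate productivity by 10%”) turns a raw budget line into a cost-per-outcome baseline.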

Pilot the shift with mature teams

The shift to a new funding model will be highly visible to all business leaders. To create the greatest chance of success, focus on selecting the right teams to trial the shift. The best candidates for early adoption are high-performing teams that know their roles in the product operating model, have strong credibility with business unit stakeholders, and experience continuous demand.

In our work with large organizations piloting this shift, e-commerce teams often fit the mold because they have a clear business stakeholder and have developed the skills and relationships needed to succeed in a product-based model. Customer success teams with direct influence on the growth and longevity of recurring revenue streams are also strong candidates as their solutions (such as customer portals and knowledge bases) directly influence the degree to which a customer adopts, expands, and renews a subscription product.

Teach your leaders the basics of team-based estimation

Estimation in the product-based funding model is different than in the project model. Under the new model, teams are funded annually (or another agreed-upon funding cycle) by business units. As funding shifts to an annual basis, so should cost estimation. Rather than scoping the price of a project and then building a temporary team to execute it (and then disbanding after execution), leaders should determine the size and price of the team that will be needed to support anticipated demand for the year, and then direct that team to initiate an ongoing dialogue with the business to continuously prioritize targeted business outcomes. 

When completing a team-based cost estimation, it is important to include the same cost elements (salary, benefits, hardware, licenses, etc.) that were used to establish your baseline so that you are comparing apples to apples when demonstrating the ROI of product-based funding. Where you will see a difference in the team-based model is the resource capacity needed to meet demand. In a product model, a cross-functional team is perpetually dedicated to a business domain, and there is often zero ramp-up time to acquire needed business and technical knowledge.

Since the teams have been perpetually dedicated to the domain, they are encouraged to take a longitudinal view of the technology estate and are able to quickly identify and make use of reusable components such as APIs and microservices, significantly improving time to market. For these reasons, among others, teams in the product-based operating model with perpetual funding can achieve more business value for less cost.
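The apples-to-apples comparison might look like the following sketch, where the cost figures and outcome counts are hypothetical illustrations rather than benchmarks:

```python
# Compare cost per business outcome under the project model vs. the
# team-based model, using identical cost elements. All figures are
# hypothetical illustrations, not benchmarks.

def cost_per_outcome(total_cost, outcomes_delivered):
    """Dollars spent per major business outcome delivered."""
    return total_cost / outcomes_delivered

project_cost, project_outcomes = 1_200_000, 4  # baseline year, with ramp-up
team_cost, team_outcomes = 1_200_000, 6        # perpetual team, no ramp-up

baseline = cost_per_outcome(project_cost, project_outcomes)
product = cost_per_outcome(team_cost, team_outcomes)
print(f"Cost per outcome fell {1 - product / baseline:.0%}")
```

Because the cost elements match the baseline exactly, any improvement in the ratio is attributable to the operating model rather than to accounting differences.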

Pilot teams should work closely with the BU leadership providing the funding. Stakeholders should work together to generate a list of quantitative and qualitative business outcomes for the year (or other funding cycle) that also satisfy any requirements for existing funding processes operating on a “project by project” basis.

Talk with finance early and often

If you don’t already have a great relationship with finance, start working on it now. Your partnership with finance at the corporate and BU level will be critical to executing your pilot and paving the way to wider enterprise adoption of team-based funding models. Ideally, leaders should engage with finance before, during, and after the team-based funding pilot so that everyone is in lockstep throughout. This alignment can help bolster adoption in other areas of the enterprise.

Each finance department has unique processes, cultures, and relationships with IT, so while you will need to tailor your approach, you should broach the following topics:

  • Team-based funding basics. Explain the drivers for team-based funding, the timeline for a pilot, the mechanics of cost/benefits estimation, and how you plan to demonstrate ROI.
  • Business sponsorship. Let finance know that you have support from your corresponding business unit. In our experience, this goes a long way in the early phases of the discussion.
  • Baseline estimation. Discuss how you plan to develop your cost/benefit baselines to make sure you are thinking about your cost estimates accurately and holistically. At the end of the day, you will need finance to nod their heads when you demonstrate the ROI of the pilot and make the case for further adoption.
  • Funding process. Define how you marry your team-based pilot with existing funding processes. This helps ensure finance has everything they need in terms of qualitative and quantitative data to support the pilot in the interim.
  • Spend allocation. Discuss any changes to team structures that may impact finance’s attribution of spend to specific cost centers.
  • Capitalization. If you are currently a waterfall shop, communicate that your pilot teams will be working in agile to maximize the benefits of the funding pilot. Emphasize the need to agree on criteria for identifying capitalization candidates in the agile framework.

Evaluating and sustaining success

Measure success and demonstrate value

You will need to achieve success in the pilot to bolster adoption in other areas of the business. Your success needs to be communicated in terms that resonate with the business. As your pilot comes to an end, gather your baseline data and match it up with the results of your pilot. Put together a “roadshow deck” to show a side-by-side comparison of costs, resources, and business outcomes (Business KPIs, quality metrics, cycle times, NPS, etc.) before and after the shift to team-based funding.

Depending on your organization, it may be prudent to include other observations such as the number of change control meetings required under each funding model, indicators of team morale, and other qualitative benefits such as flexibility. Have conversations with other areas of the business that may benefit from team-based funding (start off with 1-on-1 meetings) and offer to bring in your partners from finance and the product teams as the discussion evolves. The most important part of your story is that the team-based funding model delivers more business impact at a lower cost than the old model.

Results governance

Establish light and flexible governance mechanisms to monitor the performance of teams operating in the team-based model. The purpose of these mechanisms is to validate that the increased level of autonomy is leading to high-priority business outcomes, not to review progress on design specs or other paper-based milestones. A $40B global manufacturing client adopting the team-based funding model established quarterly portfolio reviews in which BU leadership and the CIO review team results and the planned roadmap for the subsequent quarter. BU leadership is then given the opportunity to reallocate investment based on changing business needs, or can recommend the team proceed as planned.

Change management

It is important to communicate that this process requires constant buy-in from business units. While funds will be allocated annually, demand will need to be analyzed and projected on at least a quarterly basis, and funds should be reallocated accordingly. In cases where investments need to be altered in the middle of a fiscal year, it is important to note that the unit of growth in this model is a new cross-functional team focused on a targeted set of business outcomes. The idea is to create several high-performing, longstanding, cross-functional teams that have the resources needed to achieve targeted business outcomes, rather than throwing additional contracted developers at teams as new scope is introduced.

Making the shift from project-based funding to product team-based funding is a major cultural and operational change that requires patience and a willingness to iterate over time. When executed successfully, CIOs often have closer relationships with their business partners, as well as less expensive, more efficient ways to deliver higher-quality products.

In early 2015, when The Manitowoc Company decided to split into two companies, the executive leadership called on the CIO, Subash Anbu, to lead the charge.

The transformation would be the most consequential in its 113-year history. Leaders from the company, then a diversified manufacturer of cranes and foodservice equipment, decided that the whole of the diversified organization was no longer greater than the sum of its parts. It would split into two publicly-traded companies: Manitowoc (MTW), a crane-manufacturing business, and Welbilt (WBT), which manufactures foodservice equipment.

The CIO was a natural choice to lead a change of this magnitude because his role allowed him to understand the interconnectedness of the company’s various business capabilities, which processes and technology were already centralized or decentralized, and where there may be opportunities for greater synergy in the future-state companies.

Subject matter expertise, however, would not have been enough to qualify a candidate; the leader had to be charismatic, and Subash was widely recognized for his servant-leadership mentality. That would prove essential to removing critical blockers across the organization.

It was also important that the CIO had long-standing credibility with the Board of Directors, who were the ultimate decision makers in this endeavor.

Subash embraced the daunting challenge, saying, “While change brings uncertainty, it also brings opportunities. Change is my friend, as it is the only constant.”

Mitigating and managing risk

In some ways, splitting a company into two may be harder than a merger. When merging, you have the luxury of more time to operate independently and merge strategically.

When Western Digital acquired HGST in 2015 and Sandisk in 2016, CIO Steve Philpott decided to move all three companies to a new enterprise resource planning (ERP) system rather than maintain multiple systems or force everyone onto the incumbent Western Digital solution. When splitting a company, there is greater urgency to define the target state business model and technology landscape and execute accordingly.

The split introduced major consequences for Manitowoc: every business function would have to be duplicated within a fixed four-quarter schedule, all while the company continued executing its 2015 business plans. All business capabilities would be impacted, especially Finance, Tax, Treasury, Investor Relations, Legal, Human Resources, and of course, Information Technology.

While the Manitowoc Company had experience with divesting its marine segment (it started as a shipbuilding company in 1902), the scope and scale of the split was unprecedented for the company.

Breaking apart something that has been functioning together is an inherently risk-laden proposition. Subash and his team recognized that to mitigate risk, they would need to be both thoughtfully deliberate in planning and agile in execution, breaking big risks down into smaller ones and prioritizing speed over perfection.

As Subash led the split of the company into two, he encountered the following risks:

  • Business Environment Uncertainty. Credit markets were tightening for the company’s debt refinancing, creating concerns about the timing of executing the split. In addition, EU data privacy laws required approval from EU works councils to split the companies, so the team would have to work diligently to comply with evolving regulation.
  • Operating Model Definition. The organizational design for the two companies continued to evolve, with the creation and appointment of new C-level executive stakeholders. The target-state operating model was also in flux, as each of the new leaders evaluated how centralized or decentralized the future-state companies would be.
  • Employee Impacts. Initially, there was uncertainty for most corporate employees in the company. Which company would I work for? How would my role change? Where would I be physically located? With this uncertainty in the air, employee attrition during the project posed a material risk to meeting target deadlines.
  • Technology Target State. Manitowoc’s IT landscape had both centralized and decentralized application portfolios, which required a rationalization of the IT portfolio, as well as determination of what would be most appropriate for each company. Standards and enterprise agreements for the technology infrastructure that included synergies for volume pricing would have to be restructured and renegotiated, potentially introducing increased cost to the new companies. And, as with all functions, there would now have to be two IT departments.

5 lessons for successfully splitting a company

When splitting a public company, the deadlines and outcome are clear. How Subash and the team would execute the split of the company, however, remained largely undefined.

The enormity of the task could have created paralysis, but the team quickly began working backwards: getting on the same page with the right people; identifying the big-rock milestones; identifying the risks; sketching out a plan to reach the big-rock milestones; breaking the plan into smaller rocks to mitigate risk; and keeping everyone informed as the plan unfolded in greater detail.

In the process, Subash learned five critical lessons that all executives should heed before splitting a company:

1. Establish a separation management office and steering committee

Splitting a company requires cross-functional collaboration and visibility at the strategic planning and execution level. Start by creating a Separation Management Office, consisting of senior functional leaders that will oversee the end-to-end split across HR & Organizational Design, Shared Services & Physical Location Structuring, IT, Financial Reporting, Treasury & Debt Financing, Tax & Legal Entity Restructuring, and Legal & Contracts. The Separation Management Office should report to a Steering Committee consisting of the Board of Directors, CEO, CFO, and other C-level leaders. When faced with difficult questions that require a decision to meet deadlines, the Steering Committee should serve as the ultimate escalation point and decision maker to break ties, even if it means a compromise.

2. Assemble the right project team

A split will require dedicated, skilled resources who understand the cross-functional complexities involved. This project team will need people who understand the interconnectedness of technology architecture, data, and processes, balanced with teams that can execute many detailed tasks. When forming the team, it is important to orient everyone around the common objective to create unity; departmental silos will not succeed. Variable capacity will almost certainly be necessary for major activities, and you may be able to stabilize your efforts by turning to trusted systems integrators or consulting partners to help guide the transition.

3. Sketch out the big-rocks project plan and manage risk

Agile evangelists often frown upon working under the heat of a mandated date and scope, but a public split forces such constraints. Treat the constraints as your friend: Work backward to identify your critical operational and transactional deadlines. Ensure the cross-functional team is building in the necessary lead time, especially when financial regulations or audits are involved. Dedicate a budget, but be prepared to spend more than you anticipate, as there will always be surprises to which teams will have to adapt. As part of your project planning, create a risk management framework with your highest priority risks, impacts, and decision makers clearly outlined. When time is of the essence, contingency plans need to be in place to adapt quickly.

4. Prioritize speed over perfection

Any time a working system is disassembled, there unquestionably will be problems. The key is not to wait for a big bang at the end to see if what you have done has worked. Spending nine months planning and three months executing this split would have introduced new risks. Instead, Subash and his team built their plan and then iteratively built, tested, and improved in an agile-delivery process. The team was able to identify isolated mistakes early and often, allowing them to proceed to the following phases with greater confidence—not with bated breath.

5. Communicate relentlessly

In a split, every employee, contractor, supplier, or customer will be impacted. Create a communication plan for the different personas: Steering Committee, operational leaders, functional groups, customers, partners and suppliers, and individual employee contributors. The Manitowoc Company had to communicate on everything from where people would sit, to who would be named as new organizational leaders. In the void of communication, fear and pessimism can creep in. To prevent this, the Separation Management Office launched “Subash’s Scoop,” a monthly newsletter on the separation progress. It brought helpful insight, with a flair of personality, to keep the organization aligned on its common goal.

The Manitowoc Company successfully split into two public companies—Manitowoc (MTW) and Welbilt (WBT)—in March 2016, hitting its publicly-declared target. In fact, many of the critical IT operational milestones were completed in January, well in advance of the go-live date.

Over the last two years, the stock prices for both companies have increased, validating the leadership evaluation that the whole was no longer greater than the sum of its parts.