
Enterprises that embraced product thinking years ago are now discovering they built the perfect foundation for an AI-first future.

The shift to an AI-first enterprise is not simply about adopting new tools or building bigger data pipelines. It is about rethinking how the organization operates, giving AI the right of first refusal to solve problems, and extending its capabilities beyond a central team into the business units and functions that own outcomes.

It is a bold ambition, and for many, the hardest part is knowing where to start. But for organizations that have already embraced a product-oriented operating model, the groundwork for an AI-first future may already be in place.

Product Thinking as the Foundation for AI

In many ways, the product model was AI’s organizational dress rehearsal. Companies that have embraced it have already restructured around enduring business capabilities and customer journeys such as “market and sell products” or “quote to cash.” They have aligned teams by domain, empowered product owners to drive measurable outcomes, and brought together cross-functional expertise spanning business, design, and technology.

That combination of empowerment, context, and accountability creates fertile ground for AI. When context is king, and it always is with AI, domain-aligned teams already possess the deep understanding of data, customers, and processes needed to identify meaningful use cases and implement them responsibly. Moreover, product teams are accustomed to operating within enterprise standards for architecture and security, the same scaffolding required to scale AI safely and sustainably.

Why Project-Based Organizations Struggle

For organizations still structured around a project-based, plan-build-run model, the road to AI is much steeper. Teams form and disband based on project timelines rather than enduring business capabilities, making it difficult to know where to embed or federate AI expertise. Without standing teams, there is no clear home for ownership, and no natural connection between business outcomes and the application of AI.

This model also reinforces dependence on a central data and AI function that is already spread thin. Demand outpaces supply, and with finite centralized resources, scaling becomes a bottleneck rather than a catalyst. In contrast, a domain-based product model allows business units and functions that already own their product outcomes to begin funding, prioritizing, and managing their own AI initiatives. This shift toward democratization, teaching each domain to fish rather than feeding them from the center, is where true enterprise-scale AI begins to take shape.

A Fortune 500 Example: The Power of a Product Foundation

One Fortune 500 company recently discovered how natural this transition can be. Facing a fundamental business model shift from selling SKUs to delivering integrated solutions that cut across traditional organizational seams, they adopted a product domain model to strengthen collaboration between business and technology, increase visibility, and align priorities across the enterprise.

Once the model was in place, they turned immediately to AI. Because the structure was already built for alignment and empowerment, the pivot was seamless. In their B2C business unit, half a dozen product teams with empowered product owners and cross-functional resources could evolve into AI-empowered teams in an accelerated fashion. The central data and AI hub continued to provide the standards, platforms, and governance required for responsible adoption, while domain-based AI centers of excellence emerged within each product domain to drive adoption, education, and technical execution.

Those domain centers became both catalysts and guardians, spreading AI literacy, providing engineering horsepower, and ensuring security and governance standards were applied consistently. And because the organization already operated with agile ways of working and quarterly planning routines, coordinating AI priorities across teams became a natural extension of how the enterprise already worked. The result was not a reinvention of the operating model, but an evolution of it.

Looking Ahead: Product and AI as Twin Transformations

For organizations still living in a project world, the good news is that product and AI transformations can happen in tandem. As 2026 approaches, a year many are calling the “scale or fail” moment for enterprise AI, leaders should view this as a window to reshape the foundation that will determine long-term success.

It will not be easy. Real change rarely is. But for companies willing to invest in building durable, domain-aligned structures that connect technology to business outcomes, the payoff is profound. If a year of transformation is what it takes to unlock a decade of advantage, then the juice is well worth the squeeze.

This article was originally published on CIO.com by Metis Strategy Partner Michael Bertha.

Anyone who has shopped for a car has noticed the miles per gallon printed on the window sticker. You get excited when you see a number like 35 MPG, but once you drive the car home, your dashboard seems to stay closer to 26. It often feels as though those numbers assume you are driving downhill with the wind at your back.

A similar pattern shows up in conversations about AI productivity. Whether the estimates come from a major consulting report, a vendor eager to sell the latest platform or an urgent request from a CEO for a quick ROI projection, they almost always paint an overly optimistic picture. The results look great on paper, but they rarely hold up in practice. When leaders plan around those inflated expectations, they set themselves up to miss the mark.

As a digital and strategy consultant, I have found that we need a better way to set realistic expectations, one that balances the excitement of potential with the discipline of execution. Interestingly, the inspiration for that approach came not from technology but from finance.

Borrowing from finance: The power of discounting

Anyone who has taken an introductory finance course remembers the concept of discounted cash flow analysis. When valuing an investment, you do not simply total all future cash flows. You discount them to reflect both time and risk. A dollar tomorrow is worth less than a dollar today, especially if there is uncertainty about whether that dollar will ever appear.
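The mechanics are simple enough to sketch in a few lines. The cash flows and discount rate below are illustrative assumptions, not figures from any engagement:

```python
# Discounted cash flow: a dollar expected later is worth less today.
# All figures here are hypothetical, chosen only to show the mechanics.

def present_value(cash_flows, rate):
    """Discount each year's cash flow back to today at the given rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

# $100 expected in each of the next three years, discounted at 10%:
flows = [100, 100, 100]
pv = present_value(flows, rate=0.10)
print(round(pv, 2))  # 248.69, well below the undiscounted $300
```

The gap between the undiscounted $300 and the $248.69 present value is exactly the kind of haircut AI productivity estimates deserve.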

A similar mindset helps when assessing AI productivity. The headline productivity gain, for example, “Copilot can double developer output,” represents the gross potential. To reach a realistic number that you can plan around, you need to apply discounts that account for three things: the human effort required to reach an outcome, the gradual ramp-up of adoption and the risk that comes with AI’s imperfections.

Human effort: The human + machine reality

Generative AI acts as an accelerator, not an autopilot. People are still essential to frame the problem, guide the model and validate the output. In software engineering, for instance, tools like GitHub Copilot can produce working code instantly, yet much of that code still needs debugging, testing and revision.

In one client pilot, our team found that engineers spent roughly a quarter of their time reviewing or rewriting AI-generated code. The net productivity gain was meaningful but well below the theoretical doubling of output that vendors often cite. It was closer to a 40% improvement, which proved far more believable and sustainable. The key lesson was clear: AI amplifies talent but does not replace human judgment. Accounting for that in your projections makes your models more credible and your plans more realistic.

Ramp-up and adoption curve

Another important discount reflects the pace of adoption. Productivity gains from AI do not arrive all at once. As with any enterprise technology, adoption follows a curve shaped by learning, experimentation and scaling.

One of our Fortune 500 manufacturing clients modeled this curve while deploying Copilot for code development. In the first year, only about a quarter of developers were active users. Over time, they projected steady growth in adoption, with both costs and benefits expanding as the tool became part of daily workflows. By modeling a four-year adoption period, they could present a credible ROI trajectory that matched the organization’s ability to absorb change. The result was a measured, believable forecast rather than a sharp, unrealistic surge.
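A ramp like that is straightforward to model. The adoption shares and per-user gain below are illustrative placeholders, not the client's actual numbers:

```python
# Hypothetical four-year adoption ramp for a coding assistant.
# Adoption shares and per-user gain are assumptions for illustration.

adoption = [0.25, 0.50, 0.75, 0.90]  # share of developers actively using the tool each year
per_user_gain = 0.40                 # realistic per-user productivity uplift

# Organization-wide gain each year = adoption share x per-user gain
yearly_gain = [round(share * per_user_gain, 3) for share in adoption]
print(yearly_gain)  # [0.1, 0.2, 0.3, 0.36]
```

Even with a 40% per-user uplift, the organization-wide benefit starts at roughly 10% and only approaches the full figure as adoption matures, which is why single-year ROI projections so often disappoint.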

Even with a strong business case, productivity must be earned. Adoption takes time, training and reinforcement. When you include that reality in your estimates, the projections become both defensible and actionable.

Risk adjustment: Accounting for AI’s hallucinations

Every productivity model should also include a discount for risk. Even the best systems can produce errors and the costs of those mistakes, both operational and reputational, can be significant.

We have all seen examples in the news. Earlier this year, a global technology company withdrew a marketing campaign after its AI image generator created offensive results. The issue was not a lack of oversight; it was an underestimation of risk. The company spent weeks addressing the problem, coordinating communications and repairing trust. That period of recovery consumed time and resources that could have been spent on productive work.

When estimating productivity, CIOs need to plan for these inevitable setbacks. The time and effort required to validate outputs, correct errors and perform remediation should be built into the analysis. Just as investors demand higher returns for riskier assets, technology leaders should temper productivity expectations for higher-risk AI applications.

From concept to practice

This idea of discounted productivity becomes powerful when applied in real situations. Imagine a software engineer using Copilot. The theoretical potential might suggest a doubling of productivity, but after adjusting for human oversight, the gradual adoption curve and risk, the realistic gain might fall closer to 30% or 40%.

When visualized, the result looks like a waterfall chart. You begin with the total AI opportunity, then reduce it step by step to account for human effort, phased implementation and risk. What remains is the achievable productivity impact, the number you can confidently share with your CFO and CEO, knowing it reflects how your teams actually operate. And as your teams gain experience, you may even outperform that estimate.
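The waterfall arithmetic itself is a few multiplications. The discount factors below are illustrative assumptions; in practice each would come from pilots and measurement:

```python
# Waterfall from headline AI gain to achievable gain.
# Each discount factor is an illustrative assumption, not a measured value.

gross_gain = 1.00      # "doubles output" = +100% theoretical gain
human_effort = 0.60    # keep 60% after review and rework overhead
adoption = 0.70        # keep 70% given a partial first-year ramp
risk = 0.85            # keep 85% after an error-remediation allowance

net_gain = gross_gain * human_effort * adoption * risk
print(f"{net_gain:.0%}")  # 36%, in the 30-40% range described above
```

Because the discounts compound multiplicatively, even modest haircuts at each step pull a 100% headline down to roughly a third, which is what the chart makes visible step by step.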

The bottom line: Your AI mileage will vary

AI is transforming how we work, but just like the miles per gallon rating on a new car, your results will depend on your terrain, your driver and your discipline. By adopting a discounted productivity mindset, CIOs and technology leaders can close the gap between AI promise and practical performance, setting expectations that are credible, defensible and achievable.

Because with AI, just like with driving, your mileage will vary.

This article was originally published on CIO.com by Metis Strategy Partner Michael Bertha.

AI is everywhere and nowhere at the same time. Every week brings another headline about breakthroughs, pilots, or fears of being left behind. For CIOs, the harder question isn’t what AI can do but where to start.

Praveen Jonnala, global CIO at network infrastructure provider CommScope, has a deceptively simple answer: apply the 80/20 rule.

In his view, 80% of success with AI has little to do with the technology itself. It comes from preparing the organization by clarifying purpose, aligning skills, and putting the right guardrails in place. “Success comes from organizational readiness,” he says. “The other 20% is the tech.”

And even within that 20%, Jonnala insists on continued focus. Just as many industrial markets derive 80% of revenue from 20% of customers, AI value follows a similar curve. The companies that win aren’t the ones experimenting everywhere but the ones doubling down on the 20% of use cases that can transform revenue, profitability, customer experience, or time to market.

CommScope, based in North Carolina with about 20,000 employees, reflects that philosophy: it doesn't sell AI models, but gains a competitive edge from how effectively it applies AI to factories, procurement, engineering, and customer service. That pragmatism is the blueprint of the 80/20 operating model: focused, measurable, and built for repeatability.

Culture before code

When AI enthusiasm surged a few years ago, CommScope encouraged exploration but soon saw the downside of too many pilots and tools. “Everyone tinkered with chatbots, and then the noise started,” Jonnala says. “We had to bring it back to value.”

In a manufacturing business, he adds, adoption depends on making employees see AI as a tool, not a threat. “People are rejecting AI and worry it’ll replace them,” he says. “You have to put them in the position of validators so the workflow changes but the work still belongs to them.”

CommScope let employees try different tools in 2024 while quietly monitoring usage. The next step was standardization, launching Microsoft 365 Copilot and consolidating code assistants. Leadership buy-in proved decisive. “If a manager says, don’t worry, this is IT’s problem, adoption dies,” he says. “If leaders become the messengers, adoption grows.”

For Jonnala, education is ongoing. “With 20,000 employees, the challenge is keeping awareness alive without pulling people out of their day jobs,” he says. “We’re trying short videos, sharing wins, and making the ‘why’ explicit. Culture compounds over time.”

Focus, not frenzy

To keep ambition manageable, CommScope deliberately limits scope. “We allow curiosity, but we don’t chase 50 things hoping one works,” Jonnala says. “We’re in the business of consuming AI for impact, not selling it.”

The filter is simple: Will the use case transform the customer experience, drive revenue or margin, improve factory performance, or accelerate the roadmap? If the answer is fuzzy, the idea stays experimental. If it’s concrete, the company funds it and holds business owners accountable.

Procurement is one example. During supply chain disruptions, an AI assistant helped triage suppliers and accelerate approvals; the measurable results, shorter cycle times and higher adoption among buyers, earned credibility. "Improving procurement during volatility is tangible," he says. "That kind of result earns adoption."

Of course, the plumbing makes or breaks scale. CommScope treats data as an asset and builds in oversight, lineage, and controls. Jonnala stresses risks that are easy to overlook, like feeding supplier code into a generation tool without consent. "Our job is to keep people safe while letting them move," he says.

The company tightened boundaries by standardizing on a small set of tools, embedding security and IP reviews in the intake process, and adding governance as pilots scaled. Boards, he notes, start with risk. “We make it explicit,” he says. “We’re protecting the company, experimenting in a governed way, and investing where the scoreboard shows value.”

Measuring results that matter

Boards don’t need to understand model parameters. They need consistent signals of progress reported by business owners with technology at the table. CommScope tracks adoption, cycle time, quality, throughput, customer impact, financial contribution, and risk posture. The point is not perfection but consistency quarter after quarter.

That ties into an annual rhythm. In Q1, the company selects high-impact use cases and readies data. In Q2, it delivers a POC and shares lessons learned. In Q3, it scales what works by strengthening pipelines and controls, and in Q4, it abstracts and reuses components for the next wave. “A focused workshop with a partner can compress months of learning into a day,” Jonnala says. “We’re not reinventing the wheel. Our problems look like everyone else’s in manufacturing. We can learn from those ahead of us.”

Operating model, not model hype

What endures, Jonnala says, isn't the toolset but the operating model, since tools evolve every week but culture compounds over time. That means embedding technology leaders in the business, being transparent about risks, and insisting on measurable outcomes.

He reframes the developer debate as an example: “We didn’t set a target for fewer people,” he says. “We set a target for earlier releases and safer code. That’s a better conversation.”

Boards, for their part, focus on risk, product roadmap, and customer experience. Revenue follows when those are managed well.

The question every CIO should answer

AI tempts organizations to do a little bit of everything, but the enterprises pulling ahead are the ones doing a lot with very few things, and with discipline. They put culture before code and focus on two use cases at a time. They also invest in data and guardrails so pilots can scale, and report the same scoreboard quarter after quarter. And they reuse what works while retiring what doesn’t.

Jonnala’s challenge to peers is direct. “Be honest about your focus,” he says. “Are you spending most of your energy on the 80 — readiness, leadership, governance, adoption — and when you do invest in the 20, are those the few use cases that can move your P&L?”

For CIOs approaching AI as consumers, not creators, that's the operating model that compounds.

This article was originally published on CIO.com by Metis Strategy Partner Michael Bertha.

For decades, power companies were the definition of dull. Powering your home was necessary but uninspiring, a commodity no different than running water. NRG Energy is proving that view outdated with acquisitions, partnerships, and a willingness to reimagine what a power generator can be, turning electricity into an engaging experience.

“Energy providers have always been thought of as the background players in people’s lives,” says Dak Liyanearachchi, the company’s EVP and CTO. “Our job now is to move to the foreground and show that energy can be personal and intelligent.”

From residential power to connected experiences

NRG began as a power generator before broadening its reach to serve residential customers. In recent years, it’s redefined its consumer strategy by bringing together electricity, smart home solutions, and digital platforms. The acquisition of Vivint Smart Home and partnerships with Renew Home gave NRG the foundation to link thermostats, energy services, and residential devices into one ecosystem, unlocking the potential of virtual power plant (VPP) technology.

A VPP aggregates distributed resources like thermostats, batteries, and rooftop solar across thousands of homes. Coordinated effectively, it becomes a flexible network that shifts demand, balances supply, and supports the grid during times of crisis.

The result is energy management that feels seamless and intentional. The Vivint and Nest thermostats don't just heat and cool; they dynamically reduce demand on the grid during peak hours, improving reliability while unlocking new capabilities for the company. Consumers see lower bills and greater comfort, but behind the scenes, the technology powers something larger.

In Texas, where extreme weather and surging demand from AI data centers can strain the grid, NRG’s VPP approach provides resilience. The company is targeting a gigawatt of capacity, enough to power hundreds of thousands of homes without building a single new plant.

“This isn’t just about keeping the lights on,” Liyanearachchi explains. “It’s about turning millions of small resources into a collective force that ensures reliability, reduces costs, and creates value for everyone.”

Rewiring technology to match strategy

NRG’s consumer story wouldn’t be possible without reimagining how technology operates inside the company. Under Liyanearachchi’s leadership, NRG has moved to a product operating model, aligning business and technology teams around shared outcomes rather than siloed projects.

“Historically, electricity providers approached technology as a request and deliver model,” he says. “The business would hand over requirements, and six to 12 months later, technology would deliver, often out of sync with the need. We knew that wasn’t sustainable if we wanted to bring smart home and energy into one seamless experience.”

The shift has introduced budget transparency, empowered business leaders to set technology priorities, and created a single cadence for planning and execution. Quarterly sessions also bring residential power, smart home, and energy teams together to align on customer priorities and deliver integrated products.

When developing new home energy offerings, NRG's teams now also span smart home, energy, engineering, and business leaders. The result is concepts moving quickly into products delivered to millions of households.

Additionally, AI enablement throughout the software development lifecycle is poised to continue pushing the operating model toward leaner, faster deployment, while promoting democratized AI usage by non-technology team members.

“The product operating model has given us a shared language,” Liyanearachchi says. “It’s no longer about technology supporting the business from the sidelines. We’re building solutions together, side by side.”

The AI catalyst

AI runs through both NRG's industry outlook and its internal strategy. Traditional machine learning powers personalization, fraud detection, and forecasting, while generative AI has opened new horizons, particularly in customer care and sales, where it reduces friction and improves both customer and employee experience.

“We see AI through two lenses,” Liyanearachchi says. “One is industry wide, with the surge of data centers, EV adoption, and smart devices creating unprecedented demand for power. AI will be part of managing that. The second is internal, using AI to transform how we work, from forecasting weather-driven demand to reimagining customer interactions.”

NRG’s transformation office plays a central role, too, looking beyond immediate use cases to envision the business of 2030. Forecasting demand with more precision, integrating renewables, and managing a grid strained by AI workloads are all on the agenda.

Building for unprecedented demand

States like Texas, Georgia, and Virginia face surging energy consumption from population growth, electrification, and AI-driven data centers. Left unchecked, that demand could result in brownouts or blackouts.

NRG is addressing this challenge on two fronts: through its VPP strategy, which reduces strain on the grid by orchestrating demand at scale, and through continued investment in traditional generation to ensure reliability by acquiring assets such as the Rockland portfolio, which closed earlier this year, and pursuing additional opportunities like LS Power (subject to close).

“It isn’t either or,” Liyanearachchi adds. “The future of energy will be both virtual and physical. We’re investing in power plants and power pixels, the distributed intelligence that turns homes into part of the solution.”

Lessons for leaders

For Fortune 500 CIOs, NRG’s transformation is more than a power provider story. It’s a playbook for reinvention. Energy may be the context, but the underlying themes resonate across industries.

NRG shows what it looks like to turn a commodity into an experience, shifting electricity from a background service into something customers actively value. It demonstrates how aligning business and technology through a product operating model can move IT from service provider to co-creator. It also highlights how AI can serve not only as an efficiency driver, but as a catalyst to reimagine how the business operates and engages customers.

The reminder is clear that even the most commoditized industries can reinvent themselves. The real question is whether leaders are ready to challenge assumptions and rewire their organizations to make it happen.

For NRG, the days of being seen as just another power provider are over. The company is showing that transformation isn’t reserved for digital natives. It’s available to any industry bold enough to put experience at the center.

“Power may never be sexy in the traditional sense, but when you deliver comfort, reliability, and innovation into people’s homes, and reshape how the grid works in the process, that’s pretty exciting,” says Liyanearachchi.

This article was originally published on CIO.com by Metis Strategy Partner Michael Bertha.

The days of treating corporate and digital strategy as separate entities are over; their convergence has become central to data-driven transformation. Yet very few companies believe in it strongly enough to restructure themselves around that reality.

“Investments and behaviors follow org design,” says Sandeep Davé, CBRE’s chief knowledge officer. “In the era of AI, the world requires integration, not isolation.”

A new role for a new reality

For Davé, becoming chief knowledge officer represents more than a role change. It reflects a deliberate reframing of how CBRE approaches technology, strategy, and data.

The company, which serves clients in more than 100 countries and offers services ranging from capital markets and leasing advisory to investment management, project management and facilities management, has long been strategy-led. By elevating Davé from CDTO into a newly created role that unites corporate strategy, research, and data with the overall technology direction, CBRE is signaling that these functions are inseparable in shaping the company’s future.

Davé points to three forces behind the move.

The first is scale and complexity. With its scale, spanning clients, asset classes, services, and global reach, CBRE needed a way to harness and translate its vast data assets into knowledge and insights. “If we can see every property we touch, and convert that data into knowledge and insights, we create a formidable competitive moat,” he says.

The second is AI as a differentiator. “The thing that distinguishes what you can do with AI is data,” he says. “If AI delivers the transformative impact it promises, then your data foundation, governance, and strategic alignment will determine your rate of success.”

The third is organizational maturity. After years of scaling cutting-edge technology across the business, including the latest AI offerings, CBRE now has the platforms, infrastructure, and cultural readiness to take a bold next step. Functions that once operated in isolation are being reshaped.

Research offers a clear example of that reshaping. “We’re elevating our global research function by streamlining processes and improving outcomes,” Davé says. “Now, by applying AI and automation, we’re increasing efficiency while also significantly enhancing the quality of our outputs.”

Nothing in isolation

The unification of functions reinforces a central truth of broad technology and AI adoption: context is king. Without it, even the most advanced tools deliver limited results. When technology, research, and strategy move together, the impact can be transformative.

Evidence of that transformation is already visible at CBRE. In facilities management, predictive analytics now inform repair-versus-replace decisions, cut duplicate work orders, and optimize service delivery. And across the enterprise, more than 65,000 employees use Ellis AI, the firm’s gen AI platform, to access trusted data, generate insights, and automate routine tasks.

“Tools alone don’t bend the cost curve,” says Davé. “It’s important to understand the environment, intent, and nuances that shape intended outcomes. When we combine the richness of our data with the insight of our people and the discipline of strategy, AI stops being a showcase of use cases and becomes a driver of real market differentiation.”

Lessons for the C-suite

CBRE’s creation of this role, and its decision to place Davé in it, signals a deliberate strategic bet: consolidating key functions under one remit makes them more purposeful and productive.

Conway’s Law is a useful reminder here: systems often mirror the communication patterns and structures of the organizations that build them. Fragmented companies and cultures yield fragmented technology solutions. For leaders serious about capturing the full value of AI, progress will demand more than governance frameworks or technology investments. It may require rethinking reporting lines, incentives, and collaboration models.

Convergence isn’t just an idea, it’s an operating model. And the future of AI-driven transformation will be shaped not only by the technology deployed, but how organizations choose to design themselves around it.

In a world where innovation is outpacing leadership playbooks, executives need fresh approaches to prepare for tomorrow’s enterprise challenges. On September 9 and 10, a select group of digital and technology leaders gathered in Washington, DC for the Metis Strategy Technology Leadership Institute, a two-day immersive experience designed to help high performers step confidently into enterprise leadership.

Executives from leading organizations, including Viatris, Intuit, Thermo Fisher Scientific, Goodyear, FINRA, Hearst, and Ingram Micro, came together to sharpen their leadership capabilities, learn from one another, and expand trusted networks that will serve them well beyond the event.

Crossing the Chasm in the Age of AI

“What got you here won’t get you there.” That familiar refrain framed the Institute once again, but this time it carried fresh urgency. The rapid rise of artificial intelligence has opened extraordinary opportunities while also heightening the need for technology leaders to stretch beyond execution and into enterprise strategy.

Peter High, President of Metis Strategy, set the tone in his opening remarks. “With the emergence of AI, the role of the CIO and other technology leaders has never been more critical,” High noted. “There has never been a greater opportunity for technology executives to take the reins and drive business outcomes. Yet making the leap from VP to C-suite still requires deliberate action, and that’s the gap this program is designed to close.”

A Curriculum for Future CXOs

Over two days, participants immersed themselves in collaborative sessions tailored to the realities of modern enterprise leadership:

  1. Developing Your Brand – Building credibility across five dimensions essential to the C-suite
  2. Business-Driven Technology Strategy – Moving from operational excellence to enterprise-wide impact
  3. Next-Generation Operating Models – Adopting digital ways of working to increase agility and scale
  4. Emerging Technologies and AI – Harnessing disruption to accelerate transformation
  5. Influence and Change Leadership – Mastering storytelling, communication, and stakeholder alignment

Artificial intelligence was a thread running through every topic. Participants examined how AI will reshape product development, team design, and culture, and how leaders can deploy it responsibly while keeping pace with business demands.

As Chris Davis, Partner and West Coast Office Lead, noted, “Leaders must understand how to operationalize AI responsibly and at speed if they want to keep pace with business demands.”

Building Personal Advisory Networks

The Institute was as much about people as it was about content. Participants candidly shared their growth areas, identified peers who could help them accelerate development, and built personal advisory circles intended to last well beyond the two days together.

Mike Bertha, Partner and Central Office Lead, emphasized the importance of this dynamic, stating, “No leader succeeds in isolation. The Institute gives executives a safe space to learn from one another, build lasting relationships, and strengthen the muscles they’ll need to step confidently into enterprise leadership.”

Those connections grew even stronger outside the sessions. On the first evening, the group gathered for dinner, where laughter, lively conversation, and shared stories reminded participants that building leadership skills also means building relationships. Over the meal, executives compared leadership challenges, traded perspectives on AI adoption, and debated strategies for scaling cultural change.

Key Takeaways for Leaders

As the Institute concluded, several themes emerged as guideposts for leaders preparing for the next chapter of their careers:

As one executive put it, “If you don’t carve out time to think and read during the day, you’ll never build the perspective required to lead at scale.”

The Path Forward

The Institute underscored that leading in the age of AI is not just about mastering technology but about elevating leadership itself. Success in the C-suite will come to those who can move with speed, scale their influence, and maintain the strategic perspective to guide their organizations through disruption. While technology is rewriting the rules of business, it is leaders who will ultimately define what success looks like in this new era.

This article was originally published on CIO.com by Metis Strategy Partners Michael Bertha and Chris Davis.

Most organizations are somewhere on the path to becoming AI-first, but few have figured out how to scale their wins. It’s one thing to have pilots popping up in silos; it’s another to orchestrate those wins across the enterprise in a way that builds momentum. In our work, one of the clearest indicators of AI maturity is whether an organization has stood up a formal AI center of excellence (CoE). 

Can you get wins without one? Absolutely. But can you scale those wins in a repeatable way across people, process and technology without a CoE? We haven’t seen it yet.

The AI maturity journey

While numerous AI maturity frameworks exist, we typically define AI maturity across five distinct levels. The earliest is what we call the student stage. Here, organizations are recognizing the opportunity, educating teams and making sense of the technology landscape. Many companies lived in this zone throughout 2023 and some part of 2024. 

Then comes the explorer stage. This is when AI use cases start taking root, but in pockets. In some orgs, AI gets folded into broader digital transformations. In others, it’s managed off the side of someone’s desk, without dedicated resources or consistent tooling. It’s progress, but often chaotic, and hard to scale. 

Still, these early phases serve a critical purpose: they help organizations prove the business value of AI. Even when pilots are executed in silos, they can provide the kinds of wins that help justify investment in a formal CoE. Textron Aviation, for example, launched a single use case that, in some cases, boosted productivity of maintenance tasks by 900%. They did it without a dedicated team, budget or enterprise tech stack. But the results spoke for themselves, and when other business units began adopting the solution, it created the momentum to accelerate adoption of AI across the enterprise. In other words: don’t underestimate the importance of getting some runs on the board. Most organizations won’t be able to fund a CoE without them. 

Then comes the turning point: the builder stage. This is where organizations begin laying the foundation to industrialize AI. The biggest leap from explorer to builder is the emergence of a formal AI Center of Excellence. This is the structure that transforms scattered experimentation into coordinated momentum. It facilitates the creation of an enterprise AI strategy, defines and enforces governance and serves as product manager for a central data and AI marketplace of reusable components. It also drives enablement across the organization — equipping executives to steer with strategy, enabling technical teams to build effectively, and empowering power users to adopt AI tools with confidence. All of this is done with the intent of establishing a federated operating model. While it may not lead every initiative, the CoE provides critical expertise to the initiatives that matter most. In short, it is the backbone for scaling AI responsibly.

From there, organizations can advance to the scale stage and ultimately to commander territory, embedding AI into how they design, deliver and operate across every part of the enterprise.

Getting the CoE right

So, if the CoE is such a pivotal unlock, how do you design it for success? While other variations exist, we’ve seen three models in the field, each with its own strengths and shortcomings.

The consultative model

This is the lightweight option: a team of thinkers, often chartered to set policy and review use cases. It can feel a lot like a governance PMO or an enterprise architecture group that draws beautiful pictures but doesn’t actually build anything. In most cases, this model lacks execution power and can come across as red tape. We don’t recommend it.

The shared service model

A step up. Here, the CoE assembles generalist AI teams and “loans” them out to business units to work on prioritized use cases. As they embed with teams, they help drive adoption of enterprise standards and tooling. But the catch is context, especially industry and functional context. Without deep business knowledge, these teams often end up spending too much time asking basic questions. Helpful? Yes. Scalable? Not quite.

The teach-to-fish model (our recommendation)

This model strikes the balance. The CoE acts as a central hub for strategy, enablement, standards and education. But delivery happens in the spokes, within the business units and functions themselves. The CoE exists to empower, not to approve. It provides infrastructure, reusable assets, training and guardrails. The BUs retain ownership for use case delivery, funding and outcomes.

Every organization will scale differently, but there are four roles we consistently see as critical to making this model work effectively.

First is the AI strategy leader. This person serves as the connective tissue across the enterprise, defining the AI roadmap, evolving the operating model and orchestrating how the rest of the CoE supports the broader organization. They think through priorities, risks and investment sequencing, and often develop reusable assets such as intake forms, validation templates and lifecycle checklists that domain teams can adopt and adapt for their own use cases. They also play a critical role in promoting awareness and adoption of responsible AI frameworks, facilitating reviews for sensitive or cross-functional use cases, often via an ethics or risk committee.

The second essential role is the architect. This individual owns the technology architecture that underpins enterprise-scale AI. They’re responsible for designing and maintaining the shared infrastructure: things like secure, GPU-enabled sandboxes, model registries and MLOps pipelines. These inputs allow domain teams to build and deploy responsibly and efficiently. They also define and enforce enterprise-wide data governance standards, recognizing that, like any technology, AI depends entirely on the quality and context of the data it consumes.

Next is the teacher, a role we think every CoE should prioritize early. This person leads the education motion across the organization, building awareness around the benefits and risks of AI and enabling teams to upskill continuously as the technology evolves. They’re responsible for designing role-based learning programs and for training the spokes on key delivery processes and enterprise guidelines.

Finally, we have the engineers. These are generalist AI engineers — data scientists and data engineers — who partner with the business delivery teams in the spokes. They help accelerate use case delivery by supporting data preparation, model development and deployment, especially for aspects that don’t require deep domain knowledge. They also contribute to the ongoing development of the data and AI marketplace, ensuring teams across the organization have access to curated, high-quality data products and vetted, reusable models.

Together, these roles don’t just support delivery, they enable it. They form the core of an AI CoE that’s designed not to control from the center, but to empower the edges.

From red tape to rocket fuel

Done right, an AI CoE is not a bottleneck. It’s a force multiplier. It’s what turns isolated wins into a flywheel. It’s the structure that enables you to go from building proofs of concept to building an AI-first enterprise. And if you want to drive AI maturity forward, if you want to scale, start with the center.

This article was originally published on CIO.com by Michael Bertha, Metis Strategy Partner.

On his second day as Textron’s global CIO, Todd Kackley found himself in the spotlight. During his first executive staff meeting, CEO Scott Donnelly turned to him and asked, “What are we going to do about generative AI?” There was no room for hesitation. Kackley, a longtime Textron executive who had most recently served as divisional CIO, leaned into the question in a way that would shape the company’s next major technology breakthrough: “Let me demonstrate the value,” he said.

Three months later, he returned to the same room with results that would convince even the most skeptical of leaders. But the story of how Textron, a $13.7 billion industrial conglomerate known for brands like Cessna, Beechcraft, and Bell, accelerated its use of gen AI goes far deeper, and reveals critical lessons for technology leaders everywhere.

A leap, not a request

Kackley didn’t begin with a resource ask. “I had no budget, no tech, and no team for this,” he recalls. “But I had trust. I had a team that had learned how to innovate quickly and take risks.” That trust was hard-earned. Just a few years earlier, he led through personal adversity, including a cancer diagnosis, and discovered the power of vulnerability and transparency. It was a leadership approach that unlocked new levels of trust and creativity within his teams.

His first step as global CIO wasn’t to spin up committees or request compute capacity, but to find a meaningful problem and solve it. Partnering with Textron Aviation’s aircraft business unit, his team set out to answer a single question: Can gen AI help junior aircraft mechanics bridge the knowledge gap with veteran technicians?

Given the impending retirements of senior aircraft mechanics with years of specialized knowledge, and the high cost of aircraft downtime, the answer had implications far beyond experimentation.

The team focused on developing a proprietary gen AI solution to augment the work of mechanics. The Textron Aviation Maintenance Intelligence, or TAMI, aggregated decades of maintenance data, repair logs, service manuals, and even Textron’s public YouTube tutorials to power the AI assistant. Built on a retrieval-augmented generation (RAG) model, the system let mechanics query it in natural language and receive precise, contextual answers, often with direct links to the exact frame in a how-to video.
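The retrieval-then-generate flow behind a system like TAMI can be sketched in a few lines. This is a toy illustration, not Textron's implementation: the sample corpus, the token-overlap retriever, and the generate() stub (standing in for a real LLM call grounded in retrieved context) are all assumptions for demonstration.

```python
# Minimal RAG sketch: retrieve the documents most relevant to a query,
# then pass them to a generator as grounding context.

def tokenize(text):
    """Lowercased whitespace tokens, as a set for overlap scoring."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by simple token overlap with the query (toy retriever;
    a production system would use vector embeddings and a similarity index)."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda d: len(q & tokenize(d["text"])), reverse=True)
    return ranked[:k]

def generate(query, context_docs):
    """Stand-in for an LLM call: compose an answer from the retrieved context."""
    sources = ", ".join(d["source"] for d in context_docs)
    return f"Based on {sources}: {context_docs[0]['text']}"

# Hypothetical maintenance corpus for illustration only.
corpus = [
    {"source": "service-manual-12", "text": "Replace the brake actuator seal after 600 flight hours"},
    {"source": "repair-log-88", "text": "Landing gear vibration traced to worn torque link bushings"},
    {"source": "video-tutorial-3", "text": "How to calibrate the avionics altimeter after installation"},
]

query = "What causes landing gear vibration?"
answer = generate(query, retrieve(query, corpus))
print(answer)
```

The key design point is that the model never answers from memory alone: every response is grounded in, and attributable to, specific retrieved documents, which is what makes answers auditable enough for safety-critical work like aircraft maintenance.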

The results were immediate. Senior mechanics were asked to throw their toughest questions at the system, which answered 19 out of 20 accurately, and came impressively close on the remaining one. “That was our moment,” says Kackley. By the time of a follow-up executive staff meeting, Ron Draper, the president and CEO of Textron Aviation, had already become an internal champion. As Kackley began his presentation, Draper chimed in and said, “This is a game changer. We need to scale it globally.”

The power of pay-as-you-go

Textron didn’t commit upfront to a massive investment. “I didn’t build an ROI model,” says Kackley. “We just ran a consumption-based, pay-as-you-go proof of concept.” That approach gave the company visibility into what worked before locking in compute or long-term licensing costs. Free from the need to evaluate an ambiguous funding request, senior leaders, including Donnelly, a licensed private pilot, personally tested early versions of TAMI. Donnelly’s involvement shaped its evolution and created a united front of support for the technology, sparking excitement and driving adoption across the organization.

By year-end, the solution was being rolled out to over 1,500 mechanics across global service centers, and early indicators suggest dramatic reductions in time spent searching for information and improvements in first-time fix rates. Mechanics also spend less time interpreting manuals and more time turning wrenches. As a result, aircraft return to the sky faster.

Scaling reusability and leading with intent

For Kackley, the project was never just about one use case. He quickly stood up a cross-functional council of high-potential business and IT leaders to ensure use cases could be replicated, not reinvented. A solution built for aircraft repair could easily be retooled for Textron’s industrial or defense businesses. Within weeks, a gen AI solution that was successful in one defense business was cloned for another.

“CIOs often get stuck trying to explain ROI or navigate policy hang-ups,” says Kackley. “We didn’t let policy stall innovation. We treated gen AI like any other tool, with appropriate use and evolving guardrails.”

That same ethos guides his leadership style: act quickly, fail fast, reuse where possible, and empower teams. “Sometimes you get a window of opportunity to show the art of the possible,” he says. “You have to take it, even if you don’t have the resources yet.”

This article was originally published on CIO.com by Michael Bertha, Partner at Metis Strategy

I’ve always found starting a transformation program to be a lot like starting a fire deep in the woods. You need the right kindling, a thoughtful structure, just enough airflow, and a stubborn streak of patience. You get one shot. A poorly placed twig, a damp corner of newspaper, or the wrong wind can send you straight back to square one. 

Transformation is similar. In my experience, getting buy-in and funding is actually the easy part. Selling the dream is energizing. Pitch decks and executive endorsements come quickly when the upside is clear. But to borrow from the old saying, execution eats strategy for breakfast. 

So, how do you ensure your spark turns into something sustainable? How do you start a transformation once, and start it right? 

Start by securing leadership skin in the game 

As the tech or digital chief, it’s often your job to establish the vision. You are the torchbearer. But carrying it alone won’t get you far. If the executive team isn’t aligned, the fire won’t light. And I don’t just mean funding. I mean real, consistent participation. 

It’s not enough for the C-suite to add their names to a steering committee slide. They have to show up. Literally. They have to be vocal champions of the work, carve out time alongside their “day jobs,” and help make the hard calls when resistance inevitably surfaces. And they have to treat this work as part of their job, not a favor to you. 

One Fortune 500 industrial client did this exceptionally well. Despite decades of success, they made a hard shift toward growing recurring revenue. Their entire executive team became transformation leaders: the CFO owned the business model redesign, the CMO ran customer experience, and an EVP from one of the business units led product evolution. These weren’t ceremonial roles; they were accountable for deliverables. They ran check-ins. They drove decisions. And they exceeded the revenue growth targets they promised the Street.  

Anticipate nonbelievers

Then there’s the rest of the organization, the teams responsible for executing the strategy once it’s underway. Even with the mandate, funding, and transformation function in place, you’ll inevitably encounter skeptics. A key leader—or more often, a small cluster of them—will quietly resist the effort. Sometimes it’s subtle: slow adoption of new processes, side conversations that question the direction, or just general disengagement. 

And it matters. Like a rogue gust of wind through a fragile fire, even a few internal skeptics can kill momentum before it builds. 

Some tech executives I’ve spoken to say that in organizations founded before the digital era, as much as 50% of the workforce — leaders included — may need to change over the course of the transformation, which could be several years, to truly reset the culture. That’s not an argument for blanket turnover. It’s a reminder of how disruptive transformation really is, and how much resistance is baked into the status quo.

Coauthor the vision

Transformation doesn’t usually begin with a lightning bolt of inspiration. More often, it’s the byproduct of dozens of distributed insights, ideas buried inside business cases, tucked into pilot initiatives, or championed quietly by teams working in parallel. The challenge isn’t a lack of ambition; it’s a lack of integration. 

That’s where cross-functional visioning becomes a powerful unlock. 

Take one Fortune 500 organization I worked with: They were doubling down on AI, with multiple teams launching bold, well-scoped initiatives. Each one promised value, but no one had connected the dots across efforts. The result? Leaders struggled to describe what the transformed enterprise might look like, let alone align the workforce behind it. 

A unified vision came to be when nearly 100 stakeholders convened for a two-day offsite. Using a shared journey map, they reimagined how customers and employees would experience the business once the AI-driven projects landed. What emerged wasn’t just a slide deck; it was a co-authored narrative, capturing the collective intent of the organization. 

That vision became more than an artifact. It became a catalyst for buy-in. And the organization gained a north star that accelerated execution.

Over-invest in enrollment

When you’re leading the transformation, you live and breathe the strategy long before the rest of the company catches wind of it. You’ve socialized it with peers, refined it with consultants, and reviewed it through the budgeting process. It’s easy to assume everyone else understands it too. They don’t.

That’s why, after strategy and budget are locked, the real work begins. Go on a roadshow. Segment your audiences — by initiative, function, business unit, whatever makes sense — and engage them in smaller groups. Create a common pitch deck that starts with the “why,” clearly outlines what’s in it for them, and defines what success will require. Rinse and repeat. Send newsletters. Run surveys. Share progress updates. 

A Fortune 500 energy client did this particularly well during an operating model transformation. They rolled out the new model in phases, by cohort. Every cohort began with a two-day, in-person training that connected the dots between enterprise strategy, their role, and the new way of working. It featured industry case studies, tactical role-specific training, and a clear explanation of what was changing and why. 

It worked because it honored people’s time and perspective. Most of the folks you need to execute the transformation already have full-time jobs. Their mindshare is limited. And if you don’t give them the tools and context to understand what’s happening and why, it will take far longer than you think to get traction. 

Dedicate resources or risk running in place 

Last, but maybe most important: transformations require full-time attention. Someone needs to own the work. When everyone is in charge, no one is. Structural redesign, process mapping, change management — these aren’t things that happen in the margins. 

I once heard a chief digital officer describe transformation as “not a part-time job.” I couldn’t agree more. 

Doing the “missing middle” well, the work that turns vision into reality, often means hiring for it. A technology client staffed six full-time roles solely focused on transformation. They built the experience and technical architecture, facilitated process design, and drove change management. These weren’t temporary assignments. These were permanent roles, dedicated to helping the fire catch. Cross-functional transformation requires structure, continuity, and ownership. Give it what it needs.

The next time you’re building a fire… 

Remember this: the spark alone isn’t enough. Whether you’re lighting a campfire or launching a transformation, what matters most is what comes next. 

Are your materials dry? Is the wind at your back or in your face? Do you have people tending to it while you step away?

Transformation requires the same care. Plan carefully. Surround yourself with the right people. Get others to buy in, not just sign off. And when you feel the heat rising, lean in. Because once the fire catches, it’s a thing to behold.

This article was originally published on CIO.com by Michael Bertha, Metis Strategy Partner.

Shifting to a product model isn’t for the faint of heart. It demands more than just a new way of organizing teams — it’s a full-scale overhaul of how technology functions, collaborates and delivers. Most organizations embarking on the journey must rethink their operating models, redefine roles and responsibilities, retrain legacy talent and rewire governance and funding models. It’s the kind of transformation that’s easy to support in theory, but harder to champion when the enterprise is facing mounting cost pressures. 

Whether driven by macroeconomic factors like tariffs or internal mandates to centralize and streamline, the push for efficiency often forces technology leaders to scrutinize every dollar. That tension presents a paradox: The product model promises better business alignment, faster innovation and greater agility, but requires a front-loaded investment to get there. 

So, how do you make the case? 

Acknowledge the paradox up front 

The product model follows a familiar pattern: an early period of investment and change fatigue followed by long-term, compounding returns. In other words, it’s a J-curve. And if that concept sounds uninviting to an already cash-strapped executive team, that’s because it is — until you reframe the conversation. 

Instead of leading with costs, position the transition as a strategic investment. One global technology client did just that by demonstrating how hiring net-new product owners — and training existing project managers to become scrum masters — would unlock productivity in the long term. The cost of inaction, they argued, was greater: technical debt, missed opportunities, slower time-to-value. 

When product teams are aligned to business capabilities, cross-functional by design and measured by business outcomes — not milestones or ticket volume — you don’t just get a new operating model. You get a new level of accountability. That’s a trade-off worth making. 

Consider a staged adoption strategy 

For many organizations, the key to managing investment risk lies in piloting product teams in a controlled, high-impact environment. 

An automotive services company did just that. They launched product teams focused on the end-to-end customer journey — from scheduling appointments to post-service feedback — and gave them KPIs rooted in real business impact: CSAT scores, digital appointment adoption and in-store throughput. After proving the model, they used those wins to build momentum and scale to other parts of the business. The product approach wasn’t sold. It was shown. 

Translate role changes into value stories 

The shift to product often introduces roles that look like cost centers on paper — product owners, scrum masters, UX researchers. But these roles aren’t overhead; they’re levers. 

A product owner doesn’t just write user stories. They prioritize work that drives business outcomes and cut initiatives that don’t. Scrum masters don’t just facilitate standups. They streamline delivery and eliminate coordination drag. These roles help reduce cycle times, increase focus and create space for continuous discovery — not just delivery. 

To limit the growth in headcount, some companies start by mapping legacy roles into the new model. One global technology client recognized that their business analysts already had 60% of the skills needed to succeed as product owners. They invested in targeted training to close the gap and transitioned them into the new role, accelerating adoption while keeping costs in check. 

Want to reduce rework and improve velocity? Empower your teams. 

The product model is built on durable, cross-functional teams that are funded perpetually — not project by project — and have autonomy to decide what to work on next. That autonomy fuels efficiency. These teams don’t need to wait for steering committees or rejustify their existence at the end of every fiscal cycle. They’re incentivized to fix root causes, automate redundant processes and invest in technical excellence because they know they’ll be around to reap the benefits. 

Fewer handoffs, clearer accountability and tighter feedback loops all translate to faster time-to-value. In cost-conscious environments, that’s not just a win. It’s a mandate. 

Align the model to real business challenges 

Every transformation story lands better when told in the language of the business. 

If you’re in upstream oil and gas, highlight the operational cost of downtime and the premium on speed. In that context, product teams aren’t a structural change — they’re a competitive advantage. They reduce handoffs, resolve issues faster and get solutions in the field when they matter most. 

One SaaS company embraced product thinking to improve internal IT service. A cross-functional product team launched a suite of self-service AI tools that deflected 43% of help desk tickets, saving millions annually. They tracked and published those savings on a dashboard each month, using it to validate the model and expand adoption. The initial investment — training, tooling and role changes — was quickly eclipsed by the returns. 

Another client, a global manufacturer, built a product team around pricing. By rolling out real-time pricing that charged premiums for peak delivery windows, they boosted revenue per order by hundreds of basis points. Because the team was cross-functional and empowered to move quickly, they delivered the innovation in half the time it would have taken under a traditional project model. 

Your messaging shapes the outcome 

At the end of the day, every product transformation carries a window of investment before it delivers a return. But how you communicate that investment — how you tell the story — will shape whether stakeholders see it as cost or value. 

Lean into the paradox. Use business language. Showcase the wins. And most importantly, make the investments necessary to get the lift you’re after. The irony is that if you pursue a product model without resourcing it properly, you likely won’t get the results that were supposed to justify the shift in the first place. 

Like any good leader will tell you, the best way to earn trust is to deliver value. The product model is no exception.