
The New Intelligence Layer Transforming Global Business




How generative AI and agentic systems are crystallizing into a new enterprise “intelligence layer” that rewires productivity, operating models and competitive advantage.

Why Generative AI Is Emerging as a New Intelligence Layer

In just a few years, generative AI has moved from experimental pilots to the center of enterprise architecture. Large language models (LLMs), multimodal models and agentic systems are coalescing into what many executives now describe as a new “intelligence layer” that sits between data, applications and people, continuously interpreting information, orchestrating workflows and proposing decisions.

Industrial technology firms such as Kongsberg Digital talk about this in very concrete terms: an “intelligence layer” that fuses digital twins, real-time sensor data and agentic AI to optimize heavy‑asset operations in chemicals, energy and maritime environments. Similar patterns are appearing in finance, healthcare, retail, professional services and the public sector, even if the language differs from industry to industry.

Several powerful macro-trends are converging:

  • Near-universal AI adoption. Recent global surveys from McKinsey & Company show that almost all large organizations now report using AI in at least one business function, and a rapidly growing share are experimenting with AI agents embedded in processes, not just standalone chatbots.
  • A decisive shift to multi‑model strategies. Research from Andreessen Horowitz (a16z) and venture firm Menlo Ventures finds that more than a third of enterprises already run five or more different foundation models in production, mixing providers for cost, capability and risk diversification rather than trying to standardize on a single model family.
  • Explosive model spend. Menlo’s 2025 mid‑year LLM market update estimates enterprise LLM API spend at roughly $8.4 billion, up from about $3.5 billion a year earlier, with spend shifting dynamically between Anthropic, OpenAI, Google and others as enterprises rebalance performance and price.
  • Measured but real productivity gains. An On the Economy analysis from the Federal Reserve Bank of St. Louis suggests that more than a third of workers are already using generative AI on the job, with early data indicating measurable time savings and incremental productivity growth, even before full-scale transformation kicks in.
  • Transformational long‑term potential. A widely cited analysis by Goldman Sachs estimates that generative AI could ultimately raise global GDP by around 7% and boost annual productivity growth by 1.5 percentage points over a decade if adoption and diffusion trends continue.

The common theme: AI is no longer a sidecar application. It is becoming a horizontal capability that plugs into every workflow, every dataset and every customer or employee touchpoint. That is what we mean by the “intelligence layer”:

Working definition

The enterprise intelligence layer is the set of models, orchestration tools, guardrails and embedded agents that continuously interpret enterprise data and context, generate and evaluate options, and drive or support actions across business processes, channels and systems.

For boards and CEOs, the strategic question is no longer whether to “try generative AI” but how quickly to build this intelligence layer into the fabric of the business — ahead of competitors that are doing the same.

From Models to Platforms: The Generative AI Stack Behind the Intelligence Layer

Under the hood, the intelligence layer is not a single product. It is a stack that starts at infrastructure and stretches up into line‑of‑business experiences. At a high level, we can think in four layers:

  • 1. Data & Infrastructure. Stores and secures enterprise data; provides the compute, networking and tooling to train, host and scale AI models, including vector search, observability and MLOps. Typical players: cloud and infrastructure providers such as Amazon Web Services (AWS), Microsoft and Google, hybrid players like IBM, and data governance and catalog platforms such as Collibra.
  • 2. Model & Foundation Layer. General‑purpose and domain‑specific LLMs, multimodal models and specialized agents that perform reasoning, generation, search and planning. Typical players: frontier model providers (OpenAI, Anthropic, Google, Meta, Microsoft, IBM, AWS) and enterprise‑centric players such as Cohere and AI21 Labs.
  • 3. Orchestration, Safety & Governance. Routes traffic to the right model, enforces security and data‑residency policies, adds guardrails, monitors quality and connects AI to tools, APIs and workflows. Typical players: enterprise AI platforms from the major clouds and from SaaS leaders like Salesforce (Einstein) and Adobe (Firefly), along with observability and governance tools from firms such as Collibra.
  • 4. Application & Workflow. Visible experiences for customers and employees: copilots in productivity suites, agentic workflows in CRM and ERP, AI‑augmented terminals and domain‑specific assistants. Examples: Microsoft 365 Copilot, developer tools such as GitHub Copilot, creative applications from Adobe, AI‑infused CRM from Salesforce, and specialized stacks like Bloomberg’s BloombergGPT inside the Bloomberg Terminal.

On top of this layered stack, a new generation of agentic systems is emerging. Rather than simply responding to prompts, these systems plan, decompose work into tasks, call tools, interact with APIs and collaborate with people to achieve business outcomes. Research from Boston Consulting Group (BCG) describes how “AI agents” are becoming virtual team members that can run complex workflows, not just answer questions.

In practical terms, this means:

  • The intelligence layer is multi‑provider: a given application may call models from OpenAI, Anthropic and Google in the same workflow.
  • It is data‑centric: value depends less on raw model capability than on how well the system can access, govern and reason over proprietary data.
  • It is embedded: the most powerful use cases are not separate chatbots but AI woven into tools knowledge workers already use — from CRM to IDEs to clinical documentation systems.
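To make the multi‑provider point concrete, the routing logic at the heart of such an intelligence layer can be sketched in a few lines. This is a minimal illustration only: the provider names, model identifiers, capability tags and prices below are entirely hypothetical, and a real router would also weigh latency, context length and data‑residency rules.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelOption:
    provider: str            # hypothetical provider names
    name: str                # hypothetical model identifiers
    capabilities: frozenset  # e.g. {"chat", "code"}
    cost_per_mtok: float     # illustrative price per million tokens
    private_endpoint: bool   # runs inside our own security boundary?

# An invented catalog standing in for the real, changing provider mix.
CATALOG = [
    ModelOption("provider-a", "frontier-xl", frozenset({"chat", "code"}), 15.0, False),
    ModelOption("provider-b", "workhorse", frozenset({"chat", "code"}), 3.0, False),
    ModelOption("in-house", "private-7b", frozenset({"chat"}), 1.0, True),
]

def route(capability: str, sensitive: bool) -> ModelOption:
    """Return the cheapest catalog model that offers the capability and,
    for sensitive data, runs on a private endpoint."""
    candidates = [
        m for m in CATALOG
        if capability in m.capabilities and (not sensitive or m.private_endpoint)
    ]
    if not candidates:
        raise LookupError(f"no model for {capability!r} (sensitive={sensitive})")
    return min(candidates, key=lambda m: m.cost_per_mtok)
```

The design choice worth noting is that applications call `route()`, never a provider SDK directly, so the catalog can be rebalanced as prices and capabilities shift without touching application code.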

Adoption, Investment and the Early Productivity Uplift

AI is everywhere, but value is unevenly distributed

The headline story for 2025 is that AI adoption has gone mainstream, but the distribution of value remains highly skewed. McKinsey’s latest global AI survey finds that almost all respondents say their organizations use AI, yet only a minority have scaled it across the enterprise or embedded it deeply into core processes.

A complementary view from the St. Louis Fed looks at adoption from the worker perspective: a majority of U.S. adults have tried generative AI at least once, and a substantial share now use it weekly for work tasks — but usage is concentrated in knowledge‑intensive roles and higher‑wage sectors.

Enterprise LLM spend and the multi‑model reality

On the investment side, Menlo Ventures’ 2025 mid‑year update estimates that enterprise LLM API spend more than doubled in about a year, reaching roughly $8.4 billion. The market has also shifted from a single‑vendor mindset to a multi‑model world:

  • a16z’s survey of 100 enterprise CIOs reports that around 37% of enterprises now use five or more models in production;
  • Menlo Ventures sees Anthropic gaining enterprise share even as OpenAI and Google remain core providers;
  • other analyses suggest that only about 13% of AI workloads currently run on open‑source models, even though frameworks like Meta’s Llama are widely used in experimentation.

For CIOs and CTOs, this means the intelligence layer must be architected from the start to abstract away individual models and optimize across a changing mix of providers.

What do we know about productivity effects?

While long‑term macro effects will take years to fully measure, the micro‑level productivity story is increasingly clear. Several rigorous studies and real‑world deployments point to substantial gains:

  • Software engineering. Controlled experiments with GitHub Copilot and related tools suggest that developers complete certain coding tasks roughly 50–60% faster, with equal or better code quality, when assisted by generative AI.
  • Knowledge retrieval. Morgan Stanley reports that its internal AI assistant has boosted research and policy document retrieval efficiency for financial advisors from about 20% to 80%, with 98% of advisor teams adopting the tool, and its AskResearchGPT system now surfaces insights from tens of thousands of research reports in seconds.
  • Meeting documentation. AI meeting assistants such as Morgan Stanley’s Debrief and ambient AI scribes at Stanford Medicine and Stanford Health Care, built on technologies from Nuance, are turning raw conversations into structured notes, follow‑ups and CRM entries, significantly reducing administrative burden.
  • Sales and marketing. Major platforms such as Salesforce (Einstein), Adobe (Firefly) and others are seeing broad adoption of AI‑generated content and personalization, especially in outbound campaigns and creative production.

Synthesizing across these studies, a reasonable working assumption for many knowledge workflows is: AI copilots and agents can often free up 20–40% of time on well‑structured tasks, with the biggest upside where work is highly repetitive, text‑heavy and rules‑constrained.

Key implication for leaders

The productivity opportunity is real, but it is not automatic. Organizations that simply “give people a chatbot” will capture only a fraction of the value. The leaders are systematically rewriting workflows around AI, embedding the intelligence layer into the way work actually happens.

How the Intelligence Layer Shows Up Across Industries

Although every sector has its own regulatory and operational realities, the intelligence layer tends to manifest in a consistent set of patterns. Below we highlight a few emblematic industry examples.

Financial Services and Capital Markets

Leading financial institutions are using generative AI first and foremost as an internal intelligence fabric. Consumer‑facing chatbots grab headlines, but the real transformation is in research, advisory and operations.

  • Research and advisory copilots. Morgan Stanley’s AskResearchGPT and AI @ Morgan Stanley Assistant combine internal research, market data and proprietary content in AI‑powered assistants that help advisors respond to client questions and synthesize insights. Debrief extends this intelligence layer into meetings, automatically drafting notes and follow‑up actions.
  • Domain‑specific models. Bloomberg’s BloombergGPT is a 50‑billion‑parameter model trained on financial data and integrated into the Bloomberg Terminal, improving search, question answering and analysis for market professionals.
  • Risk and compliance. Major banks including JPMorgan Chase and others are experimenting with gen‑AI‑driven tools for KYC, anti‑money‑laundering and policy interpretation, often guided by frameworks from firms such as BCG and McKinsey & Company.

The underlying pattern: a secure, governed intelligence layer that knows the firm’s products, policies and research deeply, and that exposes that knowledge through assistants, agents and API calls into existing systems.

Healthcare and Life Sciences

In healthcare, the intelligence layer is emerging at the intersection of clinical documentation, patient communication, imaging and research:

  • Ambient clinical documentation. Stanford Medicine, Stanford Health Care and systems such as Mayo Clinic are piloting ambient listening solutions, including Nuance’s DAX Copilot, that turn doctor–patient conversations into structured visit notes and action lists.
  • Patient communication. Health systems are starting to use gen AI to draft responses to patient portal messages, summarize lab results and translate clinical language into patient‑friendly explanations, always with human review in the loop.
  • Research acceleration. Life‑science organizations are using gen AI to read literature, propose hypotheses, generate code for bioinformatics pipelines and help design experiments, often on top of domain‑specific models and tools.

Here, the intelligence layer augments scarce clinician time, reduces documentation load and helps patients navigate complex information — but only where governance, privacy and safety are robust.

Manufacturing, Energy and Industrial Operations

Asset‑intensive industries are building intelligence layers tightly coupled to the physical world. For example, Kongsberg Digital describes an “AI‑powered industrial work surface” that brings together digital twins, real‑time operations data and agentic AI to support operators in live plants, offshore assets and energy systems.

Typical use cases include:

  • real‑time anomaly detection and root cause analysis across thousands of sensors;
  • AI‑assisted procedure guidance for complex maintenance and safety workflows;
  • optimization of production parameters under constraints such as emissions, energy prices and feedstock quality.
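The first of these use cases can be illustrated with a deliberately minimal sketch: a rolling z‑score detector over a single sensor stream. Real industrial deployments use far richer multivariate and model‑based methods; the window size and threshold here are invented defaults.

```python
import statistics

def flag_anomalies(readings, window=20, z_thresh=3.0):
    """Flag readings whose rolling z-score exceeds a threshold.
    A toy stand-in for the real-time anomaly detection described above."""
    flags = []
    for i, x in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 5:
            # Not enough history yet to judge; never flag.
            flags.append(False)
            continue
        mu = statistics.fmean(history)
        sigma = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        flags.append(abs(x - mu) / sigma > z_thresh)
    return flags
```

Across thousands of sensors, the same pattern is run per stream and the flags feed a root‑cause agent rather than a human dashboard alone.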

Over time, these capabilities evolve into industrial intelligence layers that sit above SCADA, MES and other control systems, guiding human decision‑makers while respecting hard safety boundaries.

Retail, Consumer and Customer Experience

In retail and consumer services, the intelligence layer is increasingly about hyper‑personalization at scale: the ability to tailor offers, experiences and content in real time based on intent and context.

  • Personalized marketing and offers. WNS has documented a generative‑AI‑led personalization program in which a retail chain used AI to generate individualized email content and journeys, resulting in roughly a fourfold increase in engagement versus prior campaigns.
  • Conversational commerce. Retailers are deploying AI shopping assistants that help customers discover products, compare options and transact across channels, often powered by models from OpenAI, Anthropic and Google.
  • Dynamic merchandising and pricing. AI agents continuously analyze demand, inventory and competitor signals to recommend price moves, promotions and assortment adjustments.

Platforms from Salesforce, Adobe and others increasingly provide these capabilities as part of their core suites, turning the intelligence layer into a shared asset across marketing, service and commerce teams.

Legal, Consulting and Other Professional Services

Knowledge‑intensive professional services are particularly ripe for an intelligence layer, given their dependence on unstructured text, precedents and institutional knowledge.

  • Legal workbenches. Global law firm A&O Shearman (formed from the merger of Allen & Overy and Shearman & Sterling) has been an early adopter of AI legal assistants, such as Harvey, to support research and drafting — always under human supervision and within strict confidentiality rules.
  • Consulting delivery platforms. Firms such as Boston Consulting Group (BCG) and McKinsey & Company are both advising clients on AI and embedding AI into their own delivery: drafting analyses, synthesizing interviews and building agentic workflows for tasks like scenario modeling and stakeholder mapping.
  • Internal knowledge assistants. Many firms run AI assistants on top of their knowledge bases, proposals and project archives, dramatically reducing the time needed to find relevant prior work and examples.

As AI becomes integral to these professions, the intelligence layer also becomes a talent platform: the place where new hires learn “how work is done here” by interacting with codified expertise and best practices.

What Actually Changes in the Workday: From Tasks to AI‑Centric Workflows

The intelligence layer is not only a technology story; it is a workflow story. The biggest gains come when enterprises systematically redesign how work happens around AI, rather than sprinkling copilots onto legacy processes.

Search becomes “ask and act”

Traditional enterprise search — typing keywords into a portal — is being replaced by conversational “ask and act” experiences: employees pose questions in natural language, AI retrieves and synthesizes relevant information, and in many cases proposes the next action (drafting an email, configuring a report, triggering a workflow).

For example, an advisor at Morgan Stanley can ask an internal assistant, “What are the key implications of last week’s Fed meeting for high‑net‑worth clients with concentrated tech positions?” and receive a synthesized answer grounded in the firm’s own research, with links to underlying reports and suggested follow‑up materials.
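In miniature, an "ask and act" loop has three stages: retrieve relevant material, synthesize an answer, and propose a next action. The sketch below uses naive keyword matching over an invented two‑document store; production systems use embedding‑based retrieval and an LLM for synthesis, but the control flow is the same.

```python
# Invented document store standing in for a firm's research corpus.
DOCS = {
    "fed-2025-06": "Fed held rates; guidance points to cuts later in the year.",
    "tech-risk":   "Concentrated tech positions face elevated volatility.",
}

def retrieve(question: str) -> list:
    """Toy keyword retrieval: return IDs of docs sharing a word with the question."""
    terms = set(question.lower().split())
    return [doc_id for doc_id, text in DOCS.items()
            if terms & set(text.lower().split())]

def ask_and_act(question: str) -> dict:
    hits = retrieve(question)
    answer = " ".join(DOCS[h] for h in hits)
    # "Act" step: propose a next action, not just an answer.
    action = "draft client email" if hits else "route to human researcher"
    return {"answer": answer, "sources": hits, "proposed_action": action}
```

The grounding step matters as much as the generation: the returned `sources` are what let the advisor verify the synthesized answer against underlying reports.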

Writing and analysis shift from “blank page” to “review and refine”

Across functions, AI increasingly does the first 60–80% of the work on a piece of content or analysis:

  • drafting proposals, RFP responses and memos based on templates and prior examples;
  • summarizing complex policy changes or regulatory texts for different audiences;
  • producing first‑pass variance analyses, commentary and visualizations from financial data.

Human experts then review, critique and refine. The core skill shifts from writing everything from scratch to specifying intent, critiquing AI output and steering iterations.

Software development becomes AI‑pair‑programming by default

Developer tools such as GitHub Copilot and others have already changed the daily experience of software engineers. Experiments and field data show that AI‑assisted developers often complete certain tasks dramatically faster, with many organizations now making AI tools a default part of the toolchain.

Emerging patterns include:

  • AI generating boilerplate and repetitive code, allowing engineers to focus on architecture and edge cases;
  • AI reviewing code for potential bugs, security issues and style violations before human review;
  • multi‑agent setups in tools like GitHub’s agent platforms, where different agents propose designs, write code and run tests collaboratively.

Meetings, CRM and documentation are increasingly AI‑native

AI‑enabled note‑taking and summarization are rapidly becoming standard:

  • In wealth management, tools such as Morgan Stanley’s Debrief transform client conversations into structured notes, action items and CRM updates, significantly reducing manual effort.
  • In healthcare, ambient AI from Nuance and others listens during clinical encounters, generating draft notes that clinicians review and approve.
  • In sales and service, AI summarizes calls, highlights key moments and automatically updates opportunity records and cases in CRM systems from providers like Salesforce.

Over time, this is likely to change how meetings are run: agendas and materials can be generated and tailored by AI, real‑time insights can surface during discussions, and follow‑ups can be tracked by agentic workflows rather than manual reminders.

New Operating Models Built on an AI‑First Core

As the intelligence layer matures, leading organizations are not just automating tasks; they are rethinking operating models — how work is organized, governed and measured.

From tools to AI‑enhanced “virtual teams”

BCG’s work on agentic AI and AI at work emphasizes a shift from isolated copilots to multi‑agent systems that behave like virtual team members. In this model, a process such as onboarding a new enterprise customer might involve:

  • a “document agent” that validates and extracts data from contracts and KYC documents;
  • a “risk agent” that checks exposures and compliance rules;
  • a “communication agent” that drafts customer‑facing emails and internal updates;
  • a human owner who supervises, handles exceptions and makes judgment calls.

Workflows move from linear, human‑only sequences to hybrid human–AI swarms that can operate in parallel, with humans moving more into orchestration and decision roles.
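A minimal sketch of such a hybrid workflow, with the agent roles from the onboarding example reduced to plain functions; the exposure threshold and message format are invented for illustration, and real agents would of course call models and tools rather than hard‑coded rules.

```python
def document_agent(contract: dict) -> dict:
    """Extract available fields and flag anything missing for review."""
    profile = {k: v for k, v in contract.items() if v is not None}
    profile["needs_review"] = any(v is None for v in contract.values())
    return profile

def risk_agent(profile: dict) -> str:
    """Escalate above an (invented) exposure threshold, else clear."""
    return "escalate" if profile.get("exposure", 0) > 1_000_000 else "clear"

def communication_agent(profile: dict, decision: str) -> str:
    return f"Draft email: onboarding {profile['name']} - status {decision}"

def onboard(contract: dict) -> str:
    profile = document_agent(contract)
    decision = risk_agent(profile)
    if profile["needs_review"] or decision == "escalate":
        # The human owner handles exceptions and judgment calls.
        decision = "pending human review"
    return communication_agent(profile, decision)
```

Note where the human sits: every path that is incomplete or escalated routes to a person, while the clean path runs straight through, which is the supervision pattern the agentic‑AI research describes.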

Decision intelligence as a shared capability

Many enterprises are building centralized or federated “decision intelligence” functions that own the intelligence layer:

  • curating and governing data used by AI systems;
  • defining standard patterns for building and evaluating agents;
  • setting thresholds for automation versus recommendation;
  • ensuring that AI‑driven decisions align with risk appetite and regulatory constraints.

These groups often sit at the intersection of data, engineering, risk/compliance and business leadership, and they work closely with external partners: consultancies such as BCG and McKinsey & Company, World Economic Forum initiatives, and the enterprise AI platforms of the major model providers.

Human–AI partnership as the default org design

Over the next several years, most roles that rely heavily on information processing, analysis and communication will likely be redesigned around explicit human–AI partnership models. For example:

  • Analysts spend more time framing questions, interpreting scenarios and challenging assumptions, while AI handles most data munging, summarization and first‑pass modeling.
  • Customer‑facing professionals such as advisors, relationship managers and clinicians spend more time with clients or patients, while AI prepares materials, handles documentation and proposes recommendations grounded in policies and best practices.
  • Managers and leaders use AI to simulate options, stress‑test strategies and monitor leading indicators, freeing more time for coaching, stakeholder management and innovation.

Organizations that treat AI purely as an automation tool risk hollowing out skills and morale. Those that treat it as a capability amplifier — and redesign roles, incentives and training accordingly — will turn the intelligence layer into a durable competitive advantage.

Risk, Governance and Responsible Intelligence Layers

As the intelligence layer becomes more powerful and more deeply embedded, the risk landscape changes. Early adopters are converging on a set of governance themes that every enterprise needs to address.

Model and decision risk

Generative and agentic systems can hallucinate, propagate bias or optimize the wrong objective. Enterprises are building model risk frameworks that cover:

  • use‑case classification (from low‑risk internal summarization to high‑risk automated decisions);
  • evaluation regimes (accuracy, robustness, safety, fairness, consistency over time);
  • requirements for human review and overrides;
  • clear ownership for model behavior and business outcomes.
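Such a framework becomes operational once it is expressed as an explicit policy table that tooling can enforce. The use‑case names, tiers and controls below are invented for illustration; the pattern to note is that unclassified use cases default to the strictest tier rather than slipping through ungoverned.

```python
# Illustrative use-case classification (names are hypothetical).
RISK_TIERS = {
    "internal_summarization": "low",
    "customer_drafting": "medium",
    "automated_decision": "high",
}

# Controls attached to each tier; fields are illustrative.
POLICY = {
    "low":    {"human_review": False, "eval_before_deploy": True},
    "medium": {"human_review": True,  "eval_before_deploy": True},
    "high":   {"human_review": True,  "eval_before_deploy": True,
               "override_channel": True},
}

def controls_for(use_case: str) -> dict:
    """Look up the controls for a use case; unknown cases get the strictest tier."""
    tier = RISK_TIERS.get(use_case, "high")
    return {"tier": tier, **POLICY[tier]}
```

Ownership then maps cleanly: whoever adds a row to `RISK_TIERS` is accountable for that classification, which mirrors the "clear ownership" requirement above.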

Research from BCG on making AI agents safe emphasizes the importance of guardrails, monitoring and layered controls as agents gain access to tools and transactional systems.

Data security, privacy and compliance

Because the intelligence layer must see data to reason over it, data security and privacy are foundational. Key practices include:

  • restricting sensitive data to private, controlled environments and private model endpoints;
  • implementing robust data minimization, anonymization and access control policies;
  • tracking model usage and prompts for audit and forensic analysis;
  • aligning deployments with emerging regulations in data protection, AI transparency and sector‑specific rules.

Governance is lagging adoption

A recent survey by Collibra, as reported by the Ohio Society of CPAs, highlights a growing gap: enterprises are rapidly rolling out AI pilots and production use cases, but fewer than half report having mature, formal AI governance frameworks fully in place. Many are still drafting policies around model selection, data residency, human‑in‑the‑loop requirements and monitoring.

Closing this gap requires joint ownership between technology, risk, legal, HR and business leaders, underpinned by education and change management at all levels.

Where the Intelligence Layer Goes Next

Looking ahead over the next three to five years, several trajectories for the enterprise intelligence layer are already visible.

From copilots to orchestrating agents

Today’s copilots mostly draft content and answer questions. Tomorrow’s agents will plan and execute multi‑step workflows: opening tickets, updating records, triggering processes and coordinating with other agents — under human‑defined goals and constraints.

Research and commentary from BCG, World Economic Forum initiatives and others suggest that this agentic shift could unlock a new wave of efficiency and innovation — provided that governance, testing and safety keep pace.

Vertical and domain‑specific intelligence layers

We should expect to see increasingly specialized intelligence layers tuned for particular sectors:

  • Financial intelligence layers that combine market data, research and risk models (as we already see from institutions like Morgan Stanley and Bloomberg).
  • Industrial intelligence layers for asset‑heavy operations, such as those being developed by Kongsberg Digital.
  • Healthcare intelligence layers built around clinical records, guidelines and outcomes data at organizations like Stanford Health Care and Mayo Clinic.

Over time, these sector‑specific layers will connect into broader ecosystems, with shared standards for data, safety and interoperability championed by players such as the World Economic Forum.

AI‑native organizations and economic impact

If projections from Goldman Sachs and other macro analysts prove directionally accurate, the intelligence layer will be a major contributor to global productivity growth over the coming decade. But impact will vary widely by organization.

AI‑native organizations — those that design products, processes and cultures around human–AI collaboration from the ground up — are likely to pull away from peers, much as digital‑native companies outperformed during earlier waves of internet and cloud transformation.

Strategic Imperatives for Business Leaders

For boards, CEOs and senior executives, the rise of the intelligence layer is not a narrow IT upgrade. It is a new operating system for the enterprise. To harness it, leadership teams should focus on a concrete, action‑oriented agenda.

Executive checklist: Building your intelligence layer

  • 1. Articulate a clear intelligence‑layer vision. Define how AI will support your strategy over the next 3–5 years. What decisions should be significantly better? Which workflows should be 30–50% faster? How will customer and employee experiences change?
  • 2. Invest in data foundations and governance. Prioritize data quality, integration and access controls. Without trusted, well‑governed data, the intelligence layer cannot deliver reliable value.
  • 3. Design for a multi‑model future. Avoid hard lock‑in to a single provider. Build orchestration capabilities that can route work to the right model based on task, sensitivity and price–performance.
  • 4. Focus on workflows, not just tools. Select a handful of high‑value workflows in each function, and redesign them around AI from first principles. Measure time saved, error reduction, revenue impact and employee satisfaction.
  • 5. Build disciplined evaluation and risk management. Treat AI systems like other critical infrastructure: test rigorously, monitor continuously and define clear thresholds for full automation versus human‑in‑the‑loop.
  • 6. Invest in people and change management. Train leaders and frontline teams to work effectively with AI, redesign roles and career paths, and ensure that the intelligence layer is perceived as an amplifier of human capability, not just a cost‑cutting tool.
  • 7. Engage externally. Stay close to evolving best practices from organizations such as the World Economic Forum, leading consultancies, venture firms like Andreessen Horowitz and Menlo Ventures, and the major model providers.
  • 8. Make AI part of the culture. Follow the lead of companies such as Microsoft and others that are explicitly encouraging employees to use AI as a core skill, not an optional add‑on.

The enterprises that thrive in this new era will be those that treat the intelligence layer as strategic infrastructure: designed deliberately, governed carefully and embedded deeply into how their businesses create value. For leaders, the moment to move from experimentation to systematic transformation is now.
