"What's our intelligence layer strategy?"

This question almost never gets asked. Instead, business leaders tend to ask “What’s our AI strategy?” The difference between those two questions matters. The AI-strategy question leads to haphazardly bolting AI onto existing systems; the intelligence-layer question leads to an organization intentionally architected to leverage AI for enterprise competitiveness. An intelligence layer unlocks your organization’s past value drivers (the data systems, documents, templates, and work processes) in an AI-first era. But the term is either unknown or defined in too many conflicting ways.

I'm writing intelligence-layer.com because I think the term is too useful to give up on. Underneath the marketing churn there's something real, and I believe it's going to be the most important architectural and organizational decision in the enterprise data stack for the next decade. So this essay is an attempt at a working definition: clear enough to be useful, narrow enough to be defensible, and grounded in what I'm actually seeing data leaders build inside established businesses right now.

This isn't for AI-native startups. They don't need a working definition; they're building from scratch. This is for the data leaders whose companies still run on a stack of SaaS products, document repositories, internal applications, and BI tooling, and who are now trying to figure out where, in all of that, the new layer goes.

What it isn't

Let’s start with what an intelligence layer isn’t.

The intelligence layer isn't "AI features in your existing tools." When Tableau, Power BI, or Looker bolt a chat interface onto an existing dashboard product, vendors sometimes brand it as their AI or intelligence layer. It isn't. That's a feature. The intelligence layer is something a dashboard product consumes, not something it provides.

It isn't synonymous with the LLM. Calling Claude or GPT "the intelligence layer" is a bit like calling the engine "the car." The LLM is a critical input, but the layer also includes the orchestration around it, the semantic context it operates on, the guardrails that keep it accountable, the evaluation that tells you when it's wrong, and the operating model around the whole apparatus. Without those, you have a model. With them, you have a layer. (I’ll explain each of these terms in later posts.)

It isn't a product you can buy. This is the most consequential misdiagnosis. There are vendors selling components of an intelligence layer. There is no vendor selling the whole thing. Anyone telling you otherwise is selling you a wedge and hoping you don't ask about the rest.

A working definition

Here's where I'd start:

An intelligence layer is the tier in your enterprise stack where humans and agents collaborate to reason about, query, and act on your data, and where the trust, governance, and operating model for that collaboration are deliberately designed.

It sits above your data systems (your warehouses, your transformation tooling, your SaaS data, your document stores) and below the surfaces your employees actually use to do their work. It's where intelligence (human, machine, or some hybrid) operates on what your data has prepared.

Two parts of that definition do real work, and are worth lingering on.

The first is collaboration. The intelligence layer is not "where the AI does things." It's where humans and agents do things together. The pure-AI framing is a mistake; the pure-human framing is the past. The interesting design questions are all about the seam between them.

The second is deliberately designed. Most established businesses already have an intelligence layer in a descriptive sense: there are agents and humans doing stuff with data, somewhere in the organization. Almost no one has designed it. It's accreted, by accident, from whatever tools showed up first and whichever team grabbed the AI mandate. The work I think matters most over the next few years is the move from accidental intelligence layers to designed ones.

A different starting point than the AI-natives

A few well-known firms have written about the intelligence layer from an AI-native perspective. Sequoia Capital frames it as an evolution from hierarchy to intelligence; a16z places it within frontier systems for the physical world. Both are worth reading. Both also presume a starting condition of green fields, clean architecture, and no legacy obligations, and that doesn't describe most of the businesses I've worked inside.

Most established companies aren't replacing their stacks; they're integrating with them. Their intelligence layer has to talk to a mid-2000s ERP, a department-specific CRM that someone's VP loves, three document repositories no one wants to consolidate, and a BI tool with five years of dashboards no one's willing to retire (or, better yet, no BI tool at all). That isn't a constraint to be impatient with; it's the actual environment most data leaders are designing in.

This publication is for those leaders: the ones running incremental, deliberate transitions, who want to leverage AI without blowing up what already works. The AI-native framings will tell you what an intelligence layer can be in theory. They'll tell you less about what it has to negotiate with in practice.

The components, briefly

Each of these deserves its own essay, but here are the components I think actually compose an intelligence layer inside an established business:

  • Connections to existing data systems. How the layer reads from and writes to the warehouses, SaaS apps, and operational systems you already run, at agent-grade reliability.

  • Document surfacing and retrieval. The PDFs, Word docs, internal wikis, and unstructured content most established businesses live in — and which most AI-native frameworks underestimate.

  • Templates for processes and deliverables. The repeatable workflows your business runs on, expressed in a form agents can execute or assist on. Where most of the operational return ends up.

  • Security, permissions, and access. Governance at agent-speed and agent-scale. Failure modes that didn't exist when humans were the only consumers of your data.

  • Evaluation and oversight. How you know an answer is right, when an agent is wrong, and what happens when it is. The trust infrastructure has to fail loudly in ways previous data infrastructure didn't.

  • Agent orchestration. The reasoning, planning, and coordination layer where agents actually do the work (including the handoffs back to humans!).
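To make the "accidental vs. designed" distinction from earlier concrete, here's a purely illustrative sketch in Python. Every name in it is hypothetical (this isn't a real product or API); it just models the six components above as a checklist, where a designed layer is one in which every component has a deliberate owner:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the six intelligence-layer components as a checklist.
# None of these names come from a real product or framework.
COMPONENTS = (
    "data_connections",      # reads/writes to warehouses, SaaS, operational systems
    "document_retrieval",    # PDFs, wikis, unstructured content
    "process_templates",     # repeatable workflows agents can execute or assist on
    "security_and_access",   # governance at agent speed and agent scale
    "evaluation_oversight",  # knowing when an agent is wrong, and what happens then
    "agent_orchestration",   # reasoning, planning, and handoffs back to humans
)

@dataclass
class IntelligenceLayer:
    """A deliberately designed layer names an owner for every component."""
    owners: dict[str, str] = field(default_factory=dict)

    def assign(self, component: str, owner: str) -> None:
        if component not in COMPONENTS:
            raise ValueError(f"unknown component: {component}")
        self.owners[component] = owner

    def unowned(self) -> list[str]:
        """Components nobody has deliberately taken on: the accidental gaps."""
        return [c for c in COMPONENTS if c not in self.owners]

# A typical accidental layer: two components got grabbed, four are drifting.
layer = IntelligenceLayer()
layer.assign("data_connections", "data platform team")
layer.assign("agent_orchestration", "AI/ML team")
print(layer.unowned())  # the four components still accreting by accident
```

The point of the sketch isn't the code; it's that `unowned()` is a question most organizations have never asked of themselves explicitly.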

What's at stake

If you're running a GenAI initiative, you're sponsoring a project. Projects end. People move on. The thing built decays into the same kind of orphan tooling we've all inherited a thousand times.

If you're designing an intelligence layer, you're making an architectural commitment. You're saying: this is a permanent tier of our stack; it will be there in three years; the decisions we make now will compound; the people who own it will own it for a long time. That changes who's in the room, how the budget gets approved, and which questions get asked first.

It also changes who holds the pen. Right now, at most established businesses, the intelligence layer is being defined for the data leader by vendor decks, by AI/ML teams operating outside data leadership, by whichever pilot got funded first. Letting that drift continue is, in my experience, the most expensive structural mistake a data leader can make in this moment. The leaders who write their own definition early will be operating from a sharper architecture than the ones who let the definition get written for them.

If any of this resonates, subscribe — and forward to a peer who's quietly been wondering the same things.
