
AI Is Intuitive. Its Infrastructure Isn’t

Rent execution, but own the control plane

Last week we wrote that AI is intuitive. This week’s uncomfortable follow-on is that the infrastructure underneath it is not, and that mismatch is where otherwise promising AI programs quietly die.

Not broken. Not down. Just unpredictable. Bills that spike without warning. Latency that drifts. Governance that becomes a meeting. “One more pipeline” that spans a quarter. You can’t compound gains when the substrate keeps changing shape.

The modern data stack was supposed to prevent exactly this. Plug tools together. Move faster. Stay modular. And for a while, it worked. But incentives always win. The stack didn’t stay a stack. It became a power structure. Vendors competing to become the place where your definitions live, where your transformations run, and where your AI features get “conveniently” embedded.

This fall, Fivetran and dbt announced they were merging. On the surface, it’s just data tooling consolidation. Under the hood, it’s a signal that the control plane is the battlefield now.

If you’re not in the data world, the merger is easy to dismiss. It doesn’t change your app, doesn’t ship a feature you can click. But it changes the center of gravity. And center of gravity is what creates variance. Variance in cost, in speed, and in how quickly your org can turn a question into an answer. That’s why people close to the metal paid attention.

I’ll explain.

The old promise: best-of-breed, loosely coupled

The modern data stack was sold as a clean division of labor: move data in, transform it, model it, visualize it, activate it. Each layer is best-of-breed. Swap pieces when you want to. No single vendor holds the keys.

That idea didn’t die because it was wrong. It died because it was incomplete.

It assumed the stack was a set of tools. In reality, it’s a set of incentives. The most important incentive is this: transformation is where your business logic hardens into reality, and transformation is also where a lot of compute gets burned. When your “definitions of truth” (revenue, active, churn, fraud, risk, usage) become inseparable from a particular execution environment, the modularity becomes mostly cosmetic. You can change logos on the surface while the economic and operational center stays the same.

That’s why the Fivetran-dbt combination is not just “ingestion plus transformation.” It’s a bid to sit on the pathway through which most organizations turn raw events into trusted, decision-grade data, and to own the layer where those definitions live.

Why this matters more in 2026 than it did in 2019

In the BI era, vendor gravity was mostly annoying. Bigger bills. Slower dashboards. More governance. But your product’s intelligence wasn’t usually trapped inside the data platform.

AI changes that, because AI is not “analytics, but smarter.” AI is your company’s judgment encoded into systems: what you believe, what you ignore, what you flag, what you recommend, what you automate. And judgment has a dependency. It needs stable, legible primitives underneath it.

That’s why we’re seeing AI capabilities move into the warehouse layer. LLM completion becomes a SQL primitive, billed like everything else, governed like everything else. Snowflake’s AI_COMPLETE becoming generally available is a good example of this direction. “AI work” becomes “just another query.”

This is convenient. It’s also gravity.

Because once core parts of your intelligence loop live inside the warehouse, you’ve tied them to the same meter, the same execution semantics, and the same vendor surface area as the rest of your data operations. That introduces a new kind of variance: not just “our bill went up,” but “our product behavior and iteration speed are now downstream of a platform roadmap.”
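One way to bound that variance is an interface boundary: product code calls a completion interface, and whether that resolves to a warehouse-native SQL function or an external endpoint becomes a deployment choice rather than an architecture rewrite. A minimal Python sketch, with hypothetical backend names and an illustrative SQL string (not any vendor’s real signature):

```python
from dataclasses import dataclass
from typing import Protocol


class CompletionBackend(Protocol):
    """Anything that can turn a prompt into text."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class WarehouseBackend:
    """Stand-in for completion run as a warehouse query.

    The SQL string is illustrative only; real vendor functions
    (names, signatures, billing) differ and change.
    """
    model: str

    def complete(self, prompt: str) -> str:
        sql = f"SELECT AI_COMPLETE({self.model!r}, {prompt!r})"
        return self._run(sql)

    def _run(self, sql: str) -> str:
        # A real implementation would execute this over a connection;
        # here we echo it so the sketch stays runnable.
        return f"[warehouse] {sql}"


@dataclass
class ExternalBackend:
    """Stand-in for an external inference endpoint (hypothetical URL)."""
    endpoint: str

    def complete(self, prompt: str) -> str:
        # A real implementation would POST to self.endpoint.
        return f"[{self.endpoint}] {prompt}"


def summarize_ticket(ticket: str, backend: CompletionBackend) -> str:
    """Product code depends on the interface, never on a vendor."""
    return backend.complete(f"Summarize this support ticket: {ticket}")
```

The call site never changes when the backend does, which is exactly the optionality that keeps your iteration speed off a platform roadmap.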

At the same time, the industry is trying to reconcile this gravity with “openness.” Table formats like Iceberg are the banner: keep storage interoperable, keep exit routes real. But notice how the story often plays out in practice. Yes, the format is open, and you can interoperate. Yet the operational center still matters (catalog integrations, discovery automation, access boundaries, and even new billing surfaces). Snowflake’s own documentation around catalog-linked databases and Iceberg discovery and billing changes makes that dynamic legible.

One more nuance that’s easy to miss: some people talk as if “compute will commoditize and become cheap,” full stop. That’s not a law of physics. OLAP at scale (wide joins, scans, semi-structured data, heavy concurrency) stays genuinely expensive and hard. Warehouses will keep pricing power. So you shouldn’t bet on “compute gets cheap” saving you. You should bet on your architecture choices reducing variance.

Which leads to the stance that holds up across cycles.

Rent execution. Own the control plane

Execution is the stuff you should be able to swap without rewiring your company: warehouses and query engines, schedulers, elastic compute, and often even inference endpoints. You rent these because they’re commodities from your perspective. The value is in reliability and price-performance, not in identity.

The control plane is the stuff that is your company: canonical definitions (metrics, entities, identity), transformation logic as versioned code, lineage and quality gates, and now, because it’s AI, an evaluation harness that tells you when “intelligence” regressed. The goal is not purity. The goal is optionality.
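“Canonical definitions as versioned artifacts” can be as small as a dataclass that lives in version control instead of in a dashboard. A sketch, with illustrative field names and an illustrative SQL fragment rather than any real semantic-layer schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Metric:
    """A business definition as a versioned, portable artifact.

    The fields and the expression below are illustrative, not a
    real semantic-layer schema.
    """
    name: str
    owner: str
    grain: str
    expression: str  # engine-agnostic SQL fragment, compiled per warehouse


# The definition of "active" lives in version control, reviewable
# and portable, not in a BI tool's settings page.
ACTIVE_USERS = Metric(
    name="active_users",
    owner="growth-team",
    grain="day",
    expression="COUNT(DISTINCT user_id)",
)
```

The point is not the format; it is that the definition can be diffed, reviewed, tested, and ported to a different execution environment.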

What does “own the control plane” mean on Monday morning?

It means your key business definitions don’t live in screenshots or tribal Slack lore. They live in versioned artifacts you can port. It means your transformations aren’t a maze of one-off jobs that only one warehouse can run. They’re a controlled set of models with tests, lineage, and clear ownership. It means AI isn’t just demos. It ships with eval datasets, rubrics, and regression checks, so iteration is safe and fast. And it means you treat “warehouse-native AI” as an optimization, not a foundation: use it where it’s convenient, but keep the right to move.
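The eval-and-regression-check idea can be sketched in a few lines: a frozen dataset of (input, expected) pairs versioned alongside the code, and a gate that fails the release when quality drops past a tolerance. Everything here (the eval set, the toy model, the threshold) is hypothetical:

```python
def accuracy(model, eval_set):
    """Fraction of (input, expected) pairs the model gets right."""
    hits = sum(1 for prompt, expected in eval_set if model(prompt) == expected)
    return hits / len(eval_set)


def regression_gate(model, eval_set, baseline, tolerance=0.02):
    """Pass only if quality stays within `tolerance` of the baseline."""
    score = accuracy(model, eval_set)
    return score >= baseline - tolerance, score


# Hypothetical frozen eval set, versioned with the code it guards.
EVAL_SET = [
    ("is 'refund please' a complaint?", "yes"),
    ("is 'thanks, all good' a complaint?", "no"),
    ("is 'this is broken again' a complaint?", "yes"),
]


def toy_model(prompt: str) -> str:
    # Stand-in for a real model call.
    return "yes" if ("broken" in prompt or "refund" in prompt) else "no"


ok, score = regression_gate(toy_model, EVAL_SET, baseline=1.0)
```

Run this in CI and “the model got worse” stops being a vibe and becomes a failed build.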

None of this is glamorous. But variance control is what makes compounding possible. The data stack is now the surface area where your company’s judgment gets encoded. Your job is not to win a tooling debate. It is to keep your intelligence loop predictable, so the gains can compound.

Enjoy your Sunday!

When Bharath Moro, Head of AI Solutions and Co-founder at Moative, explains guardrails using a salt shaker and tablecloth, guests take notes on napkins.