
Stop Paying for Artifacts

The length of tasks AI can reliably complete has been doubling approximately every seven months

Labor unlock through automation is possible today without investing a single dollar in Databricks, Snowflake, or Accenture.

A couple of weeks ago, Adam Wathan, CEO of Tailwind Labs, laid off 75% of his engineering team.

Tailwind CSS is a popular tool that helps developers build web applications faster. The framework powers millions of apps and sites. Usage is up. Adoption is up. And yet, in a GitHub comment explaining why he couldn't prioritize a community feature request, Wathan was blunt:

“Traffic to our docs is down about 40% from early 2023 despite Tailwind being more popular than ever.”

And then the number that matters: “Our revenue is down close to 80%.”

Tailwind's business model was elegant. Developers visit the documentation to learn the tool, discover Tailwind Plus (a paid collection of pre-built app components) and buy. Traffic led to discovery. Discovery led to sales.

Then developers started asking AI for code instead of reading documentation. The funnel broke.

Here's what makes this story important for every company paying for software: Tailwind didn't fail because their product got worse. They failed because their product was describable.

The Describability Problem

An app-building framework is, at its core, a set of conventions. Naming patterns. Shortcuts. Best practices. All of it can be written down. All of it was written down, in documentation that AI companies ingested for training data.

Pre-built website components? Also describable. "A card with a header, image, and call-to-action button" is a prompt that works. The components Tailwind sold can now be approximated by asking nicely.

This isn't unique to Tailwind. It's a pattern.

Jasper AI raised $125 million at a $1.5 billion valuation selling AI-powered writing. Revenue collapsed from $120 million to somewhere between $55 million and $88 million in a year. Why? Because "write marketing copy" is a describable task, and ChatGPT does it for $20/month.

The emerging rule: If your value can be prompted into existence, AI will commoditize it.

Describable doesn't mean you can write words about it. It means a prompt can replace it. Documentation, tutorials, templates, standard workflows, boilerplate anything: if a description is sufficient to reproduce the value, you're exposed.

The Describability Test

Before you renew that SaaS contract or green-light that vendor, ask two questions:

1. Can it be fully specified once, or does it require ongoing judgment?

A blog post outline can be specified. A standard contract template can be specified. Continuous uptime during your peak season cannot. Neither can knowing when to deviate from the handbook because this situation is different, or making the judgment call that this customer needs an exception. If you're paying for artifacts that can be described once and reproduced, you're exposed. If you're paying for judgment that adapts to context over time, you have more runway.

2. Is it entangled with reality?

Pure information products are the most vulnerable. Products that touch physical systems, regulatory bodies, supply chains, or organizational complexity are harder to describe because they're entangled with things that can't be captured in a prompt. The compliance rules for clinical trials. The scheduling constraints for fleet logistics. The approval workflows that require institutional sign-off. This knowledge doesn't fit neatly into a prompt because it's woven into relationships, regulations, and physical constraints that exist outside the model.

The Klarna Counter-Lesson

Now here's where companies get it wrong in the other direction.

In February 2024, Klarna announced their AI chatbot was handling two-thirds of all customer service chats, the equivalent of 700 employees. Resolution times dropped from 11 minutes to 2 minutes. CEO Sebastian Siemiatkowski declared that "AI can already do all of the jobs that we as humans can do."

By May 2025, Klarna was hiring again.

The AI wasn't bad at the describable parts. Order status, routine refunds, FAQ answers: millions of those interactions worked fine. Where it broke was the undescribable parts. The frustrated customer who needs to feel heard before accepting a solution. The edge case that doesn't match any script. The judgment call about when to make an exception. Klarna's system had no way to distinguish between these situations. Everything went to the bot.

Siemiatkowski admitted it publicly: “Cost unfortunately seems to have been a too predominant evaluation factor.”

Here's the harder truth: that line is moving.

Research from METR shows that the length of tasks AI can reliably complete has been doubling approximately every seven months for the past six years. Tasks that required human judgment in 2023 are automatable now. Tasks that require human judgment today may not in a year's time.
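That doubling claim implies simple exponential growth, which is worth making concrete. A minimal sketch, assuming the ~7-month doubling time cited above (the one-hour starting horizon is a hypothetical placeholder, not a METR figure):

```python
# Rough projection of AI task-horizon growth, assuming the ~7-month
# doubling time cited in the article. The 1-hour starting horizon is
# a hypothetical placeholder for illustration only.
DOUBLING_MONTHS = 7

def task_horizon(months_from_now: float, start_hours: float = 1.0) -> float:
    """Task length (in hours) AI can reliably complete after a given number of months."""
    return start_hours * 2 ** (months_from_now / DOUBLING_MONTHS)

# One year out: 2^(12/7), roughly a 3.3x longer task horizon.
print(round(task_horizon(12), 2))
```

The point of the arithmetic: a seven-month doubling time compounds to more than a 3x jump per year, which is why a describability audit done once goes stale fast.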

But the same researchers found that experienced developers using AI tools in real-world conditions were actually 19% slower than without AI, despite believing they were 20% faster. The gap between what AI can do on benchmarks and what it delivers in messy operational contexts is where companies get burned.

The describability test isn't a one-time audit. It's a discipline. Knowing where the line is today, in your context, with your edge cases, and tracking where it's heading: that's the actual operational capability.

What This Means for Mid-Market

If you're running a mid-market company in manufacturing, healthcare, distribution, or services, you're not a software startup and you're not a fintech giant.

You're not selling documentation. And you don't have thousands of employees to experiment with.

But the describability test still applies, in two directions.

First, apply it to your vendors.

That horizontal SaaS tool you're paying $50K/year for: is its value describable? If it's essentially a wrapper around workflows that AI can now approximate, you're overpaying for something that's about to get cheaper or disappear.

Industry-specific software is different. The reason vertical software commands premium valuations is that it encodes domain logic that's hard to describe. This knowledge is entangled with regulation, with physical reality, with institutional relationships. It doesn't fit neatly into a prompt.

But even vertical software isn't automatically safe. The question is whether your vendor is providing artifacts (features, interfaces, reports) or process (ongoing adaptation, accumulated learning, operational reliability).

Second, apply it to yourself.

Here's the harder question: Is YOUR competitive advantage describable?

If your edge is "we have deep expertise in our industry," that's increasingly describable. Industry knowledge exists in trade publications, analyst reports, decades of accumulated text. AI is getting trained on it.

If your edge is "we've built the operational capability to deploy, adapt, and improve our systems for our specific context," that's not describable. That's process. That's presence over time. That's a moat.

Process Ownership, Not Platform Ownership

Your survival strategy isn't to become a technology company. It's to own the process: the organizational capability to operationalize AI, not just purchase it.

This means three things:

Own the operational capability. Not the servers, but the muscle. The internal ability to deploy, monitor, adapt, and improve systems over time. A logistics company that builds internal capability to continuously retrain route optimization on their own fleet data, rather than buying a vendor's generic model, owns the process. The vendor sold them an artifact. They built a discipline. This can't be rented wholesale. It has to be built, even if you build it with help.

Shift investment from interface to integration. The flashy front-end is commoditizing. The data layer (process integrity, compliance logic, integration with your actual operations) remains valuable. Stop paying premium for dashboards. Pay for the things that hold your operations together.

Prioritize ongoing presence over one-time deployment. The value isn't in the initial rollout. It's in the compounding: the edge cases you learn to handle, the organizational knowledge that accumulates, the adaptation that happens over months and years.

This isn't a permanent moat. It's a moving window. The goal isn't to find safety; it's to stay ahead of the line long enough to build something that compounds.

The next time you're reviewing a vendor contract or evaluating an AI investment, ask yourself: Is this building a capability we'll own, or renting an artifact that's about to get cheaper?

The line between describable and undescribable isn't fixed. But learning to track it is now a core competency.

Enjoy your Sunday!
