Speed is the only moat

Go fast and then go slow.

We had an interesting week at Moative.

I am in New Orleans this week. One of the highlights for me is demoing Billie, a mind-blowingly cool AI agent that handles billing questions and outage reporting from residential homeowners on behalf of the local utilities.

I am going to break down how we approached the build-out and the lessons from it.

The brief is fairly simple. There are a handful of questions that a resident will have about their billing.

  1. What’s my bill?

  2. How do I reduce it?

  3. Can I delay the payment?

  4. (If they are on prepaid) How do I extend my validity period? And so on.

The traditional software development lifecycle would have had the following steps:

Requirement Gathering (Too many back-and-forth questions and clarifications, because we are still rooted in the belief that writing software and rewriting it is expensive. Requirement gathering becomes a ‘cover all bases’ act.)

Design (There is user flow, information flow, user experience, architecture, database design, maybe even test cases. You get the idea. All important, even in AI. But each activity leads to meetings. How are we going to access data? Who needs to get IP whitelisted? Where is the user flow diagram? How do we handle latency? All valid questions, and it may take weeks to resolve them perfectly.)

Development (The actual act of building the chatbot. Here too, you can go a couple of ways. Map out all the happy paths (the expected questions and the expected answers), check them off one by one, and voilà, you get a chatbot – except that it will be robotic. Traversing across threads, linking them, keeping a decent memory of the conversation – all of these are better done through agentic design than by hand-building the interactions; the sketch below shows the contrast. What about hallucinations and other unexpected things like memory drops? Hold on to those questions.)
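
To make the contrast concrete, here is a minimal Python sketch, not our production code: `call_llm` is a stand-in for whatever model API you use, and the canned answers and tool names are hypothetical. The happy-path bot only answers questions it has pre-mapped; the agentic bot hands the model the whole conversation plus a tool set and lets it decide.

```python
# Happy-path bot: every question must match a pre-mapped intent.
HAPPY_PATHS = {
    "what's my bill": "Your current balance is on your account page.",
    "can i delay the payment": "Payment extensions are under Billing > Extensions.",
}

def happy_path_bot(question: str) -> str:
    key = question.lower().strip().rstrip("?")
    # Anything off-script falls through to a robotic apology.
    return HAPPY_PATHS.get(key, "Sorry, I didn't understand that.")

def call_llm(messages: list[dict], tools: list) -> str:
    """Stand-in for a real model call (an assumption, not a real API)."""
    raise NotImplementedError

# Agentic bot: memory lives in `history`; the model, not hand-built threads,
# decides whether to answer directly or call a tool such as a hypothetical
# get_bill() or report_outage().
def agentic_bot(question: str, history: list[dict], tools: list) -> str:
    history.append({"role": "user", "content": question})
    reply = call_llm(history, tools)
    history.append({"role": "assistant", "content": reply})
    return reply
```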

Testing and Launch (Model evals based on multi-faceted criteria, objective scoring instead of vibe-scoring of model performance, user experience, behavior alignment, tone testing, etc. We don’t have a standard testing framework yet for AI products. Things are evolving.)
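
Even without a standard framework, a small objective harness beats vibe-scoring. A hedged sketch, assuming `bot` is whatever agent you are testing and that substring checks are a good-enough first proxy for accuracy and tone:

```python
EVAL_CASES = [
    # (question, substrings the answer must contain, substrings it must not)
    ("What's my bill?",      ["balance"],   ["guaranteed"]),
    ("Can I delay payment?", ["extension"], ["never"]),
]

def score(bot, cases=EVAL_CASES) -> dict:
    """Return objective 0..1 scores so runs are comparable across model versions."""
    totals = {"accuracy": 0, "safety": 0}
    for question, must_have, must_not in cases:
        answer = bot(question).lower()
        totals["accuracy"] += all(s in answer for s in must_have)
        totals["safety"] += not any(s in answer for s in must_not)
    return {k: v / len(cases) for k, v in totals.items()}
```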

We could fight like a valiant enterprise program manager, armed with the know-how gained from all the PM certifications we have subjected ourselves to, or we could just do stuff and show.

In AI, the moat is not the process, yet. It’s speed.

  1. Software development is not costly. We went through regimented happy paths over a week and then rewrote everything with agentic methods in two days. It’s that easy.

  2. Behavior buy-in is harder than architecture buy-in. In our case, the biggest risk is in getting utilities to put AI on the front line. We have to show that it works. The ‘how’ of it comes once they buy the premise.

  3. Address the main anxiety. Traditional software development optimizes for efficiency. AI product building should optimize for trust. The question is not ‘what features at what cost by when’ but ‘will it work reliably.’ The traditional product-building process adds to that anxiety instead of reducing it. No one has the patience to wait weeks to see the chatbot working.

  4. Sequence of what matters: It can work → It works reliably → It can be tested and continuously observed → It’s secure and compliant → It can scale economically → It can do everything. The easiest promise to make is that the bot will do everything. In our experience, users are very happy getting even an hour of their day back, even if the agent won’t do everything.

Product managers are used to the ‘surface area and stickiness’ tropes. We need to own the workflow. We need to generate an outcome. Of course. Yes. But those are not moats anymore. Everyone can build. Your competitor is also going to generate outcomes.

What matters is the sequence. The sequence should prioritize shipping speed. It also matters that product managers educate the CIOs and the IT teams to ask the right questions and, mostly, hold their horses.

The ‘it can work’ phase is not about resolving access questions. Create synthetic data and show that it works.
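
For example, a few lines of Python can produce believable billing records long before any data-access meeting gets scheduled. A hedged sketch; the field names and value ranges are invented for illustration:

```python
import random
from datetime import date, timedelta

def synthetic_accounts(n: int = 100, seed: int = 7) -> list[dict]:
    """Generate fake residential billing records for demos -- no real data needed."""
    rng = random.Random(seed)
    today = date.today()
    return [{
        "account_id": f"ACCT-{i:05d}",
        "plan": rng.choice(["prepaid", "postpaid"]),
        "balance_usd": round(rng.uniform(20, 400), 2),
        "due_date": (today + timedelta(days=rng.randint(1, 30))).isoformat(),
        "kwh_last_month": rng.randint(300, 2500),
    } for i in range(n)]
```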

The ‘it works reliably’ phase is not about architecture but about model performance and consistency.
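
Consistency, at least, is cheap to measure. A minimal sketch, assuming the agent is nondeterministic and that exact-match agreement across repeated runs is an acceptable first proxy:

```python
from collections import Counter

def consistency(bot, question: str, runs: int = 10) -> float:
    """Fraction of runs that agree with the most common answer (1.0 = fully consistent)."""
    answers = [bot(question).strip().lower() for _ in range(runs)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / runs
```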

Enterprises can waste cycles getting ready for a mythical perfect launch, prematurely reining in and slowing down the pace.

Users and buyers care that it works and delivers on the promise. Then comes the desired enterprise IT friction of governance, security, cost, etc. This is when you slow down. Ideally, it’s a two-pod structure where one pod focuses on shipping demos and the other on setting the guardrails.

Well, that’s all folks. Wish us luck for this weekend, as it is ‘show time’ for Billie. If you want to see Billie at work, hit me up and I will show you a demo. He talks about power outages as fluently as he talks about slow-cooking chicken.

PS: I will be in SF, San Diego, Houston, Boston, Chicago and Stamford in the next few weeks. Let’s catch up over a coffee?