Episode 21: Why Traditional Agile Crushes AI Projects

May 11, 2026

Hello and welcome back to AI Solutions: The Pathway to Profit!

Let me ask you a question that might sting a little: Are you trying to grow an exotic orchid using instructions written for assembling IKEA furniture?

That’s exactly what most teams are doing with AI projects right now. They take the same project management playbooks that worked beautifully for websites, mobile apps, and databases, then watch in slow-motion horror as their groundbreaking AI initiatives crash and burn.

Today we’re pulling back the curtain on why this keeps happening and what actually works instead. This might be one of the most practical episodes we’ve done yet.

The Retail Recommendation Engine That Became a Cautionary Tale

Last month I sat with a retail client who was building a sophisticated recommendation engine. They were bright, motivated, and well-funded. They were also using the same rigid project plan they used for updating their e-commerce website.

It was painful to watch.

The core problem? AI development isn’t software engineering with extra steps. It’s experimental science that occasionally produces code.

This distinction might seem subtle, but it changes everything.

Traditional software development assumes you can define the requirements upfront, map out the entire journey, and then march forward in neat, predictable phases. AI laughs at this assumption.

It’s like trying to write a detailed travel itinerary for a journey into a completely uncharted continent. You don’t even know what mountains you’ll encounter until you’re standing at their base.

Why Waterfall Is Laughably Wrong for AI

We’ve all seen the classic Waterfall diagram: Requirements → Design → Build → Test → Deploy. Clean. Linear. Comforting.

And completely disastrous for AI.

I watched one team spend two months writing a 180-page requirements document for a customer churn predictor. They specified that the model “must achieve 95% accuracy.”

Here’s the thing: that 95% number was completely made up.

The outcome of an AI project isn’t a specification—it’s a discovery. You don’t know what’s possible until you start exploring the data. Their historical data turned out to be a disaster (a fact they only discovered three months later), and the business eventually decided that understanding why customers were leaving was more valuable than a raw accuracy score.

The goalposts didn’t just move. They changed sports.

This is what happens when you bring a certainty mindset to an uncertainty game.

Classic Agile Isn’t Much Better (And Here’s Why)

So naturally, the reaction is “Let’s just use Agile then!”

Not so fast.

Applying traditional Scrum to AI projects is like using a hammer to drive a screw. The tool isn’t terrible, but it’s the wrong one for the job.

Here are the three places where classic Agile breaks down for AI work:

1. The Sacred Two-Week Sprint
Research doesn’t run on a calendar. I’ve seen teams “fail” a sprint because a promising model training run needed three weeks, even though the experiment itself was a massive success. The process literally punished them for doing good science.

2. Story Points Are a Fool’s Errand
How many story points is “Discover if adding this new data source improves accuracy”? It’s a guess dressed up as an estimate. My rule of thumb: for true research tasks, ditch story points entirely and simply time-box the experiment.

3. The Misleading “Definition of Done”
A data scientist can write beautiful, clean code that passes every test. But if the model creates zero business value, is the work really done? Closing the ticket while creating no impact is the ultimate agile theater.

The Mindset Shift That Changes Everything

This brings us to the fundamental paradigm shift that separates teams that struggle with AI from those that thrive.

We must move from code-driven thinking to data-driven thinking.

In traditional software, the code is the logic. You write an if-then statement and the program executes it deterministically every single time.

AI doesn’t work that way.

The code is just a framework for learning. The real “logic” emerges from the data. This is why I tell every team I work with:

Stop thinking like architects. Start thinking like expert gardeners.

You control the environment—the soil (data quality), the water (features), the sunlight (model architecture). You can nurture and guide. But you cannot predetermine the exact shape of every leaf. The plant grows according to its own nature, and your job is to observe, measure, adjust, and sometimes prune.

This metaphor has saved more AI projects than any framework I’ve ever introduced.
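
To make that contrast concrete, here’s a minimal, hypothetical Python sketch (scikit-learn, toy data) of hand-written logic versus learned logic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Traditional software: the logic lives in the code itself.
def will_churn_rule(days_since_last_order: int) -> bool:
    return days_since_last_order > 90  # a human chose this threshold

# AI: the code only defines HOW to learn; the "logic" emerges from the data.
X = np.array([[12], [45], [110], [200]])  # days since last order (toy data)
y = np.array([0, 0, 1, 1])                # 1 = customer churned

model = LogisticRegression().fit(X, y)
print(model.predict([[150]]))  # the decision boundary was learned, not written
```

Change the data and the learned rule changes, with no human editing the logic. That’s the gardener’s world.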

From Features to Hypotheses: The Language Change That Works

The first practical thing I have every team do is change how they speak.

Stop writing tickets that say “Add new data source.”

Start writing hypotheses:
“We hypothesize that adding purchase history data will improve recommendation accuracy by at least 3%. We will measure this by…”

One is a task. The other is a scientific experiment. The difference in mindset is enormous.
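
Some teams go further and capture each hypothesis as a lightweight, structured record. Here’s a purely illustrative Python sketch (the field names are my invention, not any standard):

```python
from dataclasses import dataclass

@dataclass
class ExperimentTicket:
    """A hypothesis-driven stand-in for a feature ticket (illustrative only)."""
    hypothesis: str           # what we believe, and why
    metric: str               # how we will measure it
    success_threshold: float  # the smallest improvement worth shipping
    timebox_days: int         # experiments end on a date, not at "done"

ticket = ExperimentTicket(
    hypothesis="Purchase history data improves recommendation accuracy",
    metric="offline precision@10 against the current baseline",
    success_threshold=0.03,  # the 3% from the hypothesis above
    timebox_days=5,
)
```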

Next, build a Minimum Viable Model (MVM) immediately. I don’t care if it’s barely better than a coin flip. Get something working end-to-end. This validates your entire data pipeline and gives you a baseline to improve upon.
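
What does an MVM look like in practice? For a classification problem, it can be as humble as a majority-class baseline. A minimal sketch, assuming scikit-learn and stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; in practice this comes out of your real pipeline.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The "barely better than a coin flip" model: always predict the majority class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print(f"Baseline to beat: {accuracy_score(y_test, baseline.predict(X_test)):.2f}")
```

Every experiment after this has a number to beat, and you’ve proven the pipeline runs end-to-end.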

You might be surprised to discover that your most valuable “sprints” involve zero changes to model code. Some of the biggest breakthroughs I’ve seen came from a focused week of data cleaning or feature engineering.

The Practical Framework: Agile + CRISP-DM

Here’s the structure I actually use with teams.

We take the proven CRISP-DM lifecycle (Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, Deployment) and attack it with an agile mindset.

The key difference? We don’t run it like a waterfall. We use time-boxed sprints to test specific hypotheses within each phase. One sprint might be dedicated entirely to improving data quality. That’s not just acceptable; it’s often where the magic happens.
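
To give a flavor of what a data-quality sprint can produce, here’s a hedged pandas sketch (the file name and structure are hypothetical):

```python
import pandas as pd

# Hypothetical input; swap in whatever your pipeline actually produces.
df = pd.read_csv("customer_history.csv")

# Quantify the problems before anyone touches model code.
report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_fraction_by_column": df.isna().mean().round(3).to_dict(),
}
print(report)
```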

Your sprint reviews should feel less like software demos and more like scientific peer reviews. You’re not showing a new button. You’re presenting:

  • Here was our hypothesis
  • Here’s how we tested it
  • Here’s what we learned
  • Here’s the impact on model performance

This shift in review format alone dramatically improves outcomes.

You Also Need a Different Kind of Team

Let’s be honest: you can’t hand an AI project to a traditional software team and expect magic. It’s like asking carpenters to build a jet engine. The skills are fundamentally different.

The winning formula I’ve seen repeatedly is a tightly integrated, cross-functional unit containing:

  • Data Scientists (the researchers)
  • Data Engineers (the pipeline builders)
  • ML Engineers (the critical bridge)

That last role is criminally underrated. ML Engineers take the brilliant but messy experimental code from the lab and turn it into something that can run at scale, reliably, in production. They’re the translators between discovery and delivery.

Your Product Owner also needs to evolve. They must think like a portfolio manager, not a traditional project manager. They need to be comfortable placing smart bets on hypotheses rather than demanding fixed delivery dates for uncertain discoveries.

The Bottom Line

Trying to run AI projects with traditional development cycles is like trying to pilot a spaceship with a horse-drawn carriage manual. The vehicle is different. The terrain is different. The physics are different.

The solution isn’t adopting some shiny new framework. It’s making a fundamental mindset shift from building features to testing hypotheses, from delivering code to creating business value through discovery.

Manage your AI initiatives like a research portfolio where you’re placing smart, measured bets—not like a construction project with a fixed blueprint.

One path leads to frustration. The other leads to breakthroughs.

Which path are you on right now?


Thanks for reading Episode 21! Next time, in Episode 22, we’re exploring “AI for HR: From Unbiased Recruiting to Employee Retention”—a topic that’s becoming increasingly important as more companies look to bring intelligence into their people operations.

In the meantime, I’d love to hear from you. Have you experienced the pain of trying to force traditional agile onto AI projects? What’s working (or not working) in your organization? Drop your thoughts in the comments.

Until next time, keep experimenting, stay curious, and remember: the data gets the final vote.

— Your AI Solutions Guide