Iteration Is Not the Enemy (Depending on Where You Work)
There’s a word that gets used constantly in tech circles: iteration. It gets praised, baked into team values and printed on the walls of startup offices right next to the “move fast” poster and a Polaroid of Reid Hoffman. But whether iteration is a core virtue or a failure mode depends almost entirely on the environment you’re operating in — and conflating the two is one of the most common, and quietly damaging, mindset mistakes a team can make.
TL;DR — for the people who iterate through blog posts too
In an agency, iteration is a failure signal. The whole model is built around getting the answers before you build anything — thorough discovery, airtight spec, precise execution. Rework is margin destruction.
In a SaaS startup, iteration is the engine. You can’t fully know upfront. The market will humble you. So you build a fast learning loop, ship small, measure everything, and let the data do the talking.
The core difference isn’t process — it’s what you believe about knowability. Agencies assume answers exist and can be found. Startups assume they have to be earned.
Baby steps beat giant leaps. A thousand small moves, each one buying you a little more information, will outpace one perfectly planned lunge almost every time.
Importing the wrong mental model into the wrong environment is where things quietly go sideways. Know which game you’re playing.
I’ve spent time on both sides of this. More startup than agency, honestly — but enough exposure to agency life, and enough time hiring agencies, working alongside them, and watching the patterns play out, that I’ve developed a fairly confident hypothesis about how the two worlds work. And more importantly, why they sometimes produce people who seem to be speaking completely different languages about something as basic as how work should work.
The Agency Model: Iteration as Failure
In an agency, the engagement follows a well-worn arc. Discovery first — you sit with the client, you ask questions, you surface the need. Then you translate all of that into a specification. A real one. Detailed, explicit, signed off on. That spec gets scoped, quoted, bid, and eventually lives inside a contract. Then the team goes heads-down and executes.
Within this model, iteration is a bug, not a feature.
If the team is pivoting mid-execution, the spec was incomplete. If the client is requesting revisions after delivery, discovery didn’t do its job. Either way, someone is eating rework hours — and rework hours are margin-burning hours. The agency isn’t billing for those. Nobody’s happy.
So the incentive structure bends hard toward eliminating iteration. The ideal engagement is one where discovery is thorough, the spec is airtight, execution is precise, and delivery lands exactly as promised — first try, no drama, invoice paid. More upfront thinking, less downstream churn. It’s a precision game by design.
And honestly? For this kind of work, that’s correct. A client hiring an agency isn’t signing up for experiments with their budget. They’re buying certainty. The whole value proposition is “we’ve done this before, we know how to do it, here’s what it costs.”
The SaaS Startup Model: Iteration as the Whole Point
Now flip the context entirely.
Here’s the fundamental difference between these two worlds — and it’s not process, it’s what is actually knowable.
The agency model is built on a premise: that the answers exist, and that with the right discovery process, you can find them before you build anything. Ask the right questions. Listen carefully enough. Document thoroughly enough. The whole system is architected around that assumption — and when the client and context cooperate, it holds up.
The SaaS startup is built on the opposite premise: that you cannot fully know upfront. Not because you’re underprepared, not because you didn’t do your homework — but because you’re trying to predict how thousands of people you’ve never met will behave with a product that doesn’t exist yet, in a market that is still forming. The answers aren’t hiding somewhere waiting to be discovered in a discovery call. They don’t exist yet. They have to be earned, through doing, shipping, and measuring what happens.
This is what makes iteration not just useful but necessary. You’re not iterating because you’re scrappy or short on time. You’re iterating because it’s the correct, rational response to operating under genuine uncertainty. The team that accepts that and builds a fast learning engine will always outpace the team that keeps trying to plan their way to certainty — because certainty, at that stage, is a myth you can’t afford to chase.
A SaaS startup — especially early-stage — doesn’t have a client with a spec. It has a hypothesis. A belief that some group of people has a problem, that a product can solve it, and that someone will pay for that solution. The catch: that hypothesis is almost certainly wrong in at least a few important ways, and the job of the whole team is to find out which ways, as fast as possible.
The mechanism is experimentation. Ship something. Observe. Measure. Adjust. Ship again. Close the loop. Repeat.
This is what iteration actually looks like up close, stripped of the jargon: baby steps. One after another. Each one small enough to be low-risk, fast enough to keep information flowing, and directional enough that you’re always learning something. Not giant, perfectly planned leaps — baby steps. Because a baby step that goes sideways costs you a week. A giant leap that goes sideways costs you a quarter.
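That cost asymmetry can be made concrete with a toy expected-value calculation. The failure rate and durations below are illustrative assumptions, not figures from anywhere in particular; the point is the ratio of learning to risk, not the exact numbers:

```python
# Toy model: many small steps vs. one big leap under the same failure
# rate. All numbers are illustrative assumptions.

FAILURE_RATE = 0.3  # assumed chance any given attempt goes sideways

# Twelve one-week baby steps over a quarter: each failure costs one
# week, and every attempt (pass or fail) yields a unit of signal.
baby_steps = 12
baby_cost = baby_steps * FAILURE_RATE * 1  # expected weeks lost
baby_signal = baby_steps                   # one lesson per step

# One quarter-long leap: a failure costs the whole quarter (~12 weeks),
# and you get a single yes/no answer at the end.
leap_cost = FAILURE_RATE * 12  # expected weeks lost
leap_signal = 1

print(f"baby steps: ~{baby_cost:.1f} expected weeks lost, {baby_signal} lessons")
print(f"giant leap: ~{leap_cost:.1f} expected weeks lost, {leap_signal} lesson")
```

Under these assumptions the expected weeks lost come out identical, but the baby-step path buys twelve lessons for the same risk budget, which is the whole argument in one line of arithmetic.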
This is where iteration stops being a bug and becomes the engine. A week spent spec’ing and polishing something, only to find out users don’t actually care about it? That’s the waste. A week spent shipping something scrappy that generates a real reaction — positive or negative — and teaches you something about user behavior? That’s a week well spent. That’s signal. Signal is the asset.
The SaaS startup isn’t playing a precision game. It’s playing a volume game. The goal isn’t to get one big thing perfectly right. It’s to run enough small, calculated experiments — quickly enough, with enough measurement — that you can triangulate toward what actually works before you run out of runway. Think less “quarterly roadmap delivered on scope” and more “we’re going to try five things this sprint and kill three of them.”
This means getting comfortable with things that feel slightly wrong if you’re wired for agency-style delivery:
“Good enough for now” is a valid and strategic output. You don’t know if this is the right thing yet. Don’t over-engineer the wrong thing.
Week-long cycles, not quarter-long projects. Long cycles bury feedback. Short cycles surface it — fast.
Small bets in parallel over single large ones. Ship multiple things. Kill what isn’t working. Double down on what is.
Metrics over vibes. Not “we think this is better” — “we shipped it, we measured it, here’s what we saw.” The data has a vote.
Do things that don’t scale. Seriously. Scaling problems are good problems. You can solve those later. Right now you’re trying to find the thing worth scaling.
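The loop those bullets describe, ship several small bets, measure each one, kill the losers, double down on the winners, can be sketched as a minimal decision rule. Everything here is invented for illustration: the bet names, the lift numbers, and the `keep_threshold` cutoff are assumptions, not a real analytics setup:

```python
# Minimal sketch of a "small bets" sprint loop: ship several cheap
# experiments, measure each, kill the losers, keep the winners.
# Bet names, metrics, and the threshold are invented for illustration.

def run_sprint(bets, measure, keep_threshold=0.05):
    """Return (keep, kill) lists after one measurement cycle.

    bets:    experiment names shipped this sprint
    measure: callable mapping a bet name to an observed lift (a number)
    """
    results = {bet: measure(bet) for bet in bets}
    keep = [b for b, lift in results.items() if lift >= keep_threshold]
    kill = [b for b in bets if b not in keep]
    return keep, kill

# Pretend measurements (made up): observed conversion lift per bet.
observed = {
    "onboarding-v2": 0.11,
    "pricing-tooltip": 0.01,
    "invite-flow": 0.07,
    "dark-mode": -0.02,
    "export-csv": 0.00,
}

keep, kill = run_sprint(list(observed), observed.get)
print("double down on:", keep)  # bets that cleared the bar
print("kill:", kill)            # the rest, this sprint
```

With these made-up numbers the loop ships five things and kills three of them, which is exactly the "try five, kill three" cadence described above. The data has a vote: the threshold decides, not the loudest opinion in the room.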
When the Two Worlds Collide (And They Do)
Most real businesses need elements of both at different times. There are moments when precision and commitment are exactly right — when you’re delivering on a promise to a customer, building foundational infrastructure that will be painful to change, or cutting a contract with a real deadline. But there are many other moments where exploration and speed matter far more than polish.
The trouble starts when a team imports the wrong mental model into the wrong context.
An agency mindset inside a SaaS startup leads to over-specced features, bloated PRDs for things that haven’t been validated, long development cycles, and a deep, bone-level aversion to shipping anything that isn’t fully baked. Teams get nervous putting something out that isn’t “done” — because in the agency world, an underdone delivery is a genuine problem. But in a startup, an underdone thing that earns a real reaction is data. Precious, directional, actionable data. Fear of shipping rough is fear of learning.
Flip it, and a startup mindset inside a client services business leads to scope creep, fuzzy deliverables, and trust that breaks fast. Clients didn’t sign up for a two-week sprint and a retrospective. They signed up for the thing in the contract.
The Default Should Match the Environment
If you’re building a SaaS product, your team’s default posture should be: iteration is good. Not churn for its own sake, not “we’ll figure it out as we go” as a substitute for thinking — but disciplined, fast, measurement-driven cycles that keep the feedback loop tight and the learning compounding.
The teams that build great SaaS products aren’t usually the ones with the best specs. They’re the ones who close the feedback loop the fastest. They’re the ones who’ve genuinely internalized that “this will work for now” is not a cop-out — it’s an honest acknowledgment that you don’t fully know what “right” looks like yet, and the only way to find out is to ship, watch, and adjust.
Baby steps. A thousand of them. Moving faster, in aggregate, than any single giant leap ever could.
Iteration, in that world, isn’t a bug in the process.
It’s the process.
