Let’s stop pretending this is just about developers getting a little help from a clever machine.
A lot of teams have quietly rebuilt their delivery model around AI. Not around AI as a handy assistant. Around AI as an assumed layer in the engineering process itself. Planning, scaffolding, refactoring, debugging, tests, migrations, documentation, glue code, ticket digestion, code review prep, all of it. We call it vibe coding because that sounds playful and modern and a little bit dangerous.
It is dangerous.
The moment your project plan assumes a model will be smart, cheap, available, and still in business next quarter, you do not have pure leverage. You have a dependency. Worse, you have a dependency on a vendor-controlled cognitive layer that can change under your feet with all the grace of a forklift driven by a caffeinated raccoon.
And we have already seen the shape of the problem. A model people trust gets “updated” and suddenly it feels a little duller, a little less reliable, a little more likely to miss the obvious and burn tokens while doing it. Another release shows up with new promises and worse instincts. Anyone who has leaned hard on a frontier model for real development work knows this feeling. One week you have a dangerous synthetic intern that can sprint. The next week you have the same intern after a head injury and three corporate policy meetings.
That is the first lesson of AI-assisted development: you are not just depending on intelligence, you are depending on continuity.
In the old days, lock-in meant you signed a contract with a cloud provider and woke up six months later with a bill large enough to make finance develop a nervous twitch. With AI, lock-in starts earlier and gets into your bones faster. It starts in habits. It starts when the team writes less design rationale because the model can “figure it out.” It starts when junior people stop building debugging muscle because the machine usually finds the bug first. It starts when delivery estimates quietly assume a model will crank out the migration, explain the stack trace, write the tests, summarize the giant mess in the logs, and propose a patch before lunch.
That all feels brilliant while the machine is behaving.
Then reality walks in with a tire iron.
First, the model gets dumber. Not dramatically. Not enough for the vendor to admit anything useful. Just enough that edge cases take longer, hallucinations get a little bolder, architectural judgment gets a little softer, and your review burden goes up. The team starts re-running prompts. Trust drops. Velocity dribbles away. Management still expects the old delivery pace because management has already fallen in love with the AI-accelerated version of reality.
Second, the price goes up. Maybe it is the token price. Maybe it is usage caps, premium routing, enterprise bundling, context bloat, or some other little accounting trick designed to skin you without using the word “increase.” We have seen this movie before. Cloud looked cheap too, right up until everybody built the company on top of it and the meter became a weapon.
Third, the provider changes the business. The model version you tuned your workflows around disappears. The API behavior changes. The safety layer gets more aggressive. A tool gets deprecated. A feature moves behind a different paywall. A beloved workflow gets smothered by product strategy. None of that is shocking. Vendors optimize for themselves. The shocking part is how many organizations build internal processes that assume the vendor will politely stop doing vendor things.
Fourth, the provider goes down. Outage. Rate-limit incident. Regional mess. Legal mess. Compliance mess. Acquisition. Shutdown. Spectacular self-inflicted wound. Pick your apocalypse. The cause barely matters. The effect is the same: the invisible cognitive exoskeleton your team learned to lean on is suddenly gone.
Now what?
The fantasy answer is: “We’ll just have humans do it.”
No, you won’t. Not right away.
You are not going to re-hire an entire bench of programmers in the middle of an outage because your preferred synthetic code monkey stopped showing up for work. You are not going to instantly recover the engineering habits that were partially outsourced to autocomplete, summarization, rapid prototyping, code explanation, and agentic patch generation. You are not going to walk into the war room, say “resilience” three times, and magically restore the old throughput.
The first honest crisis response is much uglier and much more adult: increase the estimate.
If we have a Snake Plissken-style AI blackout, the immediate move is not heroics. It is arithmetic. Multiply the people required. Multiply the funds required. Multiply the time required. Re-scope the portfolio. Freeze the vanity projects. Figure out what absolutely must ship and what can wait in the penalty box. Move design review and ugly debugging to the people with scar tissue. Accept that work estimated against an AI-assisted baseline was estimated against a condition that no longer exists.
That is not pessimism. That is what competence looks like when the magic trick ends.
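The arithmetic above can be made concrete. Here is a back-of-envelope rebasing sketch; the multipliers are illustrative placeholders, not research numbers, and you would calibrate them from your own pre-AI velocity data.

```python
# Rebase an AI-assisted estimate against a no-AI baseline.
# Both multipliers are hypothetical defaults for illustration only.
def rebase(ai_assisted_weeks: float,
           ai_speedup: float = 1.8,
           ramp_penalty: float = 1.25) -> float:
    """ai_speedup: how much faster the team ran with the model.
    ramp_penalty: extra drag while atrophied habits come back."""
    return ai_assisted_weeks * ai_speedup * ramp_penalty

# A "10-week" AI-assisted plan rebases to roughly 22.5 weeks without the model.
print(rebase(10))
```

The point is not the specific numbers. The point is that the crisis-day estimate is a multiplication, not a negotiation.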
A real backup plan for AI-assisted development should look less like “trust the frontier lab” and more like continuity planning.
If the model gets worse, the backup plan is an evaluation harness and a routing layer. Keep a standing benchmark built from your actual work: bug fixes, refactors, migrations, test generation, framework upgrades, weird parsing jobs, code review comments, performance tuning, the whole ugly parade. Test new models against your tasks before you let them into production workflows. Do not assume “latest” means “better.” Half the time “latest” just means “different, pricier, and wrapped in a press release.” Freeze model versions where you can. Keep at least one alternate provider warm. Keep a cheaper fallback for the low-consequence grunt work.
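A minimal version of that harness fits in a page. Everything below is a sketch with stub "models" standing in for real provider calls, and the pass/fail checks stand in for your own domain-specific graders. The only structural idea is the one in the text: a challenger must beat the frozen incumbent on your tasks, by a margin, before it gets promoted.

```python
# Evaluation-harness sketch: score candidate models on a frozen suite of
# your own tasks before letting "latest" into production workflows.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Task:
    name: str
    prompt: str
    check: Callable[[str], bool]  # your domain check: did the output pass?

def score(model: Callable[[str], str], suite: List[Task]) -> float:
    passed = sum(1 for t in suite if t.check(model(t.prompt)))
    return passed / len(suite)

def pick_model(candidates: Dict[str, Callable[[str], str]],
               suite: List[Task], incumbent: str, margin: float = 0.05) -> str:
    """Promote a challenger only if it clearly beats the incumbent."""
    scores = {name: score(fn, suite) for name, fn in candidates.items()}
    best = max(scores, key=scores.get)
    if best != incumbent and scores[best] >= scores[incumbent] + margin:
        return best
    return incumbent

# Stub models: the shiny new release quietly regresses on one task.
suite = [Task("adds", "2+2", lambda out: out.strip() == "4"),
         Task("caps", "upper hello", lambda out: out == "HELLO")]
models = {"incumbent": lambda p: {"2+2": "4", "upper hello": "HELLO"}[p],
          "latest":    lambda p: {"2+2": "4", "upper hello": "hello"}[p]}
print(pick_model(models, suite, incumbent="incumbent"))  # keeps "incumbent"
```

In real use the suite comes from your ticket history and the checks run your actual tests, but the promotion gate is the same: "latest" earns its way in or it stays out.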
If prices jump, the backup plan is discipline. Stop burning the expensive model on every trivial engineering chore because it feels convenient. Separate hard reasoning from repetitive transformation. Use premium models where they materially change outcomes. Push linting, rote code churn, simple tests, boilerplate, and mechanical translation to cheaper models, local models, or ordinary tooling. Treat AI spend like infrastructure spend, because that is what it is. Put alerts on it. Put limits on it. Put kill switches on it. Magic words like “productivity” have emptied plenty of wallets.
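The discipline above is mostly a routing table plus a hard ceiling. Here is a hedged sketch; the task categories, model names, and budget figure are all hypothetical, and a real version would meter actual token costs rather than estimates.

```python
# Tiered routing with a hard spend cap: premium models only where judgment
# matters, and a kill switch when the daily budget is gone.
class BudgetExceeded(RuntimeError):
    pass

class Router:
    # Hypothetical task kinds that materially benefit from a premium model.
    PREMIUM = {"architecture", "debugging", "migration"}

    def __init__(self, daily_budget_usd: float):
        self.daily_budget_usd = daily_budget_usd
        self.spent_usd = 0.0

    def route(self, task_kind: str, est_cost_usd: float) -> str:
        if self.spent_usd + est_cost_usd > self.daily_budget_usd:
            raise BudgetExceeded("kill switch: daily AI budget exhausted")
        self.spent_usd += est_cost_usd
        return "premium-model" if task_kind in self.PREMIUM else "cheap-model"

r = Router(daily_budget_usd=1.00)
print(r.route("boilerplate", 0.02))   # cheap-model
print(r.route("architecture", 0.50))  # premium-model
```

The kill switch is the important part. A budget that only alerts is a suggestion; a budget that raises is a limit.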
If the provider changes terms, discontinues a service, or decides your favorite workflow is now forbidden, the backup plan is portability. Prompts, agent workflows, evaluation harnesses, system instructions, and orchestration logic belong in your repos, under your control, documented well enough that another model or an annoyed human can pick them up. If the only place your process exists is inside one vendor’s shiny box, then what you have is not a strategy. It is a hostage situation with a user interface.
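In practice, portability is mundane: prompts and workflow definitions live as plain versioned files in your repo, loaded at runtime, so no vendor console is the system of record. A minimal sketch, with a made-up file format and a temp directory standing in for the repo:

```python
# Prompts as versioned repo files, not vendor-console state.
# The JSON schema here is hypothetical; any plain-text format works.
import json
import pathlib
import tempfile

prompt_dir = pathlib.Path(tempfile.mkdtemp())  # stands in for repo/prompts/
(prompt_dir / "review.json").write_text(json.dumps({
    "version": "2024-06-01",
    "system": "You are a careful code reviewer.",
    "template": "Review this diff:\n{diff}",
}))

def load_prompt(name: str) -> dict:
    spec = json.loads((prompt_dir / f"{name}.json").read_text())
    # Every prompt file carries its own metadata, not vendor defaults.
    assert {"version", "system", "template"} <= spec.keys()
    return spec

spec = load_prompt("review")
print(spec["template"].format(diff="+ fixed off-by-one"))
```

Because the files are ordinary text under version control, any model, any provider, or any annoyed human can pick them up, and the diff history doubles as documentation of how the workflow evolved.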
If there is an outage, the backup plan is manual mode plus local compute. Manual mode means decent internal docs, runbooks that do not read like a ransom note, stronger test coverage, and engineers who can still read code without a synthetic narrator whispering in their ear. But manual mode alone is not enough if AI has become part of the factory floor. You also need a degraded-capability path you control. That means standing up local compute before the emergency, not during it. A sane local stack is not there to beat the best frontier model on Earth. It is there to keep the lights on when the provider has a bad day. Local inference can still do a lot of useful work: codebase search with brains, documentation drafting, log analysis, refactoring suggestions, test generation, offline summarization, config conversion, and all the repetitive chores that otherwise fall back onto your most expensive humans. Your backup generator is not supposed to power the whole casino. It is supposed to keep the hospital wing alive.
And standing up local compute has a second benefit: it changes the power relationship. Vendors behave differently when your answer to price hikes and outages is not panic, but traffic rerouting. Even a modest self-hosted inference capability forces you to separate what truly requires frontier intelligence from what has merely become outsourced laziness. That is a healthy exercise for any engineering organization.
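The reroute-not-panic posture reduces to a fallback chain: try the hosted provider, degrade to local inference, and tell the caller which tier answered so humans know what quality to expect. A sketch with simulated backends; the call signatures are invented for illustration:

```python
# Degraded-capability routing: hosted first, local second, honest about
# which tier produced the answer. Backends here are stubs.
from typing import Callable, List, Tuple

def with_fallback(backends: List[Tuple[str, Callable[[str], str]]],
                  prompt: str) -> Tuple[str, str]:
    errors = []
    for name, call in backends:
        try:
            return name, call(prompt)
        except Exception as e:  # outage, rate limit, timeout...
            errors.append(f"{name}: {e}")
    raise RuntimeError("all backends down: " + "; ".join(errors))

def hosted(prompt: str) -> str:
    raise TimeoutError("provider outage")       # simulate a bad day

def local(prompt: str) -> str:
    return f"[local draft] {prompt[:40]}"       # slower, dumber, yours

tier, answer = with_fallback([("hosted", hosted), ("local", local)],
                             "summarize these logs")
print(tier)  # local
```

The backup generator framing applies directly: the local backend does not need to match the frontier model, it needs to keep the hospital wing alive while the tier label keeps everyone honest about running on generator power.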
The deeper issue here is not whether one specific release disappointed people, or whether one specific vendor decided to get weird this quarter. That is all noise around the main signal. The main signal is that your development pipeline now depends, at least in part, on an external intelligence provider you do not control.
That means leadership needs to ask a much better question than “How much faster are we with AI?”
The real question is: What is our operating model when the AI gets weaker, pricier, unavailable, or disappears altogether?
Because that day is coming in one form or another. Maybe it looks like a nerfed model. Maybe it looks like token inflation. Maybe it looks like a vanished SKU, a policy wall, a rate limit, or a dashboard full of red circles and corporate apologies. Maybe it looks like a provider changing direction and leaving your workflows stranded on the side of the road.
When that happens, the companies that survive will not be the ones that chanted “vibe coding” the loudest while quietly hollowing out their engineering depth. They will be the ones that treated AI as a powerful but unstable dependency, kept enough human capability in the building to operate without it, built fallback paths before they needed them, and stood up local compute while everyone else was still drunk on demos.
That is the anti-fragile posture here.
Not “never use AI.”
Use it aggressively.
Use it profitably.
Use it to accelerate the boring parts, the ugly parts, and the backlog that should have died two fiscal years ago.
Just do not build your entire delivery promise on the assumption that someone else’s model will stay brilliant, cheap, online, and aligned with your needs forever.
That is not strategy.
That is borrowing your brain from a vendor and acting surprised when they change the terms.