When Architecture Becomes Fluid
A few days ago, AWS announced that the AWS Serverless Agent Plugin is now in the Anthropic plugins marketplace. Install it in Claude Code, Kiro, or Cursor, and your AI agent can analyze your codebase, recommend services, generate infrastructure as code, estimate costs, run security scans, and deploy. In the same two-week window, AWS shipped two more capabilities: one that lets agents initialize SAM projects, wire up event-driven architectures, enforce least-privilege IAM, and instrument observability from the start; and another that guides developers through building checkpointed, durable Lambda workflows that can run for up to a year.
The pitch was "best practices by default": security, observability, and resilience baked into the AI-guided workflow from day one. The agent doesn't just write the code; it now architects the application.
I want to take that claim seriously and follow it somewhere uncomfortable.
***
For most of my career, architecture was the thing you fought about in design reviews. Microservices or monolith. Event-driven or request-response. Step Functions or roll your own. Saga pattern or two-phase commit. These were consequential decisions because they were hard to reverse, expensive to get wrong, and shaped how your system would fail for years to come. The architect's value was in knowing which tradeoffs to make given the specific constraints of a team, a product, a moment.
But here's the thing. If an agent can scaffold an event-driven architecture with EventBridge, SQS, and DynamoDB Streams in the time it takes me to open a Miro board, and if a different agent can rearchitect that same system next quarter when the requirements shift, then the architecture starts to matter less as a decision and more as a snapshot. It's whatever the system happens to be shaped like right now. It's not the thing you chose. It's the thing the agent chose on your behalf, and it might choose differently tomorrow.
I went looking for people making this argument: that architecture is becoming an implementation detail rather than a design decision. I found two camps.
The first camp says architecture matters more now because AI collapses feedback loops. When you can scaffold an API, generate tests, and wire up monitoring in minutes, bad architectural decisions surface immediately. AI doesn't eliminate the need for architecture; it amplifies the cost of getting it wrong. That's true, but I think it's backward-looking, because it assumes humans will continue to be the ones making and evaluating those decisions.
The second camp talks about architects moving from "in the loop" to "on the loop" to eventually "out of the loop," shifting from making decisions to designing the system's ability to design itself. That's closer, but it still frames the architect as the essential human in the picture, just at a higher level of abstraction. It imagines a gentle transition rather than a structural shift.
Neither camp, as far as I'm aware, follows the logic to its endpoint yet.
***
Here's where I think this goes.
If agents can architect, deploy, maintain, and rearchitect systems to deliver a given function, then the architecture becomes a runtime variable. Nobody "chooses" it any more than you choose which TCP packets get retransmitted. The agent optimizes and the system runs. That’s it. The patterns shift underneath you based on load, cost, failure conditions, whatever the agent is optimizing for at that time. Architecture stops being the thing you decided in a design review six months ago and becomes something closer to a continuously evolving state that the agent manages.
At that point, the question "what's the architecture?" becomes roughly as interesting as "what's the current state of the routing table?" Technically answerable, practically irrelevant to most of the humans involved.
And if architecture becomes fluid, something that agents can swap to maintain function, then the whole discipline of making architectural decisions starts to look like something temporary. Not because the decisions were wrong, but because the decisions stop needing to be made by humans at all.
I know this sounds like a story about architects losing their jobs, and it is, partly. But it's also a story about something much more delicate.
***
An agent that maintains function "at all cost and at all architecture" is optimizing for one thing: keeping the system running. And chances are that it will eventually be very good at it. It will rearchitect around failures. It will find workarounds for degraded dependencies. It will swap patterns, add retries, reroute traffic, spin up compensating services. From the outside, the system will look healthy. The dashboards will be green. The SLOs will be met.
But "running" and "healthy" are not the same thing.
The agent is unlikely to notice that the system has drifted so far from anyone's mental model that no human can reason about it anymore. It won't flag that the reason it keeps having to rearchitect is that an upstream dependency changed its data contract six months ago and nobody told anyone. It won't recognize that the system is technically meeting its SLOs while slowly becoming incomprehensible.

Of course, in theory, agents could track contract changes across teams: pull the latest API spec, reconcile, adapt. And mostly they will. But "mostly" is where the trouble lives. When agents handle 99% of cross-team coordination flawlessly, the 1% they miss becomes invisible precisely because everything else is compensating for it. The system holds together until it doesn't, and when it doesn't, every successful compensation that masked the gap becomes part of the blast radius.
This is a version of the prevention paradox running at machine speed.
When human operators kept systems running, there was a natural limit: the operators themselves. They got tired. They complained. They filed tickets. They said "this is getting ridiculous" in postmortems. The friction of human maintenance was a signal. It was an ugly, expensive, inefficient signal, but it told you something about the health of the system that no dashboard could capture. The operator's frustration was information about the gap between how the system was supposed to work and how it actually worked.
Agents don't get frustrated. They don't have the felt sense that something is getting ridiculous. They just keep compensating. And every successful compensation is a small act of hiding the true state of the system from the humans who are nominally responsible for it.
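To make that masking effect concrete, here's a toy simulation. Everything in it is hypothetical: the failure rates, the compensation rate, and the mechanics are illustrative numbers, not a model of any real agent or AWS service. The point it demonstrates is narrow: when an agent absorbs almost all failures, the dashboard stays green while the underlying problem compounds.

```python
import random

random.seed(42)

def run_simulation(days: int, compensation_rate: float) -> tuple[float, int]:
    """Toy model: an upstream contract mismatch causes requests to fail
    with a probability that slowly worsens over time. An agent
    'compensates' (retries, reroutes, patches around) most failures,
    so only the residue reaches the dashboard.
    Returns (observed success rate, failures the agent silently absorbed)."""
    observed_failures = 0
    hidden_failures = 0
    requests = 0
    for day in range(days):
        base_failure_rate = 0.01 * (1 + day / 30)  # mismatch worsens monthly
        for _ in range(1000):                      # requests per day
            requests += 1
            if random.random() < base_failure_rate:
                if random.random() < compensation_rate:
                    hidden_failures += 1    # compensated: invisible to humans
                else:
                    observed_failures += 1  # surfaces on the dashboard
    return 1 - observed_failures / requests, hidden_failures

visible_slo, masked = run_simulation(days=180, compensation_rate=0.99)
print(f"dashboard success rate: {visible_slo:.4%}")
print(f"failures the agent silently absorbed: {masked}")
```

Run it and the observed success rate stays comfortably above four nines even as the agent quietly eats thousands of failures: the growing mismatch never crosses the threshold of human attention.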
In my last newsletter, I wrote about David Woods' observation that AI doesn't solve your problems, it moves them somewhere you can't see. This is the next step in that sequence. AI didn't just move the problems; now it moves the architecture itself. And when the architecture is fluid, managed by agents, shifting underneath you to maintain function, the gap between what the system is doing and what you think it's doing doesn't shrink. It grows, mostly because the agents are papering over it constantly, and every successful paper-over makes the gap a little harder to see.
***
There's a concept from resilience engineering that I keep returning to: the WAI-WAD gap, Work-As-Imagined versus Work-As-Done. In every organization, there's a difference between how people think the system works and how it actually works. The interesting failures happen in that gap.
In a world where architectures are fluid, managed as a runtime variable by agents, the WAI-WAD gap takes on a new dimension. It's no longer just that humans have an outdated mental model of a stable system. It's that the system itself is changing, continuously, underneath a mental model that was never designed to track continuous change. The architecture you reviewed last quarter might bear no resemblance to what's running today. And nobody noticed because the function never degraded.
This is what makes the "best practices by default" pitch from the AWS announcement both true and misleading. The practices are there, security and observability are instrumented, resilience patterns are in place. At time of deployment, the system is well-architected. But architecture is not a point-in-time property anymore. It's an ongoing relationship between a system, its operators, and its environment. And that relationship degrades when nobody can see the system clearly anymore.
***
I don't think the response to this is to resist agents managing architecture. That ship has sailed. Developers will use them because they're genuinely useful and the productivity gains are real.
But I think the response is to recognize that what matters is shifting. The important question was never really "what architecture should we use?" It was always "is this system healthy?" Those two questions used to be tightly coupled because architecture was stable and you could reason about health by reasoning about structure. If the architecture was sound, the system was probably healthy. If the architecture had known weaknesses, you knew where to look.
When architecture becomes fluid, that coupling breaks. You can no longer infer health from structure because the structure keeps changing. Health becomes something you have to measure directly, continuously, and independently of whatever the agents are doing underneath.
That's a different discipline than architecture. It's closer to what I'd call operational awareness. It’s the ability to see the gap between what the system is doing and what you think it's doing, even when (especially when) the metrics say everything is fine. It requires understanding not just the function but the cost of the function, the drift of the function, the comprehensibility of the function.
Agents that architect applications are a real and meaningful capability. But the thing they're automating was never the hard part. The hard part was always understanding whether the system you built was actually doing what you thought it was, in the way you thought it was, at a cost you could sustain. That question just got harder, not easier.
//Adrian