Most of us have shipped code we didn't fully understand. We generated it, tested it, pushed it. It worked. That's an honest admission, and if your team is working with AI coding tools, it's probably familiar.
The speed at which we now produce software has outpaced our capacity to understand it. But this pattern isn't new.
Every generation of the software industry has hit a point where system complexity exceeded the ability to manage it. The 1960s brought the software crisis. The 1970s brought new languages. The 1980s brought personal computing. Each decade introduced tools that resolved the current bottleneck and enabled the next, larger one.
What's different with AI isn't the pattern—it's the scale. We can now generate code as fast as we can describe it. Production capacity is effectively unlimited. Comprehension capacity is not.
Conflating easy with simple
There's an important distinction that tends to get lost.
Easy means low friction: describe what you need, and an AI agent produces it. No significant effort required.
Simple means low coupling: each part of the system does one thing, doesn't entangle unnecessarily with others, and can be understood and modified without archaeology six months later.
Easy lets you add things quickly. Simple lets you understand, and change, what you've built. Making something easier is trivial. Making something simple requires deliberate design decisions.
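The distinction can be sketched in code. This is a hypothetical illustration, not from the original article: both versions compute an order total, but one entangles pricing, discounting, and tax in a single block, while the other keeps each concern separate.

```python
# "Easy" to write, hard to change: pricing, discount, and tax
# are entangled in one function with hard-coded rules.
def total_easy(items, user):
    total = 0
    for price, qty in items:
        total += price * qty
    if user.get("vip"):                # discount rule buried inline
        total *= 0.9
    return round(total * 1.08, 2)      # tax rate hard-coded

# "Simple": each part does one thing and can change independently.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def apply_discount(amount, user):
    return amount * 0.9 if user.get("vip") else amount

def apply_tax(amount, rate=0.08):
    return amount * (1 + rate)

def total_simple(items, user):
    return round(apply_tax(apply_discount(subtotal(items), user)), 2)
```

Both produce the same number today; only the second can absorb a new discount rule or tax regime without archaeology.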
AI is the ultimate easy button. It's made the fast path so frictionless that teams rarely pause to consider the other one.
How a system accumulates complexity
It starts innocuously. The team asks an AI agent to build an authentication system. It looks clean. They iterate, add integrations, handle edge cases. Each request is fast and the output looks reasonable.
But by the twentieth iteration, nobody is designing anymore; they're managing a context that's grown complex enough that the original constraints are blurry. There's code from abandoned approaches that never got cleaned up. Tests that were "fixed" by making them pass rather than by solving the underlying issue. Three different approaches to the same problem coexisting in the same system because the team kept changing direction.
There's no friction against bad decisions. The agent doesn't flag poor architecture. The system simply mutates to satisfy the latest request, accumulating hidden complexity until it becomes expensive to change anything.
What AI can't distinguish
Any production system contains two kinds of complexity. The first is intrinsic: the actual business logic—authentication, order fulfillment, payment consistency. The second is accidental: workarounds, compatibility layers, decisions that made sense two years ago and haven't been revisited since.
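A hypothetical sketch of how the two kinds look side by side in real code. Nothing here is from the original article; the gateway workaround is an invented example of the sort of accidental complexity that outlives its reason:

```python
def charge(order):
    # Intrinsic complexity: an actual business rule.
    # Never charge an empty order.
    if not order["items"]:
        raise ValueError("cannot charge an empty order")

    amount = sum(item["price"] for item in order["items"])

    # Accidental complexity: a workaround for a payment gateway
    # (since replaced) that rejected zero-cent amounts.
    # The gateway is gone; the code stayed.
    amount_cents = int(round(amount * 100)) or 1

    return {"order_id": order["id"], "amount_cents": amount_cents}
```

To an engineer who knows the history, the `or 1` is dead weight. To an agent reading the file, both lines look equally load-bearing.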
Experienced engineers can tell them apart. They know which parts of the system reflect core business requirements and which are just how someone solved something under pressure in 2022. An AI agent can't make that distinction. Every existing pattern looks equally valid to preserve.
This matters most when a system needs to change. When teams try to significantly evolve a system built this way, the cost isn't just engineering time—it's the accumulated weight of decisions nobody fully understood in the first place. The system resists change not because the problem is hard, but because the solution was never really comprehended.
Thinking cannot be delegated
The approach that works is straightforward but requires inverting the typical workflow: invest time in understanding and design first, then generate code.
Map what exists. Define clearly what needs to change and why. Make the key decisions before any implementation starts. Then, with a validated plan, let AI handle the execution.
This isn't about slowing down—it's about making the fast parts actually fast. A well-specified implementation task takes the agent a fraction of the time, produces more predictable output, and requires far less review. The bottleneck moves from "generating code" to "making good decisions," which is where it should be.
What cannot be delegated is judgment. AI accelerates execution. The decisions that determine whether a system remains maintainable and extensible long-term remain human.
The second-order risk
As code generation becomes nearly free, the barrier to entry for building software drops significantly. More products are being built by teams without deep engineering experience—teams that can produce convincing early versions, but haven't yet encountered the failure modes that emerge at scale or over time.
The divergence shows up when a system needs to grow, handle real load, or recover from an unexpected failure. Teams that haven't maintained production systems through these moments don't recognize which early decisions create serious downstream problems. They don't flag excessive complexity while generating it.
There's something subtler too. Each time a team skips the design phase to maintain velocity, they're not just adding code nobody fully understands—they're losing the ability to recognize when something is becoming harder to manage than it should be. That signal requires understanding your own system to detect.
For technical leaders, the implication is practical: the value of engineering experience isn't in writing code faster. It's in knowing which decisions will cost you later, and making them deliberately before the system makes them for you.
The question isn't whether AI will be used to build software. That's settled. The question is whether the people using it will continue to understand what they're building—and whether the organizations funding that software will know how to tell the difference.
Liquid Team