The Comprehension Crisis
22 Feb 2026
When we talk about the current AI surge, the conversation almost always centres on output.
More code produced per hour. Faster delivery cycles. Higher individual throughput. The charts from major LLM providers suggest a world where the “cost of intelligence” is dropping at an extraordinary pace.
But having spent years working in and observing how systems and teams actually perform, I’m increasingly concerned that we are optimising the wrong end of the pipeline. We are so focused on output that we are neglecting comprehension: the thinking, understanding, and learning that happen before a single line is written.
When reasoning and problem-solving become cheap and instantly available, we can produce solutions faster than we can understand them. The problem is not necessarily incorrect output, but a gradual loss of understanding about why things work and where they might fail.
I’ve started thinking of this as cognitive atrophy.
The feedback loop of understanding
In DevOps terms, using AI to bypass deep thought is similar to automating a deployment pipeline without understanding the delivery system behind it. You may get a short-term increase in speed, but you weaken the feedback loops that build long-term capability.
When we consistently outsource the act of thinking, we slowly lose the ability to reason deeply about the work itself. The output may look technically correct, but our internal mental model of why it works (and where it might fail) becomes thinner over time.
AI is excellent at removing productive friction. But in engineering, friction is often where learning happens. Wrestling with a difficult bug, tracing a production incident, or working through an architectural trade-off is how intuition and system understanding are built.
If a model generates the design, the code, and even the explanation, it becomes easy to move work forward without ever really owning the logic. The system looks faster, while the capability inside the system quietly degrades.
When solutions become cheap
This is also changing what experience and seniority mean. It used to be easy to recognise seniority in the ability to implement solutions quickly and confidently. With AI support, that signal becomes weaker. The difficult part is no longer producing a solution, but deciding whether the solution makes sense in the system it will live in.
AI can generate plausible answers almost instantly, but plausibility is not the same as fit. Deciding what belongs in a particular system still requires an understanding of constraints, history, and trade-offs.
These are the kinds of decisions that teams already deal with:
- deciding where to reduce batch size
- choosing what to automate first
- balancing short-term speed with long-term stability
- understanding system constraints and unintended consequences
These capabilities don’t disappear with AI. If anything, they become the main source of advantage.
Using AI without losing comprehension
Avoiding cognitive atrophy requires being intentional about how we integrate AI into daily work. Here are a few principles that I’ve found to be useful:
Start with the problem
Before using AI, articulate the problem yourself. Define constraints, risks, and desired outcomes. This mirrors understanding your value stream before optimising it. If you cannot explain the problem clearly, the solution will not be trustworthy.
Treat AI as an accelerator
Let it draft, explore, and suggest. Keep ownership of structure, decisions, and logic. If you can’t explain the solution without the tool, you don’t truly own it.
Optimise for learning
In DevOps we know that speed without feedback creates instability. The same applies cognitively. If throughput rises while learning falls, we are accumulating technical and human debt.
Capability and comprehension
AI will unquestionably increase delivery capacity. But faster output does not automatically translate into stronger capability. A team can move quickly while its understanding of the system gradually becomes thinner.
Adopting the tool is easy. Preserving and growing the capability to understand the result is harder, and that is where the real advantage lies.