The Automation Paradox: When the Tools That Help Us Make Us More Vulnerable
AUTHOR: María Sáez
In 1983, the psychologist Lisanne Bainbridge described what has come to be known as the automation paradox, a seemingly contradictory phenomenon: the more sophisticated and reliable an automated system becomes, the more crucial the human contribution is to its success. Paradoxically, even as automation reduces routine workload, it raises the importance of human skill for the moments when the system fails or meets a situation it was not designed for.
When pilots forget how to fly
One of the most dramatic manifestations of this phenomenon occurred on June 1, 2009, when Air France Flight 447 plunged into the Atlantic Ocean, killing all 228 people on board. Investigators found that, after a minor failure in the airspeed sensors, the pilots were left completely disoriented when the autopilot automatically disengaged. Accustomed to letting the system manage the flight, they had lost the practice and confidence needed to fly the aircraft manually in a crisis.
This accident perfectly illustrates the automation trap: modern pilots can log hundreds of flight hours without touching the manual controls, so when they need those skills most, in an emergency, the skills have gone rusty. Commercial aviation has become extraordinarily safe thanks to automation, but that very safety has created a new vulnerability.
LLMs: The autopilot of knowledge
Today we are living through a similar revolution with Large Language Models (LLMs) such as ChatGPT, Claude, or Gemini. These tools have democratized capabilities once reserved for experts: writing code, drafting texts, analyzing data, solving complex problems. In a matter of seconds, they can generate solutions that would take a human hours or days to develop.
In the programming world, tools like GitHub Copilot and Claude Code have revolutionized the way we write code. They can complete entire functions, suggest optimized algorithms, and even explain complex code. For many developers, they have become indispensable assistants that significantly speed up the development process. But this is exactly where the automation paradox begins to manifest itself in a subtle but troubling way.
When our brains get too comfortable
The parallel with aviation is not coincidental. A 2025 study by researchers at Microsoft and Carnegie Mellon University documented for the first time how the use of generative AI tools directly affects our critical thinking. The results are as revealing as they are troubling.
The study, which analyzed 936 real-world examples of AI use at work by 319 knowledge workers, revealed a disturbing pattern: workers who relied the most on the accuracy of AI assistants thought less critically about the conclusions of these tools. In other words, the more we rely on ChatGPT, Copilot or Claude, the less we use our own analytical capabilities.
Ultimately, the researchers observed that when humans increasingly rely on generative AI in their work, they use less critical thinking, which may “result in the deterioration of cognitive faculties that should be preserved.”
This study also shows how the use of AI is fundamentally changing the nature of critical thinking: from information gathering to information verification; from problem solving to integration of AI responses; and from task execution to task monitoring.
The risk of cognitive dependence
If the more you use AI, the less you use your brain, then, when you run into a problem that AI can’t solve, will you have the skills to do it yourself?
The researchers identified what they call "a key irony of automation: by mechanizing routine tasks and leaving exception handling to the human user, you deprive the user of routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions arise."
This cognitive atrophy is not merely theoretical. Addy Osmani, an engineer at Google, describes in his article Avoiding Skill Atrophy in the Age of AI real cases of developers who experience what we might call "dependency paralysis" when they cannot use AI: they cannot keep programming efficiently, they feel "lost" without the automatic suggestions, and their development speed drops drastically.
In the specific context of programming, this is of particular concern. Junior developers who take “the easy path” may reach a plateau early, lacking the depth needed to grow into senior roles. If an entire generation of programmers “never know the satisfaction of truly solving problems on their own,” as the Microsoft study warns, we could be creating a long-term skill crisis.
Finding the balance: How to use AI without losing our skills
None of this means we should abandon AI tools. As experts point out, it’s a matter of using them wisely, so that we don’t “outsource not only the work itself, but our critical engagement with it.”
Here are 5 specific strategies to protect your cognitive abilities while making the most of the power of AI:
- Implement the 80/20 rule. Use AI for 80% of the routine work, but make sure that the more complex or creative 20% you solve yourself. This will keep your cognitive muscles active.
- Adopt an active verification method. Never accept the AI’s first answer without question. Ask yourself: Does this make sense? What might be missing? Are there other perspectives?
- Practice intentional disengagement. Spend some time regularly performing tasks without AI assistance. Like a pilot who must maintain manual flight hours, keep your basic skills sharp.
- Build in critical-thinking pauses. Set specific points in your workflow where you stop and critically evaluate what the AI has produced before continuing.
- Cultivate active curiosity. Instead of simply accepting answers, ask the AI “why?” and “how did you come to this conclusion?”. This keeps your analytical mind active.
Conclusion
The automation paradox is not new, but with generative AI we are at a turning point. As the history of aviation reminds us, more advanced technology requires more skilled humans, not less. The key is maintaining balance: harnessing the power of AI to amplify our capabilities, not to replace them.
The goal is not to reject these tools, but to use them in ways that make us stronger, not weaker.
Real productivity doesn’t come from doing things faster; it comes from doing things better while maintaining our ability to think, analyze and create independently. In an era where AI can do almost anything, the most valuable skill may simply be knowing when not to use it.