It’s Not Artificial Intelligence I’m Scared Of – It’s Natural Stupidity
By Andrew Rogerson
Reports warn that our increasing reliance on AI is reducing human intelligence. I can see the logic. How often do you read a piece of work in English - UK English, that is - littered with American spellings?
“But I ran it through spellcheck,” comes the usual defence. Perhaps. But in doing so, the writer abdicated responsibility for their work.
But what exactly is the ‘intelligence’ we are losing? Is it our memory? Our ability to approach a question and problem-solve? Our reasoning? Our memory (!)? Our creativity — or some vital combination of all these?
Take navigation. Articles have highlighted how Google Maps is eroding humanity’s natural wayfinding skills. Those over forty-five may remember the older way: planning routes the night before using an A–Z road atlas, scribbling road numbers and turn directions on Post-it notes, sticking them to the dashboard, and hoping there were no unexpected roadworks. If roadworks did appear, out came the road atlas, precariously balanced on the lap while trying to drive.
By contrast, deference to Google Maps relieves us of that task. Yet were the old methods so sensible, or so safe? Perhaps the shift allows drivers to focus on the more critical job: driving. Crucially, though, the driver still holds the wheel. The driver still decides whether to turn left or right — and the AI recalibrates accordingly. The human leads; the AI follows.
Isn’t that the true promise of intelligent systems? To enhance safety, reduce danger, and improve efficiency — but not at the cost of human agency.
We’ve faced this dilemma before. Did humanity abandon arithmetic when the pocket calculator arrived? Or did we learn to use it for what it was: a tool, not a crutch?
The real threat to humanity isn’t the advance of AI into dystopian futures — it’s the quiet descent of humans who forget how to learn, how to reason, and how to grow.
It isn’t AI’s fault if we can no longer spell or select the right homophone. It is ours if we fail to use one of the most powerful tools ever created to deepen our own intelligence.
We now have at our fingertips a machine trained on more knowledge than any individual could absorb in a lifetime, but are we using it wisely? Are we trading away our ability to think for ourselves, when we could be using AI to teach us — to explain its reasoning, its choices, its phrasing, its priorities?
Delegation is acceptable. Abdication is not.
It is worth pausing here to make a distinction - one that goes back to the point I raised at the beginning.
I have used AI during the shaping of this work in much the same way an author might use a trusted proof-reader, or a scholar might seek the input of an expert peer.
I dictated the form, the content, and the grammar. The tool - powerful though it is - remains a tool. The thoughts, the structure, and the direction are entirely my own. The AI has not generated these ideas. It has not led the process. It has offered alternative phrasings and improvements where I have chosen to seek them.
And that distinction matters.
It is the difference between pressing “F7” to run a spellcheck and blindly accepting every suggestion, and pausing - weighing - choosing whether a correction is right, whether a phrasing is deliberate, whether advice is appropriate or should be declined.
The existence of the tool is not the problem. The abdication of human judgement is.
The risk is not in the power of the tool itself. The risk lies in forgetting that it is a tool.
When a tool suggests a better phrase, a sharper structure, a clearer delivery, true human intelligence must push back against the temptation to simply accept it. Intelligence is not about rejecting help. It is about remaining present in the act of creation, learning, and leadership.
AI can be a powerful partner in our growth, or a subtle crutch for our decline.
The choice - and the responsibility - remains ours. It always has.
