
Generative AI platforms such as ChatGPT have entered classrooms, universities, and homework routines with astonishing speed and little attention to the long-term consequences. A recent Canadian news report, aired on CBC, also revealed that teachers have mostly been left to fend for themselves.
The current infatuation with AI is part of a recurrent pattern, but the latest educational fad is far more fundamental in its impact on teaching and learning in classrooms. It’s time to ask: Are these tools eating away at our brain power and leading schools astray?
Technology evangelists and educators espousing ‘21st century learning’ tout its ability to save time, individualize instruction, and increase access to information. But little has been done to assess its effects on students’ ability to think independently, write clearly, and engage with knowledge deeply.
What’s encouraging is the fact that leading cognitive scientists, evidence-based researchers, and experienced frontline teachers are beginning to right the balance.
There is mounting evidence that the emergence of ChatGPT and similar AI tools is short-circuiting deeper learning, eroding critical thinking capacities, and undermining the teaching of writing. Our brains, it turns out, need knowledge to function at their best.
Generative AI, built on large language models, is proving to encourage passivity in learners. Leading cognitive scientist Barbara Oakley, an American expert on learning how to learn, warns that “mental effort is essential to build real understanding.” Meaningful learning, according to Oakley, is built through deliberate practice, cognitive struggle, and retrieval of knowledge — all processes undermined when students delegate intellectual labour to AI tools.
By bypassing the productive discomfort associated with writing and problem-solving, students risk becoming consumers of content rather than producers of thought. The process of wrestling with an argument, organizing one’s thoughts, and finding the right words is foundational to critical thinking.
Generative AI, however, short-circuits this developmental trajectory by offering polished outputs without much heavy lifting. If students become accustomed to outsourcing the most demanding aspects of thinking and writing, they lose the capacity to do it themselves.
American education commentator Natalie Wexler sees AI as the latest educational trend that emphasizes skills over content, inhibiting our capacity to grasp and understand knowledge in context. True critical thinking, she argues, cannot be taught in isolation from a deep base of knowledge. In her view, students need a well-stocked mental library of facts, concepts, and contexts to think critically and write effectively. Generative AI, by providing surface-level responses to prompts, may reinforce the illusion that knowledge is readily available and easily synthesized, even when it lacks depth or coherence. Students may come to view knowledge acquisition as unnecessary, assuming that AI can fill in the gaps. This undermines both the cognitive effort required to develop coherent explanations and the long-term retention that underpins higher-order thinking and genuine problem-solving.
British educator and researcher Carl Hendrick, an education professor at Academica University of Applied Sciences, adds another layer to this critique by pointing to the performative nature of much AI-assisted writing. In his work on educational psychology and cognitive learning, Hendrick notes that true understanding is often masked by “pseudo-proficiency” — the ability to mimic knowledge without possessing it. Generative AI exacerbates this issue by allowing students to submit text that appears articulate and logically structured, even when it reflects little genuine understanding. It’s what’s in your head that really matters.
Hendrick has also exposed the phenomenon of students giving the right answers for the wrong reasons. In pedagogical lingo, that might be described as the practice of speaking through AI-generated prose without owning or really comprehending the ideas being expressed. This not only corrodes academic integrity but also detaches students from the reflective practice essential to developing a personal voice in writing.
Another major consequence of generative AI is the potential degradation of writing by diminishing the writer’s craft. The practice of writing serves as a tool for thinking — what Carnegie Mellon University writing experts Linda Flower and John Hayes described as a recursive process involving planning, translating, and reviewing.
When students rely on AI-generated texts, they miss out on this iterative engagement with ideas. The act of writing becomes mechanical rather than intellectual, transactional rather than transformational. This loss is not trivial: writing is not merely a means of communication, but a way of thinking. It is through writing that many learners discover what they think, clarify their positions, and challenge assumptions.
Overreliance on generative AI has a more subtle but notable effect: it promotes intellectual conformity. Since AI tools are trained on vast datasets of existing language patterns, their outputs often reflect mainstream, conventional thinking. This raises concerns about the homogenization of student work and the suppression of dissenting or original perspectives. True critical thinking involves questioning norms, exploring ambiguity, and entertaining multiple viewpoints — all practices that may be dulled by AI systems optimized for coherence and predictability rather than for original thinking that challenges technocratic norms.
Educational technologies are never neutral; they come with built-in pedagogical assumptions and consequences. Educators must guide students to use AI as a scaffold rather than a crutch, encouraging them to think critically with AI rather than passively accepting its outputs. Overreliance on generative AI risks turning schools into sites of mechanistic interactions and producing students who are adept at mimicry but impoverished in judgement.
Resisting or banning AI outright is out of the question, but the time has come for a pause in the onslaught. The leading AI skeptics have got it right: genuine learning is built on effort, knowledge, and reflection — three things generative AI cannot supply. The latest classroom innovation may well be leading us astray and actually hindering the core mission of schooling. Surely our mission is not to produce technically fluent automatons, but rather living minds capable of becoming curious, thinking, and responsible citizens.
Paul W. Bennett, Ed.D., is director of the Schoolhouse Institute, senior fellow of Education Policy at Macdonald-Laurier Institute, and chair/national coordinator of researchED Canada.
National Post