Cognitive Debt: The Hidden Cost of AI Reliance Nobody Is Measuring
By Dr David Ruttenberg | April 2026 | < 5-minute read

There's a hidden cost to AI reliance that nobody is measuring yet. I call it cognitive debt.
Not the kind you can put on a spreadsheet. Not a bug in the code or a flaw in the algorithm. Cognitive debt is what accumulates in human biology when we chronically outsource thinking, attention, and decision-making to artificial systems — and never stop to ask what that costs us downstream.
I've been working on this formally in my preprint (Ruttenberg, 2025), but the concept deserves to move out of academic language and into plain sight. Because it's already happening — in workplaces, classrooms, and homes — and the bill is coming due whether we're ready or not.
What Is Cognitive Debt?
Cognitive debt refers to the accumulated costs to attention, learning, and mental health from chronic AI over-reliance.
Think of it like financial debt: manageable in small doses, quietly compounding when left unaddressed. Every time we use AI to bypass a cognitive task we could have engaged with ourselves — every skipped struggle, every bypassed reflection, every answer retrieved rather than reasoned — there may be a neurological price.
The prefrontal cortex governs reasoning, decision-making, and working memory. It doesn't maintain itself passively. It requires use. When we consistently outsource the tasks it was designed to perform, we risk the neural equivalent of muscle atrophy. The gain is immediate. The cost is delayed. That asymmetry is what makes cognitive debt so easy to ignore — until it isn't.
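The debt analogy can be made concrete with a toy model. To be clear, this is purely illustrative — the per-task cost and the compounding rate below are made-up parameters, not measured quantities — but it captures the asymmetry: each offloaded task adds a small increment, and the accumulated balance quietly compounds when left unaddressed.

```python
# Toy model of cognitive debt as compounding interest.
# All numeric parameters are illustrative assumptions, not measured values.

def cognitive_debt(offloads_per_day: int, days: int,
                   cost_per_offload: float = 0.01,
                   compound_rate: float = 0.001) -> float:
    """Accumulated 'debt' after `days` of chronic offloading.

    Each offloaded task adds a fixed increment; the existing balance
    compounds daily, mirroring the delayed-cost asymmetry: the gain is
    immediate, the cost arrives later and grows.
    """
    debt = 0.0
    for _ in range(days):
        debt *= (1 + compound_rate)                   # yesterday's balance compounds
        debt += offloads_per_day * cost_per_offload   # today's offloaded tasks
    return debt

# The same daily habit costs disproportionately more over longer horizons:
month = cognitive_debt(10, 30)
year = cognitive_debt(10, 365)
print(round(month, 2), round(year, 2))
```

The point of the sketch is the shape of the curve, not the numbers: a year of the same habit costs more than twelve times a month of it, because the balance itself earns interest.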
What the Science Says
The most striking evidence to date comes from MIT's Media Lab. In a study by Kosmyna and colleagues (2025), participants were divided into three groups: those who used ChatGPT to write essays, those who used a search engine, and those who wrote independently. Electroencephalography (EEG) measured brain engagement throughout.
The findings were stark:
LLM users showed up to 55% reduced neural connectivity compared to brain-only participants
83% of ChatGPT users could not recall or quote passages from essays they had just written
Writing speed was 60% faster for AI users — but relevant cognitive load dropped by 32%
When AI users later tried writing without assistance, they showed weaker neural connectivity than those who had never used AI at all
The researchers named this pattern cognitive debt: the short-term productivity gain that silently accumulates long-term costs in critical thinking, memory consolidation, creativity, and independent judgment (Kosmyna et al., 2025).
This isn't a fringe finding. Gerlich (2025), in a peer-reviewed study of 666 participants across ages and educational backgrounds, found a significant negative correlation between frequent AI tool use and critical thinking ability, mediated by cognitive offloading. Younger users showed the steepest effects. Higher educational attainment offered some buffer — but did not eliminate the costs.
The Mental Health Dimension
What makes cognitive debt especially concerning is that it doesn't stay in the cognitive lane. It bleeds into mental health.
When we chronically externalize attention, memory, and judgment, we also erode cognitive self-efficacy — our confidence in our own mind's reliability. This has downstream effects on anxiety, autonomy, and identity. If your thinking is consistently mediated by a tool, what happens to your relationship with your own cognition when the tool is taken away?
This is particularly critical for neurodivergent individuals. My doctoral research (Ruttenberg, 2023) identified a specific pathway in autistic individuals — the S2MHD model — in which sensory sensitivity feeds into anxiety and fatigue, which in turn drives attentional dysregulation and distractibility. Layer chronic AI over-reliance onto that system, and you're not offering support. You're amplifying a vulnerability.
Personalized AI tools, wearables, and learning assistants can be enormously valuable for neurodivergent people. But if they are designed without sensitivity to how over-reliance compounds existing regulatory challenges, they risk creating something that looks like accommodation while functioning as pressure.
Building the Default Without Asking Whether We Should
Here's what troubles me most: we're not doing this deliberately.
We're not rolling out chronic AI reliance as conscious policy. We're doing it incrementally — one productivity feature at a time. Autocomplete here. Summarisation there. Decision assist somewhere else. Each feature is tested for engagement and efficiency. Almost none are tested for cumulative neurological cost.
That is cognitive debt accumulating in the infrastructure before anyone has decided whether to take on the loan.
My preprint (Ruttenberg, 2025) proposes a framework for measuring and accounting for cognitive debt systematically — across workplaces, educational settings, and clinical contexts. The goal is not to reject AI. It is to use it in ways that compound cognitive capital rather than erode it.
The distinction lies between passive offloading — which reduces cognitive engagement and weakens long-term learning — and active engagement — which uses AI as a scaffold for deeper processing, not a substitute for it. That distinction isn't semantic. It is neurological.
What We Should Be Measuring
If we are serious about an honest, evidence-based relationship with AI, then cognitive debt belongs on the dashboard. We should be asking, of every AI tool we build or deploy:
Does this tool reduce or increase self-directed cognitive engagement over time?
Does it strengthen or erode memory consolidation and recall?
Does it support or supplant metacognitive awareness?
Does it work equitably for users who don't fit the neurotypical default?
What happens to the user when the tool is removed?
That last question is the most important. The MIT study showed that users who relied on AI from the outset performed significantly worse when later asked to work independently — worse even than users who had never used AI at all. That is not a usage metric. It is a dependency metric.
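One hedged sketch of what a dependency metric could look like in practice — the formula and numbers below are my illustrative assumptions, not the MIT study's actual protocol or my preprint's framework — is to compare a user's unassisted performance after a period of habitual tool use against a never-assisted baseline:

```python
# Illustrative "dependency metric": the fractional shortfall in a user's
# unassisted performance after habitual tool use, relative to a
# never-assisted baseline. The formula is an assumption for this sketch,
# not the measure used in the studies cited above.

def dependency_index(unassisted_after_tool: float,
                     never_assisted_baseline: float) -> float:
    """0 means no shortfall; positive values mean the habitual tool user
    now performs worse unassisted than the never-assisted baseline."""
    if never_assisted_baseline <= 0:
        raise ValueError("baseline score must be positive")
    return 1 - unassisted_after_tool / never_assisted_baseline

# A user scoring 62 unassisted after months of tool use, against a
# never-assisted cohort scoring 80, carries a 22.5% shortfall:
print(round(dependency_index(62, 80), 3))  # 0.225
```

A usage metric asks how often the tool is opened; a dependency metric asks what remains when it is closed. That is the difference the sketch is meant to make visible.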
We're building this into the default without asking whether we should.
References
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), Article 6. https://doi.org/10.3390/soc15010006
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint. https://doi.org/10.48550/arXiv.2506.08872
Ruttenberg, D. (2023). Multimodality and future landscapes: Meaning making, AI, education, assessment, and ethics [Doctoral dissertation, UCL Institute of Education]. UCL Discovery. https://discovery.ucl.ac.uk/id/eprint/10210135/
Ruttenberg, D. (2025). Cognitive debt: The cumulative cognitive cost of AI-augmented knowledge work [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/yhvec
About the Author
Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.