
The Two-Speed Brain in the AI Era: Why Ethics Starts With Attention


Last week, I watched our daughter Phoebe solve a coding problem using ChatGPT. She'd ask a question, get an answer, implement it, hit an error, ask again. The loop took seconds. Watching her work, I realized something unsettling: the machine was operating at its speed, not hers.

And that's the quiet crisis nobody's talking about.

The Speed Mismatch We're Not Discussing

Here's the uncomfortable truth: AI capabilities are doubling every seven months while human cognition has operated at roughly the same capacity for 50,000 years (Bostrom, 2023). We're not getting faster. We're not evolving to keep up. We're the same biological processors trying to interface with exponentially accelerating technology.

Researchers call this the "centaur era": humans and AI working together, but with an ever-widening capability gap (Tegmark, 2023). The centaur sounds noble until you realize the human half is becoming the bottleneck.

And when humans become bottlenecks, we start taking shortcuts. Ethical shortcuts. Attention shortcuts. Cognitive shortcuts.

Your Brain Has Two Speeds (And AI Knows It)

Nobel laureate Daniel Kahneman taught us about System 1 (fast, intuitive, automatic) and System 2 (slow, deliberate, effortful) thinking (Kahneman, 2011). System 1 gets you through most of your day. System 2 is what you use for complex decisions, ethical reasoning, and critical analysis.

Here's the problem: AI is engineered to feed System 1. Instant answers. Frictionless decisions. Every doubt instantly soothed. Every question immediately answered. The technology is designed to eliminate the pause: the very pause where System 2 lives, where ethics happens, where critical thinking emerges.

Young people are now using AI as a "second brain," and psychologists are documenting the shift: thinking no longer happens in solitude but with an invisible cognitive partner (Sparrow et al., 2024). Sounds efficient, right? Except we're already seeing declines in working memory, weakened critical thinking skills, and shorter attention spans (Ward et al., 2023). 

[Image: Split view of a human brain and an AI chip, illustrating two-speed thinking in the AI era]

Ethics Requires Friction

My PhD research focused on sensory sensitivity in autistic adults. One thing I learned: sensory overload isn't just uncomfortable; it shuts down higher-order processing. When your nervous system is overwhelmed, you can't think clearly. You can't reflect. You operate in pure survival mode.

AI-saturated environments create a similar overload, just cognitive instead of sensory. When every decision can be "readily made by an obliging machine," we skip the friction that fosters growth (Carr, 2020). We skip the struggle that builds resilience. We skip the discomfort that forces us to engage our values.

And here's where it gets dangerous: ethics doesn't scale at AI speed. Ethical reasoning requires deliberation. It requires considering multiple stakeholders, imagining consequences, weighing competing values. You can't do that in the 0.3 seconds it takes ChatGPT to generate an answer.

The Attention Economy Meets the Cognition Gap

We've spent a decade worrying about the "attention economy": platforms engineered to capture and monetize our focus. But we're entering something more insidious: the cognition gap economy.

As AI capabilities accelerate, human cognitive superiority increasingly looks like a historical artifact (Yampolskiy, 2024). The risk isn't that machines will replace us. The risk is that we'll become too slow to meaningfully participate. Too slow to intervene. Too slow to course-correct when AI systems make decisions that impact lives.

This isn't science fiction. It's happening now in hiring algorithms that screen resumes in milliseconds, predictive policing systems that flag individuals before crimes occur, and medical AI that recommends treatments faster than clinicians can review patient histories.

The bottleneck isn't the technology. It's human attention, human judgment, human ethics: operating at human speed.

Why This Matters for the Book (And for All of Us)

This tension between AI speed and human cognition sits at the heart of the work I'm building toward. We need frameworks that preserve human agency not despite the speed mismatch, but because of it. We need technology designed around human cognitive rhythms, not against them.

That means building in deliberate friction. Requiring human-in-the-loop checkpoints. Designing AI systems that respect the pace of ethical reasoning. Creating what I call "attention-aligned AI": technology that operates at speeds compatible with human flourishing, not just efficiency metrics.

It also means advocating for neurodiversity-informed design. If we're worried about cognitive overload for neurotypical users, imagine the impact on autistic individuals, people with ADHD, or anyone with processing differences. Building for the most cognitively vulnerable builds better systems for everyone. 
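To make "deliberate friction" concrete, here is a minimal sketch of what a human-in-the-loop checkpoint could look like in code. This is purely illustrative: `with_friction`, its review window, and the injectable clock are my own hypothetical names for this post, not an existing API or a proposal for a specific product.

```python
import time

def with_friction(decide, min_review_seconds=2.0, clock=time.monotonic):
    """Wrap an AI recommendation so it cannot be accepted until a
    minimum review window has passed, forcing a System 2 pause.
    (Hypothetical sketch; the clock is injectable for testing.)"""
    issued_at = clock()  # moment the recommendation was surfaced

    def accept():
        elapsed = clock() - issued_at
        if elapsed < min_review_seconds:
            # Refuse the fast path: the human has not had time to review.
            raise RuntimeError(
                f"Checkpoint: {elapsed:.1f}s of review so far; "
                f"{min_review_seconds:.1f}s required before accepting."
            )
        return decide()

    return accept

# Example: an AI "recommendation" gated behind a 2-second review window.
accept = with_friction(lambda: "approve loan", min_review_seconds=2.0)
```

The design choice worth noticing is that the friction lives in the interface, not in the model: the recommendation is computed at machine speed, but acceptance is throttled to human speed.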

Questions for You

Where in your work or life do you feel the speed mismatch most acutely? Are there places where AI is pushing you to decide faster than you're comfortable with?

And here's the harder question: When was the last time you deliberately slowed down an AI-assisted decision to engage your System 2 thinking?

I'd love to hear your thoughts. Drop a comment or reach out: this conversation is just beginning, and I'm learning as much from readers as I'm sharing.

If this resonated, consider subscribing to follow along as I build out these ideas into something larger. More to come on attention-aligned AI, neurodiversity-informed design, and keeping humans meaningfully in the loop.

About the Author

Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.

Connect on Substack: https://substack.com/@drdavidruttenberg
LinkedIn: https://linkedin.com/in/davidruttenberg
Instagram: @drdavidruttenberg
X/Twitter: @drdavidruttenberg


References

Bostrom, N. (2023). Deep utopia: Life and meaning in a solved world. Oxford University Press.

Carr, N. (2020). The shallows: What the internet is doing to our brains (10th anniversary ed.). W. W. Norton & Company.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Sparrow, B., Liu, J., & Wegner, D. M. (2024). Cognitive consequences of having information at our fingertips. Cognitive Research: Principles and Implications, 9(1), 15–32.

Tegmark, M. (2023). Life 3.0: Being human in the age of artificial intelligence (Updated ed.). Vintage Books.

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2023). Brain drain: The mere presence of one's own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 8(2), 140–154.

Yampolskiy, R. V. (2024). AI safety and security: Existential risks from artificial general intelligence. AI & Society, 39(1), 1–18.



 
 
 
