Dr David P Ruttenberg
PhD, FRSA, FIoHE, AFHEA, HSRF
Neuroscientist & AI-Ethics Specialist
Honorary Senior Research Fellow & Fulbright Specialist
Creator of Neuro-adaptive/Sensory Sensitivity Technologies
University College London: Institute of Education | Institute of Cognitive Neuroscience | Institute of Healthcare Engineering
University of Cambridge: Centre for Attention Learning & Memory | Cognition & Brain Sciences Unit
Contact: tel. +1.561.206.2160 | email: david@davidruttenberg.com | email: d.ruttenberg@ucl.ac.uk | LinkedIn | UCL Profile
I help organisations deploy AI that enhances human cognition—ethically and inclusively.
Blog
![[HERO] From Compliance to Care: A Simple Ethical AI Checklist for Leaders Who Hate Checklists](https://cdn.marblism.com/lBwLMfTzpKe.webp)
From Compliance to Care: A Simple Ethical AI Checklist for Leaders Who Hate Checklists
I'll admit it: I hate checklists. They feel reductive. Bureaucratic. Like someone's trying to turn complex human decisions into a paint-by-numbers exercise. But here's the paradox: when it comes to ethical AI, most leaders need something concrete. Not because they're lazy, but because "be ethical" is about as actionable as "be innovative." So let's try this: a checklist that doesn't feel like compliance theater. A framework that moves from checking boxes to caring about outcomes…
![[HERO] Why “Ethical AI” is More Than a Compliance Checklist (And Why CXOs Should Care)](https://cdn.marblism.com/dQE7TkUpssE.webp)
Why ‘Ethical AI’ is More Than a Compliance Checklist (And Why CXOs Should Care)
I am a parent first. Our daughter, Phoebe, is 23. She is autistic, ADHD, and epileptic. We have done the long stretch of diagnoses, therapies, school meetings, and the kind of ER visits that erase your sense of time. She has also survived two craniotomies. When you have lived that, the words “trust,” “safety,” and “accountability” stop being abstract nouns. They become the whole point. So when I hear leaders talk about “ethical AI” like it is a compliance chore, I have a hard time…
![[HERO] 7 Mistakes You’re Making with AI Governance (And How to Fix Them)](https://cdn.marblism.com/TA9kkaqR_sK.webp)
7 Mistakes You’re Making with AI Governance (And How to Fix Them)
Here’s the uncomfortable truth about AI governance: most organizations are getting it wrong. They’re getting it wrong in ways that cost money. They’re getting it wrong in ways that create legal exposure. And they’re getting it wrong in ways that undermine the very innovation they’re trying to achieve. The numbers don’t lie. Despite massive investments in artificial intelligence, only 5% of companies successfully scale their AI projects (Fountaine et al., 2019)…
![[HERO] AI Risk Management in 2026: Moving Beyond the Compliance Checklist](https://cdn.marblism.com/20h-mAuYZXl.webp)
AI Risk Management in 2026: Moving Beyond the Compliance Checklist
I’m going to start somewhere more human than a checklist: with our daughter, Phoebe. She’s 23, autistic, ADHD, and epileptic, and our family’s learned the hard way what “risk” feels like when it’s real, immediate, and personal: diagnoses that took years to untangle, therapies that helped (and some that didn’t), too many ER visits, and two craniotomies that changed the shape of our lives overnight. That’s why I’m allergic to performative safety. Because when the stakes are high…
![[HERO] 7 Mistakes You're Making with AI Risk Management (and How to Fix Them)](https://cdn.marblism.com/GmIh82Rre2o.webp)
7 Mistakes You're Making with AI Risk Management (and How to Fix Them)
Let me be honest: watching organizations deploy AI systems without proper risk management is like watching someone hand car keys to a teenager without teaching them to drive. It is terrifying, preventable, and almost guaranteed to end badly. As someone who has spent decades working at the intersection of neuroscience, technology, and ethical AI deployment, I have seen brilliant organizations make the same seven mistakes repeatedly. The good news? Every single one is fixable.
![[HERO] Technical Test & Audit: Why ‘AI Governance’ Doesn’t Work Without Human Scrutiny](https://cdn.marblism.com/nR205nUk0q_.webp)
Technical Test & Audit: Why ‘AI Governance’ Doesn’t Work Without Human Scrutiny
You've built your AI governance framework. You've checked the boxes: risk assessments, bias audits, compliance documentation. Everything looks perfect on paper. But here's the uncomfortable truth: your AI governance isn't actually governing anything if humans aren't actively scrutinizing the system. AI governance without human oversight is like autopilot without a pilot: technically functional until the moment it isn't. And by then, the damage is done. Let me explain why…


The Autism Advantage in AI Ethics: Why Neurodivergent Minds Are Essential for Responsible Technology
As AI systems increasingly impact healthcare, hiring, and public safety, hidden risks and biases continue to arise. This article reveals why autistic and neurodivergent professionals bring unmatched skills in pattern recognition, data validation, and ethical oversight that are essential for responsible AI.