From Compliance to Care: A Simple Ethical AI Checklist for Leaders Who Hate Checklists
- David Ruttenberg
- Feb 11
I'll admit it: I hate checklists.
They feel reductive. Bureaucratic. Like someone's trying to turn complex human decisions into a paint-by-numbers exercise. But here's the paradox: when it comes to ethical AI, most leaders need something concrete. Not because they're lazy, but because "be ethical" is about as actionable as "be innovative."
So let's try this: a checklist that doesn't feel like compliance theater. A framework that moves from checking boxes to caring about outcomes. Because the truth is, ethical AI isn't about avoiding lawsuits. It's about building systems that respect human dignity, agency, and neurodiversity.
Why Most AI Ethics Frameworks Miss the Mark
Most AI governance documents read like legal disclaimers written by committees. They're exhaustive, exhausting, and, ironically, easy to ignore. Leaders skim them, sign off, and move on. The problem isn't lack of good intentions; it's that traditional compliance frameworks treat ethics as a constraint rather than a strategic advantage (European Commission, 2021).
But organizations that embed ethical thinking into decision-making, treating it not as an afterthought but as a competitive differentiator, earn greater customer trust and long-term loyalty (Fjeld et al., 2020). They also attract and retain talent who care about mission alignment. That's not soft stuff. That's business sense.
The Anti-Checklist Checklist
Here's what actually matters. Six questions. No jargon. No lawyers required (though you should still consult them).
1. Do You Know What Your Data Actually Says?
Not "do you have data governance policies." Do you know where your training data comes from? Have you audited it for accuracy, diversity, and hidden bias? If your AI learns from historically skewed datasets, it will replicate: and amplify: those patterns (Gebru et al., 2021). This isn't hypothetical. It's math.
2. Can a Human Override the AI When It Matters?
High-stakes decisions, such as hiring, medical triage, and loan approvals, demand human oversight. Not as a formality, but as a built-in safeguard. Define when humans must review outputs and ensure they have the authority (and training) to intervene (Jobin et al., 2019). Treat human judgment as essential, not optional.
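In code, that policy can be as plain as a routing function. This is a sketch under stated assumptions: the high-stakes domains and the confidence floor are illustrative placeholders you'd set with your own legal and clinical experts.

```python
from dataclasses import dataclass

# Illustrative policy values, not a standard.
HIGH_STAKES = {"hiring", "medical_triage", "loan_approval"}
CONFIDENCE_FLOOR = 0.90

@dataclass
class Decision:
    domain: str
    prediction: str
    confidence: float

def route(decision: Decision) -> str:
    # High-stakes domains always get human review, regardless of confidence.
    if decision.domain in HIGH_STAKES:
        return "human_review"
    # Elsewhere, low confidence still escalates rather than auto-executes.
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto_approve"

print(route(Decision("loan_approval", "deny", 0.99)))  # -> human_review
```

Notice that a 99% confident denial still goes to a person. That's the design choice: confidence never buys the system out of oversight in domains where mistakes change lives.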
3. Who's Accountable When Things Go Wrong?
If your AI makes a mistake, who owns it? This isn't a technical question; it's a leadership one. Assign clear responsibility. Form cross-functional ethics committees that include leadership, legal, and technical roles (Mittelstadt, 2019). Not to check boxes, but to embed ethical thinking into every product decision.

4. Are You Testing for Fairness Before and During Deployment?
Bias isn't a one-time problem you solve at launch. It evolves as systems learn and as societal norms shift. Build feedback mechanisms so users can flag fairness issues in real time (European Commission, 2021). Make ongoing audits part of your operational rhythm, not an annual performance review.
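A feedback mechanism doesn't have to be elaborate to be real. Here's a minimal sketch, where the fields, file path, and append-only JSONL format are my illustrative choices: a user fairness flag that's logged with enough context to feed the recurring audit.

```python
import json
import time

def flag_fairness_issue(user_id: str, decision_id: str, description: str) -> None:
    """Record a user-reported fairness concern for later audit."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "decision_id": decision_id,
        "description": description,
    }
    # Append-only log: these flags drive the ongoing audit rhythm,
    # not an annual review.
    with open("fairness_flags.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

flag_fairness_issue("u123", "d456", "Loan denied; a similar profile was approved.")
```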
5. Does Your AI Reflect Your Brand Values, or Undermine Them?
Ethical alignment isn't abstract philosophy. It's brand integrity. If your company claims to value transparency but deploys opaque algorithms, customers will notice. If you champion diversity but build systems that exclude neurodivergent users, you've failed (Ruttenberg, 2025). Your AI outputs should reinforce trust, not erode it.
6. Are You Treating User Data as a Responsibility, Not a Commodity?
Privacy isn't just compliance with GDPR or CCPA. It's treating people's information, especially sensitive data like health metrics or learning patterns, as a sacred trust (Mittelstadt, 2019). If you wouldn't want your own family's data handled the way you're handling user data, rethink your approach.
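One way that care shows up in code is data minimization: strip what you don't need and pseudonymize what you keep. A minimal sketch with illustrative field names follows; a production system would also need salting, key management, and retention policies.

```python
import hashlib

# Illustrative list of fields we refuse to store.
SENSITIVE_FIELDS = {"name", "email", "health_metrics"}

def minimize(record: dict) -> dict:
    """Drop sensitive fields and replace the identifier with a pseudonym."""
    cleaned = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    # One-way pseudonym so records can be linked without storing the email.
    # (Unsalted hashing is shown for brevity; don't ship it this way.)
    cleaned["user_ref"] = hashlib.sha256(record["email"].encode()).hexdigest()[:16]
    return cleaned

print(minimize({"email": "a@b.com", "name": "A", "health_metrics": [72], "score": 0.8}))
```

The test is the same one from the paragraph above: would you be comfortable if the record in that example belonged to your own family?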
The Neuroscience Connection
Here's where this gets personal. My work in neurodiversity and assistive technology has taught me that ethical AI isn't just about avoiding harm; it's about designing for difference. Systems built for the "average user" often exclude people with sensory sensitivities, cognitive differences, or non-linear processing styles (Ruttenberg, 2025).
When we design AI that accommodates neurodivergent users (clearer interfaces, adjustable notification settings, fewer sensory triggers), we build better systems for everyone. That's the insight neuroscience brings to AI ethics: human variability isn't a bug. It's the feature we should be optimizing for.
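What does designing for difference look like structurally? One hypothetical starting point: make sensory preferences a first-class, per-user input rather than an "average user" default. The fields and values below are illustrative, not a specification.

```python
from dataclasses import dataclass

@dataclass
class SensoryPreferences:
    reduce_motion: bool = False         # fewer animations and transitions
    mute_autoplay: bool = True          # no unexpected sound
    notification_batching: bool = True  # digest instead of constant interrupts
    high_contrast: bool = False
    plain_language_mode: bool = False   # clearer, literal interface copy

def notification_delay_seconds(prefs: SensoryPreferences) -> int:
    # Batch notifications into a half-hour digest for users who opt in.
    return 1800 if prefs.notification_batching else 0

prefs = SensoryPreferences(reduce_motion=True)
print(notification_delay_seconds(prefs))  # -> 1800
```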

From Theory to Practice
This checklist isn't meant to replace your legal team or your compliance officers. It's meant to start conversations before you're in damage control mode. Use it in product planning meetings. Reference it when evaluating third-party AI tools. Revisit it quarterly, not annually.
Because ethical AI isn't a destination. It's a practice. It's choosing care over convenience, accountability over automation, and human dignity over efficiency, every single time you make a design decision.
And if that sounds idealistic, good. We need more idealism in tech, not less. We need leaders who recognize that the most innovative thing you can do isn't building faster AI; it's building better AI.
So here's my question for you: Which of these six questions makes you most uncomfortable? That's probably the one you need to address first.
And if you're ready to go deeper, if you want frameworks, case studies, and real-world implementation strategies, join me over on Substack. That's where I share the detailed breakdowns, the research updates, and the lessons learned from building ethical tech in the messy real world.
Let's move from compliance to care. Together.
About the Author
Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.
References
European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, (2020-1).
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.
Ruttenberg, D. (2025). Mitigating autistic adults' sensory sensitivity: A multi-sensory assistive wearable technology approach [Doctoral dissertation, University College London]. UCL Discovery. https://discovery.ucl.ac.uk/id/eprint/10210135/