The 5 Stages of AI Ethics Evolution: The Human Impact Ladder


Most organizations believe they’re doing AI ethics. They’ve checked the boxes, signed the policies, updated the handbook. But compliance isn’t the same as flourishing—and the gap between them is where safety, trust, and long-term performance live.

If you’re a CXO, a school leader, or a services director, you’ve probably asked: Are we doing enough? The answer depends on where you sit on The Human Impact Ladder: a five-stage journey from “meets the minimum” to “helps people thrive”. Not jargon. Not vibes. Not virtue-signaling. A practical way to see what’s working, what’s missing, and what to do next (Jobin et al., 2019; Morley et al., 2020).

Let’s walk through it.

Stage 1: Compliance (Legal Minimums)

This is where most organizations begin—and many stay. Stage 1 is about avoiding lawsuits. It’s the “check-the-box” phase: GDPR consent forms, non-discrimination clauses, vendor contracts that promise ethical AI without defining what that means (Jobin et al., 2019).

What it looks like in practice: A school district adopts an AI-powered student monitoring tool. Legal reviews the privacy policy. IT confirms data encryption. Leadership signs off. The tool launches—and three months later, neurodivergent students are flagged disproportionately for “disengagement” because the algorithm interprets sensory regulation strategies (stimming, gaze aversion) as behavioral risk.
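
Catching that failure before launch doesn't require exotic tooling. Here is a minimal sketch of the kind of disparity check a Stage 1 review never runs; the column names and the toy flag log are illustrative, not the district's actual data:

```python
# Minimal disparity check on monitoring-tool flags, grouped by disclosed
# neurotype. Column names ("student_group", "flagged") are hypothetical.
import pandas as pd

def flag_rate_disparity(log: pd.DataFrame) -> pd.Series:
    """Return each group's flag rate relative to the overall flag rate."""
    overall = log["flagged"].mean()
    by_group = log.groupby("student_group")["flagged"].mean()
    return by_group / overall  # values well above 1.0 warrant investigation

# Toy example: "disengagement" flags skew toward one group.
log = pd.DataFrame({
    "student_group": ["neurodivergent"] * 40 + ["neurotypical"] * 160,
    "flagged": [1] * 18 + [0] * 22 + [1] * 20 + [0] * 140,
})
print(flag_rate_disparity(log))
# neurodivergent ~2.4x the overall rate; neurotypical ~0.66x
```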

Stage 1 asks: Is it legal? But it doesn’t ask: Is it safe? Is it fair? Does it work for real humans—including neurodivergent humans?

Stage 2: Alignment (Basic Values)

Stage 2 organizations recognize that legal compliance isn’t enough. They begin aligning AI deployment with stated organizational values: transparency, fairness, accountability (Morley et al., 2020). This stage introduces ethics committees, value statements, and “responsible AI” training.

What changes: The same school district now requires all AI tools to undergo an ethics review. The committee asks: Does this tool reflect our commitment to equity? They look for demographic bias in training data and request explainability documentation from vendors.

The limitation: Values are aspirational. Without clear, testable checks that reflect how brains actually work—especially neurodivergent brains—Stage 2 organizations can still deploy tools that harm the very people they intend to serve. Alignment without Human Factors Safety is like building a ramp without knowing wheelchair dimensions.

[Image: Organizational leaders reviewing AI ethics frameworks and data visualizations during a strategic planning session]

Stage 3: Human Factors Safety (Neuro-Inclusion & Sensory Awareness)

This is where the paradigm shifts. Stage 3 integrates Human Factors Safety—the science of how humans interact with systems, grounded in bio-cognitive response research (Parasuraman & Riley, 1997). Organizations at this stage understand that ethical AI must account for sensory processing, executive function variability, attentional differences, and developmental stages.

What it looks like in practice: The district's ethics committee now includes an occupational therapist and a neuroscientist. They audit the student monitoring tool for sensory triggers: Does it send push notifications during class transitions (high-stress moments for autistic students)? Does it use red alert colors that escalate anxiety? Does it assume neurotypical gaze patterns equal engagement?

They redesign the interface. Alerts are customizable. Visual contrast is adjustable. The algorithm is retrained to recognize neurodivergent self-regulation behaviors as adaptive rather than as signs of disengagement (Ruttenberg, 2025).
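
In engineering terms, the redesign makes sensory-relevant behavior configurable instead of hard-wired. A minimal sketch of what per-student settings might look like follows; the field names and defaults are assumptions for illustration:

```python
# Hypothetical per-student interface settings: the point is that sensory-
# relevant behavior (timing, color, urgency) is configurable, not fixed.
from dataclasses import dataclass

@dataclass
class AlertSettings:
    suppress_during_transitions: bool = True   # no pushes at class changeover
    max_alerts_per_hour: int = 2               # cap notification frequency
    palette: str = "low_contrast"              # avoid red, high-arousal colors
    haptics_enabled: bool = False              # off unless the student opts in

def should_send(settings: AlertSettings, in_transition: bool, sent_this_hour: int) -> bool:
    """Gate every alert through the student's own sensory settings."""
    if settings.suppress_during_transitions and in_transition:
        return False
    return sent_this_hour < settings.max_alerts_per_hour

# A hyper-sensitive student keeps the defaults; others can raise the limits.
print(should_send(AlertSettings(), in_transition=True, sent_this_hour=0))  # False
```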

The shift: Stage 3 organizations stop asking, Is this tool fair in theory? and start asking, Is this tool safe for the actual humans using it?

Stage 4: DAD Integration (Developmentally Aligned Design)

Stage 4 embeds Developmentally Aligned Design (DAD): a framework that tailors AI systems to the cognitive, emotional, and sensory capacities of users at specific life stages and neurotypes. 

What changes: The district doesn’t just audit existing tools—they co-design new ones. They partner with autistic students, parents, and clinicians to prototype an AI study coach that adapts to sensory profiles. For hyper-sensitive students, it reduces notification frequency and uses soft color palettes. For under-sensitive students (sensory seekers), it builds in stronger cues, clearer pacing, movement breaks, and tactile feedback prompts.
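
To make the adaptation concrete, here is a minimal sketch of how a sensory profile might drive the coach's behavior; the profile fields, categories, and plan values are assumptions, not the district's actual design:

```python
# Hypothetical sensory profile driving the study coach's pacing and prompts.
from dataclasses import dataclass

@dataclass
class SensoryProfile:
    sensitivity: str  # "hyper", "typical", or "under" (sensory-seeking)

@dataclass
class CoachPlan:
    notifications_per_session: int
    palette: str
    movement_breaks: bool
    tactile_prompts: bool

def plan_for(profile: SensoryProfile) -> CoachPlan:
    """Map a sensory profile to coach behavior instead of one-size-fits-all."""
    if profile.sensitivity == "hyper":
        return CoachPlan(1, palette="soft", movement_breaks=False, tactile_prompts=False)
    if profile.sensitivity == "under":
        return CoachPlan(4, palette="high_contrast", movement_breaks=True, tactile_prompts=True)
    return CoachPlan(2, palette="default", movement_breaks=False, tactile_prompts=False)

print(plan_for(SensoryProfile(sensitivity="under")))
```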

The tool doesn't surveil; it supports. It doesn't categorize students as "at risk"; it identifies when a student might benefit from a check-in. It doesn't replace human judgment; it augments educator capacity to respond to individual needs.

The competitive advantage: Stage 4 organizations aren't just mitigating harm. They're building systems that enhance human potential. Their tools attract partnerships, grant funding, and trust from communities who've been burned by extractive tech.

[Image: Split view showing a student stressed by an aggressive AI interface versus calm with a neurodivergent-friendly design]

Stage 5: Sovereign Flourishing (Human-AI Synergy)

Stage 5 is rare, and it is transformative. These organizations have achieved human-AI synergy, where technology amplifies human agency rather than constraining it. Users maintain sovereignty over their data, their choices, and their cognitive bandwidth. AI becomes a collaborator, not an overseer.

What it looks like in practice: The district's AI study coach evolves into a personalized learning ecosystem. Students control their data-sharing preferences. Neurodivergent students can flag algorithm errors ("This suggestion doesn't work for my brain") and the system learns. Teachers receive AI-generated insights, but final decisions stay human.
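
A minimal sketch of those two mechanics, student-controlled sharing and a feedback flag that actually changes behavior, might look like this; the names and weighting scheme are illustrative assumptions:

```python
# Hypothetical student-controlled sharing preferences and a feedback flag
# ("this suggestion doesn't work for my brain") that downweights a suggestion.
from dataclasses import dataclass, field

@dataclass
class StudentAccount:
    share_with_teacher: bool = False       # off until the student opts in
    share_with_researchers: bool = False
    suggestion_weights: dict = field(default_factory=dict)

    def flag_unhelpful(self, suggestion_id: str) -> None:
        """Student feedback lowers a suggestion's weight for this student only."""
        w = self.suggestion_weights.get(suggestion_id, 1.0)
        self.suggestion_weights[suggestion_id] = max(0.0, w - 0.5)

account = StudentAccount()
account.flag_unhelpful("pomodoro_25_5")    # "doesn't work for my brain"
print(account.suggestion_weights)          # {'pomodoro_25_5': 0.5}
```

The design choice that matters is that the adjustment is scoped to the individual student and driven by their own signal, not imposed by a central model of what engagement should look like.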

The tool tracks outcomes: Are neurodivergent students reporting less burnout? Are they meeting self-defined goals? Is executive function improving without overreliance on scaffolding?

The defining characteristic: Stage 5 organizations measure success not by efficiency gains, but by human flourishing. They ask: Are people thriving? Do they feel more capable? Is their autonomy expanding?

This is AI ethics as a growth strategy, not a compliance burden.

Why Most Organizations Stall at Stage 1 or 2

Three reasons:

  1. Lack of neuro-informed expertise. Traditional ethics committees rarely include neuroscientists, occupational therapists, or lived-experience advocates (Fjeld et al., 2020).

  2. Vendor-driven deployment. Organizations adopt tools selected by procurement, not co-designed with end users.

  3. Reactive rather than proactive culture. Ethics reviews happen after harm, not before deployment.

Moving from Stage 1 to Stage 5 requires more than good intentions. It takes real expertise in neurodiversity, sensory science, and Human Factors Safety, including the bio-cognitive response dimension, because if a system “works on paper” but fails real people under real pressure, it doesn’t really work.

What Leaders Should Do Next

If you're responsible for AI strategy, whether you're a CXO, an educator, a healthcare administrator, or an agency lead, ask yourself:

  • Where does our organization sit on this map?

  • What expertise are we missing at the table?

  • Are we measuring compliance, or are we measuring human outcomes?

Then take three actions:

  1. Audit your current tools through a Human Factors Safety lens. Identify sensory triggers, cognitive load demands, and power asymmetries.

  2. Build or partner with neuro-informed consultants. You wouldn't design accessible architecture without structural engineers. Don't design neuro-inclusive AI without neuroscientists.

  3. Shift your success metrics. Replace "user engagement" with "user autonomy." Replace "efficiency" with "flourishing."
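
If it helps to see what that metric swap looks like in code, here is a minimal sketch; the survey fields and the summary are assumptions, not a validated instrument:

```python
# Hypothetical flourishing-oriented metrics replacing raw engagement counts.
from statistics import mean

def flourishing_report(survey: list[dict]) -> dict:
    """Summarize self-reported autonomy and burnout instead of time-on-tool."""
    return {
        "self_defined_goals_met": mean(s["goals_met"] for s in survey),
        "autonomy_score_1_to_5": mean(s["autonomy"] for s in survey),
        "reporting_burnout_pct": 100 * mean(1 if s["burnout"] else 0 for s in survey),
    }

survey = [
    {"goals_met": 0.8, "autonomy": 4, "burnout": False},
    {"goals_met": 0.5, "autonomy": 3, "burnout": True},
]
print(flourishing_report(survey))
```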

[Image: A neuro-informed team of neuroscientist, educator, and therapist collaborating on human-centered AI development]

The Bottom Line

AI ethics isn’t static. It’s not a policy you write once and file away. It’s a journey up The Human Impact Ladder—and where you land on that ladder determines whether your AI systems create harm, maintain the status quo, or unlock human potential. 

Stage 1 keeps you out of court. Stage 5 makes you a leader in your field.

The question isn’t whether to evolve. It’s whether you’ll do it before your competitors—or before your community demands it.

Call to action: If you want help benchmarking where you are on The Human Impact Ladder (and what it would take to climb one rung safely), reach out via https://www.davidruttenberg.com.

About the Author

Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.

References

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication (2020-1). https://doi.org/10.2139/ssrn.3518482

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886

Ruttenberg, D. (2025). Mitigating sensory sensitivity in autistic adults through multi-sensory assistive wearable technology [Doctoral thesis, University College London]. UCL Discovery. https://discovery.ucl.ac.uk/id/eprint/10210135/
