
Built Without Us: The AI and Autism Ethics Gap Nobody Is Closing

By Dr David Ruttenberg | April 2026 | < 5-minute read

[Hero image: a split editorial composition, with a cold electric-blue AI circuit blueprint on the right and a warm amber silhouette of a person reaching toward it on the left, separated by a translucent digital barrier suggesting exclusion. Bold white text reads "BUILT WITHOUT US".]

Most AI tools designed for autism were built without a single autistic person in the room.


That sentence should stop you. Not because it's provocative — but because the evidence supports it. Across decades of assistive technology research, autistic adults have been studied far more than they have been consulted. Their data has been extracted far more often than their input has been sought. And the tools built in their name have repeatedly been optimised for the comfort of the institutions serving them — not for the autonomy, dignity, or cognitive liberty of the autistic individuals themselves.


This is the autism AI ethics gap. And as AI-powered tools become central to how autistic people navigate education, work, healthcare, and daily life, the stakes of getting this wrong have never been higher.


Who's Actually in the Room?

The problem begins at the design table.

Assistive AI for autism has historically been built around behavioural compliance and neurotypical productivity metrics — reducing stimming, improving eye contact, increasing standardised social responsiveness (Spiel et al., 2019). These outcomes reflect the priorities of clinicians, educators, and funders. They rarely reflect the priorities of autistic adults themselves, who consistently rank autonomy, sensory comfort, and mental health — not social conformity — as their primary concerns.

My doctoral research confirmed this directly. In participatory consultations with autistic adults, the vocabulary they used to describe their own experiences centred on sensory overload, anxiety, fatigue, and the desire for technologies that helped them navigate their environments on their own terms (Ruttenberg, 2025). Not tools that corrected them. Tools that supported them.

That distinction is not semantic. It is the difference between a technology designed to serve neurodivergent lives — and a technology designed to make those lives more legible to neurotypical systems.

What the S²MHD Model Reveals

My Sensory Sensitivity–Mental Health–Distractibility model (S²MHD) was developed through mixed-methods research with autistic adults, and its central finding is one that most AI tools for autism completely ignore.

Sensory overload does not directly impair attention. Instead, overload increases anxiety and fatigue, and it is those mediating mental health states that degrade attentional performance (Ruttenberg, 2025). Path analysis in a sample of N = 175 autistic participants demonstrated both links at high statistical significance:


Sensory input → Anxiety & Fatigue              (β = 0.68, p < .001)
Anxiety & Fatigue → Attentional degradation    (β = 0.52, p < .001)

This matters enormously for AI design. If you build a tool that monitors sensory input but ignores the anxiety and fatigue it produces, you have missed the point — and potentially made things worse.
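
To make the arithmetic concrete, here is a minimal sketch of the mediation structure, assuming a simple standardised linear path model with the coefficients reported above. The variable and function names are my illustration, not from the thesis:

```python
# Minimal sketch of the S²MHD mediation structure, assuming a
# standardised linear path model with the coefficients reported above.
# Names are illustrative, not from the original research.

A_SENSORY_TO_DISTRESS = 0.68    # β: sensory input → anxiety & fatigue
B_DISTRESS_TO_ATTENTION = 0.52  # β: anxiety & fatigue → attentional degradation

def indirect_effect() -> float:
    """Standardised indirect effect of sensory load on attention,
    transmitted entirely through anxiety and fatigue (a × b)."""
    return A_SENSORY_TO_DISTRESS * B_DISTRESS_TO_ATTENTION

def predicted_attentional_degradation(sensory_load_z: float,
                                      distress_z: float | None = None) -> float:
    """If the mediator (distress) is observable, use it; otherwise fall
    back to the indirect path. A tool that only ever sees sensory_load_z
    is blind to the state that actually drives attention."""
    if distress_z is not None:
        return B_DISTRESS_TO_ATTENTION * distress_z
    return indirect_effect() * sensory_load_z

print(f"indirect effect: {indirect_effect():.3f}")  # ≈ 0.354
```

In this model, the product of the two paths (0.68 × 0.52 ≈ 0.35) is the only route by which sensory load reaches attention, which is exactly why a tool that watches the environment but never estimates distress is modelling the wrong variable.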


An AI that detects a loud environment and does nothing about the resulting anxiety is not an accommodation. It is surveillance with extra steps.

Yet this is precisely how most current systems operate: reactive to observable inputs, blind to the internal, mediating states that actually determine an autistic person's capacity to function.
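
To sketch the contrast, compare two toy intervention policies, assuming a wearable that reports ambient loudness and, separately, a user-calibrated estimate of current anxiety and fatigue. The thresholds and signal names here are hypothetical, not drawn from any shipping product:

```python
# Two intervention policies, sketched for contrast. All thresholds and
# signal names are hypothetical; this is not any vendor's actual logic.

def reactive_policy(loudness_db: float) -> str:
    """Keys on the observable input and stops there: it records that the
    environment was loud, and does nothing about the person's state."""
    return "flag_loud_environment" if loudness_db > 75 else "no_action"

def mediation_aware_policy(estimated_distress: float,
                           user_threshold: float) -> str:
    """Keys on the mediating state (anxiety and fatigue), against a
    threshold the user set, and only ever offers help."""
    if estimated_distress > user_threshold:
        return "offer_accommodation"  # wearer can accept, snooze, or decline
    return "no_action"                # below the wearer's own threshold: stay quiet
```

The first policy can only ever tell you the room was loud. The second asks the question the S²MHD model says actually matters.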


What Ethical Design Actually Looks Like

My preprint on ethical wearable accommodations (Ruttenberg, 2026) articulates eight principles that should govern any AI system designed for autistic or neurodivergent users. The first — and the most frequently violated — is user-centred co-design.


This is not consultation. It is not asking autistic people to review a completed product. It means involving neurodivergent individuals at every stage of design, development, and evaluation as co-designers with genuine decision-making authority, including veto power over features that contradict their interests.


The reasons are not just ethical. They are empirical. Pilot data from the S²MHD programme (N = 24) showed that personalised sensory mediations reduced anxiety and fatigue, which in turn improved attentional performance. The improvements were only possible because participants co-designed the mediations themselves — calibrating thresholds, selecting feedback types, and retaining full control over when and how the system intervened. When autistic adults are authors of the technology rather than subjects of it, the technology actually works.
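
Read in engineering terms, that pilot result says the calibration surface belongs to the user. Here is a minimal sketch of what a user-owned mediation profile could look like, with field names that are my illustration rather than the study's:

```python
from dataclasses import dataclass, field

# Hypothetical user-owned configuration: every parameter that shapes an
# intervention is set, and revisable, by the wearer rather than the vendor.
@dataclass
class MediationProfile:
    distress_threshold: float = 0.7    # when help is offered (user-set)
    feedback_modality: str = "haptic"  # "haptic" | "visual" | "audio" (user-chosen)
    allow_interventions: bool = True   # master switch, always under user control
    quiet_contexts: set[str] = field(default_factory=lambda: {"therapy_session"})

    def may_intervene(self, context: str, estimated_distress: float) -> bool:
        """The system asks the profile for permission, never the reverse."""
        return (self.allow_interventions
                and context not in self.quiet_contexts
                and estimated_distress > self.distress_threshold)
```

Nothing here is clever. The point is where the defaults, and the write access to them, live.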


The other seven principles — graduated consent, cognitive liberty safeguards, data minimisation, transparency, non-coercive intervention logic, interoperability, and community governance — each address a specific failure mode in current AI design for autism. Together, they form a framework not for fixing autistic people, but for fixing the environments and technologies they are asked to navigate.


The Autism AI Ethics Problem Nobody Is Naming

Here is where the ethics conversation gets uncomfortable.


Many AI tools marketed as autism supports — behaviour-tracking apps, social skills coaches, attention monitors — are structurally indistinguishable from surveillance systems. They collect continuous physiological, behavioural, and environmental data on autistic individuals, typically without granular consent, and often share that data with parents, educators, employers, or clinical providers.


Neurorights scholarship is clear: cognitive liberty — the right to mental self-determination free from coercive interference — is a foundational human right that AI systems must preserve (Ienca & Andorno, 2017). It is violated any time a system imposes neurotypical behavioural norms, suppresses neurodivergent traits, or reports behavioural data to third parties without explicit, revocable, context-specific user consent.
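
Those three qualifiers translate naturally into data structures. As a sketch of what "explicit, revocable, context-specific" consent might mean in code, assuming a hypothetical schema of my own rather than any existing standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical consent record: one grant per recipient, data type, and
# context, each independently revocable. Absent a grant, nothing is shared.
@dataclass
class ConsentGrant:
    recipient: str         # e.g. "named_clinician", never a blanket "third parties"
    data_type: str         # e.g. "heart_rate", not "all physiological data"
    context: str           # e.g. "scheduled_appointment"
    expires_at: datetime   # consent lapses and must be re-asked, not assumed
    revoked: bool = False  # the user can withdraw the grant at any moment

    def permits(self, recipient: str, data_type: str, context: str) -> bool:
        """Disclosure requires an exact, live, unrevoked match on all three axes."""
        return (not self.revoked
                and datetime.now(timezone.utc) < self.expires_at
                and (recipient, data_type, context)
                    == (self.recipient, self.data_type, self.context))
```

The design choice that matters is the default: without an exact, live, unrevoked grant, nothing leaves the device.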


We are currently building AI systems for autism that routinely do all three of these things, and we are building them faster than the ethical frameworks needed to govern them can be developed or deployed. The neurodiversity paradigm is explicit: autistic differences are not deficits to be optimised away (Pellicano & den Houting, 2022). But many of the AI systems we are deploying in classrooms, therapy clinics, and workplaces are built around exactly that premise.


A Question Worth Asking

There is a version of AI for autism that is genuinely transformative: personalised, co-designed, neurorights-aligned, and built around what autistic people actually say they need. The S²MHD model and the ethical framework in my preprint offer a concrete pathway toward that version.

But that version requires something the current landscape is consistently reluctant to provide: power sharing.


Designing ethical AI for autism means accepting that autistic adults are not merely recipients of the technology. They are its co-authors. It means neurodivergent communities hold authority over acceptable-use policies, model validation standards, and ethical review processes. It means that the community most affected by these systems has a meaningful vote over the systems themselves.


We are a long way from that. But the research makes the path clear — if we are willing to take it.


When was the last time an autistic adult held veto power over the design of a tool being built for them — and when do we start treating that as a minimum standard, rather than a stretch goal?


References

Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), Article 5. https://doi.org/10.1186/s40504-017-0050-1


Pellicano, E., & den Houting, J. (2022). Annual research review: Shifting from 'normal science' to neurodiversity in autism science. Journal of Child Psychology and Psychiatry, 63(4), 381–396. https://doi.org/10.1111/jcpp.13534


Ruttenberg, D. (2025). Towards technologically enhanced mitigation of autistic adults' sensory sensitivity experiences and attentional and mental wellbeing disturbances [Doctoral dissertation, University College London]. UCL Discovery. https://discovery.ucl.ac.uk/id/eprint/10212804


Ruttenberg, D. (2026). Ethical wearable accommodations for neurodivergent adults: A framework integrating the S²MHD model, just-in-time adaptive interventions, and neurorights principles [Preprint]. https://www.davidruttenberg.com


Spiel, K., Frauenberger, C., Keyes, O., & Fitzpatrick, G. (2019). Agency of autistic children in technology research: A critical literature review. ACM Transactions on Computer-Human Interaction, 26(6), Article 38. https://doi.org/10.1145/3344919


About the Author

Dr David Ruttenberg, PhD, FRSA, FIoHE, AFHEA, HSRF, is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production and cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.

© 2018–2026 by Dr David Ruttenberg. All rights reserved.
