I Patented a Wearable for Sensory Overload. Here’s What It Taught Me About Ethical AI.
By Dr David Ruttenberg | April 2026 | ~1,050 words · approx. 4.5-minute read

A few years ago, I patented a wearable device designed to help neurodivergent people manage sensory overload. It sits quietly on the body, listening to the environment and to you — tracking noise, light, motion, and subtle physiological signals — and then nudging the world to change, not the person (Ruttenberg et al., 2022).
It became the foundation for what we built at Phoeb‑X.
On paper, it’s an “AI‑powered, multisensory assistive wearable” with environmental sensors, machine‑learning models, and a cloud pipeline (Ruttenberg et al., 2022). In practice, it’s one simple promise:
When the world becomes too loud, too bright, or too much — you shouldn’t have to push yourself past your limits to stay in the room.
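For readers who think in code, here is a minimal sketch of that sensing‑to‑recommendation loop. Every name, field, and threshold below is an illustrative assumption of mine, not the patented design; the point is the shape: sense the environment, then recommend changes to the room.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentSample:
    """One snapshot of the environment around the wearer (fields are illustrative)."""
    noise_db: float      # ambient sound pressure level
    lux: float           # ambient light intensity
    motion_index: float  # visual/crowd motion, normalized 0..1

# Hypothetical comfort thresholds; a real device would personalize these per wearer.
NOISE_LIMIT_DB = 70.0
LUX_LIMIT = 800.0

def recommend(sample: EnvironmentSample) -> list[str]:
    """Turn environmental readings into changes to the room, not the person."""
    actions = []
    if sample.noise_db > NOISE_LIMIT_DB:
        actions.append("suggest a quieter space or lower the ambient volume")
    if sample.lux > LUX_LIMIT:
        actions.append("dim or diffuse the lighting")
    return actions
```

Notice that nothing in the output addresses the wearer's behavior.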
Building that device forced me to answer a harder question than any patent examiner asked:
What does ethical AI actually look like when it’s strapped to a human nervous system?
From S²MHD to Hardware: Designing for the Real Crisis
My S²MHD model — the Sensory‑to‑Mental Health Deterioration pathway — shows that sensory overload doesn’t directly “break” attention. It first spikes anxiety and fatigue, and those are what collapse focus and performance (Ruttenberg, 2026).
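If it helps to see the pathway rather than read it, here is a toy numerical sketch. The coefficients are invented purely for illustration (the fitted model is far richer), but the structure is the point: sensory load degrades attention only through the mediators.

```python
def s2mhd_toy(sensory_load: float) -> dict[str, float]:
    """Toy version of the S²MHD pathway with made-up weights (illustration only)."""
    anxiety = 0.8 * sensory_load   # overload first spikes anxiety...
    fatigue = 0.6 * sensory_load   # ...and fatigue
    # Attention collapses via the mediators, never directly from sensory load.
    attention = max(0.0, 1.0 - 0.5 * anxiety - 0.4 * fatigue)
    return {"anxiety": anxiety, "fatigue": fatigue, "attention": attention}
```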
Most autism “wearables” on the market are not built around that pathway. They’re built around what I call behavioral snapshots: heart rate spikes, movement patterns, eye contact metrics, compliance scores. They watch the person and send data to the adults.
When we started designing at Phoeb‑X, we flipped the core question:
Instead of “How do we track the child?” we asked:
How do we track the environment that’s attacking the child?
That single shift changed everything about the architecture.
- The primary “patient” became the room, not the wearer.
- The primary output became environmental recommendations, not behavior corrections.
- The success metric became fewer overload events and anxiety spikes, not “improved eye contact” or “quiet hands” (see the sketch after this list).
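That last point is easy to state in code. Assuming a hypothetical log of overload events, the success metric counts the environment's failures, and nothing else:

```python
from datetime import datetime, timedelta

def overload_events_this_week(events: list[datetime], now: datetime) -> int:
    """Count environment-triggered overload events in the past seven days.
    Deliberately absent: eye contact, stimming, or any behavior score."""
    week_ago = now - timedelta(days=7)
    return sum(1 for t in events if t >= week_ago)
```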
This is where my DAD Framework came in.
The DAD Framework: Data, Autonomy, Dignity
In my research and product work, I frame ethical AI wearables around three pillars I call the DAD Framework: Data, Autonomy, Dignity (Ruttenberg, 2024). They sound abstract. They’re not.
Data: Who It Serves, Not Just What It Captures
The wearable can track sound levels, light changes, movement, and heart‑rate variability. The critical question is not what the device can see, but who the data is actually for.
In many autism tools, the answer is clinicians, researchers, or administrators. The person wearing the device becomes a data source.
In our designs, the rule was simple:
The primary beneficiary of the data is always the wearer.
Aggregated insights can support caregivers and clinicians, but those groups never get raw feeds that can be used to discipline or monitor.
That means no real‑time dashboards for employers. No hidden “compliance scores.” No behavior graphs you can weaponize in an IEP meeting.
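Here is a minimal sketch of that rule, under names I am assuming for illustration. The property that matters is structural: there is simply no code path that hands an institution a raw feed.

```python
import statistics

def export_for(role: str, raw_readings: list[float]):
    """Wearer gets raw data; every other role gets coarse aggregates only."""
    if role == "wearer":
        return raw_readings
    # Caregivers and clinicians see pattern-level summaries: nothing replayable,
    # nothing that can be turned into a compliance score.
    return {
        "n_samples": len(raw_readings),
        "mean_level": statistics.mean(raw_readings) if raw_readings else None,
    }
```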
Autonomy: The Right to Say “No,” “Not Now,” or “Not Like That”
Ethical AI in a wearable has to stop when you tell it to stop.
We baked this into the interaction design:
- The wearer can mute alerts, snooze them, or switch the device into “information only” mode.
- They can choose which signals matter today — noise, light, movement, or internal stress markers.
- They can decide who else sees pattern‑level summaries and what stays private (a minimal sketch of these controls follows this list).
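A minimal sketch of those controls, with field names I am inventing for illustration; the invariant is that alerts fire only for signals the wearer has opted into, and never while muted:

```python
from dataclasses import dataclass, field

@dataclass
class WearerPreferences:
    """Autonomy settings, writable only by the wearer (names are illustrative)."""
    alert_mode: str = "active"  # "active" | "snoozed" | "info_only" | "muted"
    enabled_signals: set[str] = field(default_factory=lambda: {"noise", "light"})
    share_summaries_with: set[str] = field(default_factory=set)  # empty = private

def should_alert(prefs: WearerPreferences, signal: str) -> bool:
    """An alert fires only for an opted-in signal while alerts are active."""
    return prefs.alert_mode == "active" and signal in prefs.enabled_signals
```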
For autistic adults we worked with, this wasn’t a nice‑to‑have. It was the line between “accommodation” and “leash” (Crompton et al., 2023).
If a wearable can sound alarms about your state but you can’t control who hears them or what they mean, it’s not assistive tech.
It’s bio‑surveillance.
Dignity: Never Treat the Person as the Problem
The dignity test is blunt:
Does the system ever imply that the wearer would be better if they were more like everyone else?
If the answer is yes, we change the design.
In practical terms, that meant:
- No goals framed as “reducing stimming,” “increasing eye contact,” or “normalizing behavior.”
- Goals framed instead as reducing distress, protecting energy, and increasing self‑chosen participation.
- No interface language that pathologizes difference — no “abnormal pattern” flags for being autistic (one mechanical version of this rule is sketched below).
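One mechanical version of that rule: validate goal text at the point where a plan is created. The banned phrasings below mirror the design rule and are illustrative, not a shipped list.

```python
# Hypothetical guard applied when anyone proposes a goal for the wearer's plan.
BANNED_FRAMINGS = ("reduce stimming", "increase eye contact", "normalize behavior")

def validate_goal(goal_text: str) -> str:
    """Reject goals that treat the person, rather than the environment, as the problem."""
    lowered = goal_text.lower()
    for phrase in BANNED_FRAMINGS:
        if phrase in lowered:
            raise ValueError(
                f"Goal framing '{phrase}' targets the person, not the environment."
            )
    return goal_text
```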
We weren’t trying to engineer someone closer to a neurotypical template.
We were trying to protect their right to stay who they are in environments that weren’t built for them.
What This Ethical AI Wearable Taught Me
When you spend years turning research into something people actually wear, your definitions get sharper.
Here’s what I now believe ethical AI wearables must do — especially in neurodivergent contexts:
- Monitor environments before people, and treat the environment as the first thing to change.
- Make the wearer more powerful, not more legible to institutions.
- Be willing to collect less data if that’s what dignity, autonomy, and safety require.
- Build in real veto power, where neurodivergent users (and their advocates) can say, “This data stream or alert should not exist” (see the sketch after this list).
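Veto, as I mean it here, lives at the collection layer, not the display layer. This sketch (all names are mine, not product code) shows the difference: a vetoed stream is never sampled, so data the wearer said should not exist never does.

```python
class SensorHub:
    """Illustrative sketch: veto removes a stream at collection, not from a dashboard."""

    def __init__(self, streams: set[str]):
        self.streams = set(streams)

    def veto(self, stream: str) -> None:
        """Wearer (or advocate) kills a stream; no data for it exists afterward."""
        self.streams.discard(stream)

    def sample(self, read_fn) -> dict:
        # Only surviving streams are ever read; vetoed ones have no code path.
        return {name: read_fn(name) for name in self.streams}
```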
At Phoeb‑X, our most powerful conversations with partners weren’t about model accuracy or battery life. They were about letting go of the fantasy that “smart” tech should fix people.
Once organizations accepted that our ethical AI wearable existed to transform environments, not children, the business model changed.
So did the ethical stakes.
The Question I Ask Every Time I See “AI Wearable” in a Pitch Deck
The next time someone pitches you an “AI‑powered wearable for autism, ADHD, or mental health,” try this filter:
If this device became perfectly accurate and widely adopted, would neurodivergent people feel more free — or more watched?
If the honest answer is “more watched,” then the problem isn’t the accuracy.
It’s the ethics.
And you don’t fix that with a better sensor.
You fix it with a different question.
References
Bastani, H., Bastani, O., & Sinha, A. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences, 122(26), e2422633122.
Crompton, C. J., Michael, C., Dawson, M., Fletcher-Watson, S., & Crane, L. (2023). Participatory methods to engage autistic people in the design of research. Autism, 27(4), 1030–1042.
Ruttenberg, D. (2024, June 30). 3 essential wearable designs for neurodivergent people: How AI alerts, sensory filters, and gentle guidance are transforming employment, education and social scenarios. DavidRuttenberg.com.
Ruttenberg, D. (2026). S²MHD and the DAD Framework: A participatory model for ethical AI accommodations in autism. [Manuscript in preparation].
Ruttenberg, D., et al. (2022). Multisensory, assistive wearable technology and method for providing sensory relief therewith (EP4396842A4). European Patent Office.
About the Author
Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background ranging from music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.

#EthicalAIWearable #PhoebX #Neurodiversity #Neurodivergent #Autism #ADHD #SensoryOverload #WearableTech #AIinHealthcare #InclusiveTech #S2MHD #CognitiveLiberty #HumanCenteredAI #NothingAboutUsWithoutUs