Designing for Dignity: Why Human-Centered AI is the Only Ethical Choice
- David Ruttenberg
I’m a doctor and a scientist, but I’m a parent first.
Here’s the question that keeps me up at night: Are we building AI systems that serve people, or are we reshaping people to serve AI systems?
For me, that isn’t an abstract debate. It’s personal. Our daughter Phoebe is 23. She’s autistic, has ADHD, and lives with epilepsy. I’ve watched her endure every therapy imaginable, countless ER visits for seizures, and two craniotomies. When your family has lived inside hospital corridors and waiting rooms, words like “efficiency” and “optimization” start to sound a little different.
That’s why I keep coming back to dignity. Not as a slogan, not as a box to tick, not as a feature you bolt on later, but as the point.
In AI, dignity means creating a world where Phoebe and others like her can navigate life without being forced into a box that wasn’t built for them.
What Is Human-Centered AI, Really?
Let’s cut through the buzzwords. Human-centered AI is an approach where technology amplifies human capabilities rather than replacing human judgment (Shneiderman, 2022). It’s AI that treats people as collaborators, not data points, not checkboxes, not “edge cases.” It’s built with people, tested with people, improved for people.
The core idea is simple: people should understand, control, and influence the systems that affect their lives.
That matters even more when the “system” isn’t a shopping recommendation, but a gate to care, education, benefits, work. Dignity starts when a person can ask, “Why did this happen?” and get a real answer (Floridi et al., 2018). When they can correct what is wrong. When they can opt out without being punished for it. That’s not anti-innovation; that’s pro-human.
Here’s the antimetabole I come back to when I’m tired and tempted to accept the black box: we shouldn’t make people fit the system; we should make the system fit people.
And the polyptoton is the warning label: when we optimize for metrics, we end up optimizing people. Phoebe isn’t something to be optimized. She’s a person who deserves tools that adapt to her, not the other way around.

The Dignity Problem With Current AI
Here’s where things get uncomfortable.
Much of the AI being deployed today wasn’t designed with dignity in mind. It was designed for speed. For scale. For efficiency. And efficiency, while valuable, can become a weapon when it steamrolls over the humans it’s supposed to serve.
Consider hiring algorithms that screen out qualified candidates based on patterns they can’t explain. Consider healthcare systems that prioritize patients based on data that reflects historical inequities rather than current needs. Consider educational tools that label children before giving them a chance to grow.
In my own research on sensory sensitivity and assistive technology, I’ve seen firsthand how technology designed without user input can fail the very people it claims to help (Ruttenberg, 2025). If a tool ignores sensory needs, it doesn’t just “miss the mark” – it creates a new barrier. And for families like mine, new barriers aren’t theoretical. They mean more calls, more forms, more “prove it again,” more exhaustion.
When we design for people instead of designing people out, everything changes. The system stops being a gatekeeper and starts being a gateway.
That’s the shift human-centered AI demands: we don’t design humans for AI; we design AI for humans.
The Five Pillars of Dignity-Respecting AI
So what does ethical AI actually look like in practice? Based on current research and emerging AI policy frameworks, five principles stand out:
1. Transparency and Explainability
If people can’t understand an AI’s decision, they can’t trust it, and they shouldn’t have to. Explainability isn’t a nice-to-have. It’s a moral requirement (Jobin et al., 2019). Black boxes belong in airplanes, not in systems that determine people’s futures.
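To make that concrete, here is a minimal sketch of what a per-decision explanation can look like for a simple linear screening score. The feature names, weights, and threshold are invented for illustration; real systems are far more complex, but the obligation is the same: show the person which factors moved the decision.

```python
# Minimal sketch: per-decision explanation for a hypothetical linear screening score.
# All feature names, weights, and the threshold are illustrative assumptions.

FEATURE_WEIGHTS = {
    "years_experience": 0.8,
    "relevant_certifications": 0.5,
    "gap_in_employment": -0.3,
}

def explain_decision(applicant: dict, threshold: float = 1.0) -> None:
    """Print each feature's contribution to the score, so the person
    affected can see why the system decided what it did."""
    contributions = {
        name: weight * applicant.get(name, 0.0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    decision = "advance" if score >= threshold else "do not advance"
    print(f"Decision: {decision} (score {score:.2f}, threshold {threshold})")
    # List contributions from largest to smallest influence.
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {value:+.2f}")

explain_decision({"years_experience": 2, "relevant_certifications": 1,
                  "gap_in_employment": 1})
```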
2. Human Agency and Control
Human-centered AI augments intelligence; it doesn’t replace judgment. Users need the ability to intervene, correct, and override. Technology should serve dignity, not require dignity to serve technology.
3. Participatory Design
The people closest to a problem should help solve it. That means end users (including those with disabilities, neurodivergent individuals, and marginalized communities) must be involved from the start, not consulted as an afterthought (Birhane, 2021).

4. Fairness and Accountability
AI systems must be continuously evaluated for bias and uneven performance. When discrimination happens, it’s a system failure that demands correction, not a statistical anomaly to be explained away.
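As one hedged illustration of what “continuously evaluated” can mean in practice, the sketch below compares a model’s selection rate across groups. The records and the 0.8 (“four-fifths”) threshold are assumptions for illustration; a real audit would look at many more metrics and contexts, but even a simple check like this surfaces uneven outcomes early.

```python
# Sketch of a group-wise audit: compare selection rates across groups.
# The example records and the 0.8 ("four-fifths") threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate -- a screening heuristic, not a verdict."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(records)
print(rates)                 # roughly {'A': 0.67, 'B': 0.25}
print(flag_disparity(rates)) # {'B': 0.25} -- below 80% of group A's rate
```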
5. Accessibility and Inclusivity
Dignity extends to everyone. Designing for the broadest possible range of users isn’t charity; it’s justice. As I noted in my work on invisible disabilities, accommodations that help some often benefit all (Ruttenberg, 2023).
Why AI Policy Must Catch Up
Here’s the uncomfortable truth: principles without policy are just suggestions.
Right now, AI governance is playing catch-up. The technology moves fast. Regulation moves slow. And in that gap, harm happens. People get hurt by systems that were never properly vetted, never adequately tested, never genuinely inclusive.
Good AI policy doesn’t stifle innovation; it channels it. It creates guardrails that protect people while still leaving room for progress. It demands accountability from developers, transparency from deployers, and recourse for users.
The European Union’s AI Act is one model. The NIST AI Risk Management Framework is another (Shneiderman, 2022). But policy alone isn’t enough. We need a cultural shift in how we think about AI development. We need to stop asking “Can we build this?” and start asking “Should we build this, and for whom?”
The Human in Human-Centered
Let me be direct: human-centered AI isn’t a marketing slogan. It’s a commitment.
It’s a commitment to building systems that respect autonomy. That enable understanding. That distribute power rather than concentrating it. It means accepting that efficiency isn’t the only value worth optimizing for, and that some things are more important than speed.
When we humanize our technology, we don’t make it weaker. We make it worthy. We create systems that people can trust because those systems were built to be trustworthy. We design for dignity because dignity is the design.
And that’s not just ethical. That’s essential.
What You Can Do
Whether you're a developer, a policymaker, or simply someone who uses AI-powered tools every day, you have a role to play:
- Ask questions. Demand to know how AI systems make decisions that affect you.
- Advocate for inclusion. Push for diverse voices in AI development, especially from communities most affected by algorithmic harms.
- Support good policy. Back regulations that hold AI systems accountable while enabling responsible innovation.
- Stay informed. The landscape changes fast. Keep learning.
Want to dive deeper into how technology can serve human dignity? Explore more at davidruttenberg.com.
Dr David Ruttenberg, PhD, FRSA, FIoHE, AFHEA, HSRF, is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.
References
Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People: An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Ruttenberg, D. (2023). Safeguarding autistic adults. Local Government Association.
Ruttenberg, D. (2025). Towards technologically enhanced mitigation of autistic adults’ sensory sensitivity experiences and attentional, and mental wellbeing disturbances [Doctoral thesis, University College London]. https://discovery.ucl.ac.uk/id/eprint/10210135/
Shneiderman, B. (2022). Human-centered AI. Oxford University Press.