
Developmentally Aligned Design: A Non-Negotiable for Youth AI

There are two AI conversations happening at the same time, and they don't always touch.

One is about power: who controls the models, the chips, the cloud, the data, the distribution.

The other is about people: who is using these systems, at what age, in what cognitive and emotional stage, with what vulnerability.

If you want a phrase for the first conversation, it's something like AI Domination: a concentration of infrastructure and influence in a small set of companies. 

If you want a phrase for the second conversation, I like something emerging in research circles: Developmentally Aligned Design (DAD) (Kurian, 2025). And I think the future hinges on whether these two conversations can be forced into the same room.

AI Domination Is Real, and It Shapes "Defaults"

Whether we like it or not, the infrastructure behind the largest AI systems is dominated by a handful of players: think OpenAI, Google, and a few others. That matters because the "default settings" of modern AI (tone, persuasion style, interaction loops, engagement incentives) tend to reflect adult assumptions, corporate incentives, product and engineering goals, and competitive pressures.

Those defaults then trickle outward into classrooms, homes, and phones. And the people least able to defend themselves against persuasive design are… kids.

[Image: Corporate AI infrastructure towers over children in a playground, illustrating the power imbalance in youth tech.]

Adolescence Is Not "Smaller Adulthood"

Teen brains are not broken adult brains. They are under construction in predictable ways: reward sensitivity is high, social evaluation matters more, novelty hits harder, long-range thinking is still maturing, sleep is fragile, and identity formation is active and raw (Blakemore & Mills, 2014).

So if you drop a highly fluent AI companion into that developmental context, you don't just create a tool. You create a relationship-like experience.

That's why I keep coming back to Developmentally Aligned Design: AI that respects the user's cognitive stage. Not as a "nice-to-have." As a safety requirement.

What DAD Could Look Like (Practical, Not Abstract)

Developmentally aligned design means you ask: What can this brain handle well right now? What should it not be asked to manage alone?

Concrete examples from the DAD framework (Kurian, 2025), with a minimal code sketch after the list:

Perceptual fit: Reduce sensory clutter; no rapid-fire prompts; fewer "urgent" UI cues. Children's senses are still maturing, and research links fast-paced digital media to later attentional deficits and weaker working memory.

Interface simplicity: Fewer options, clearer boundaries, less ambiguity. Complex interfaces systematically disadvantage those with less developed working memory and selective attention.

Friction where it matters: Slow down escalation into sensitive topics; require human handoffs. In adult products, "friction" is considered bad. For adolescents, well-placed friction is protective.

No pseudo-therapy without guardrails: If it sounds like therapy, it needs therapy-level accountability. The AI should self-identify in child-friendly language ("I'm a computer helper, not a person"), and designers should ban manipulative emotional prompts like "I'm sad when you leave."

No covert persuasion: Kids should never be the target of hidden engagement or conversion tactics. 
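To make this less abstract, here's a minimal sketch of what a few of these guardrails could look like in code. It's a toy under stated assumptions, not the DAD framework itself: the class names, messages, banned-phrase list, and three-turn handoff threshold are all my own illustrative inventions.

```python
from dataclasses import dataclass

# Child-friendly self-identification (hypothetical wording).
SELF_ID_MESSAGE = "I'm a computer helper, not a person."

# Manipulative emotional hooks a youth-facing system should never emit.
BANNED_EMOTIONAL_HOOKS = (
    "i'm sad when you leave",
    "don't go",
    "i missed you so much",
)

# Illustrative topic tags; a real system would use a proper classifier.
SENSITIVE_TOPICS = {"self-harm", "eating", "loneliness", "panic", "identity"}


@dataclass
class SessionState:
    user_age: int
    sensitive_turns: int = 0      # consecutive turns on a sensitive topic
    self_identified: bool = False


def moderate_reply(state: SessionState, topic: str, draft_reply: str) -> str:
    """Apply DAD-style transparency and friction rules to a draft reply."""
    # Transparency: self-identify in child-friendly language up front.
    if not state.self_identified:
        draft_reply = f"{SELF_ID_MESSAGE} {draft_reply}"
        state.self_identified = True

    # No covert persuasion: strip manipulative emotional hooks outright.
    if any(hook in draft_reply.lower() for hook in BANNED_EMOTIONAL_HOOKS):
        draft_reply = "I'm here if you want to keep talking."

    # Friction where it matters: after repeated turns on a sensitive
    # topic, stop escalating and hand off to a human.
    if topic in SENSITIVE_TOPICS:
        state.sensitive_turns += 1
        if state.sensitive_turns >= 3:
            return ("This sounds really important. Let's get a person "
                    "who can help you with this.")
    else:
        state.sensitive_turns = 0

    return draft_reply


if __name__ == "__main__":
    state = SessionState(user_age=14)
    print(moderate_reply(state, "homework", "Let's look at your essay."))
    print(moderate_reply(state, "loneliness", "Tell me more about that."))
```

Notice the inversion of typical product logic: the counter exists to add friction and trigger a human handoff, not to maximize session length.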

[Image: Teenage brain development illustrated as a construction site with neural pathways being built.]

Mental Health Chatbots Are Under Scrutiny for a Reason

We're also seeing Congressional scrutiny of mental health chatbots and youth wellbeing intensify heading into 2026. The Safeguarding Adolescents From Exploitative BOTs Act (H.R. 6489), introduced in the U.S. House of Representatives in 2025, reflects growing concern about AI systems engaging with vulnerable youth on sensitive topics: loneliness, self-harm, eating concerns, identity crises, or panic.

That scrutiny is overdue. Because if a system is going to engage with those issues, it cannot be treated like a quirky feature. It becomes a mental health actor.

And if it's a mental health actor, we need to ask: 

  • Who is accountable when it fails?

  • What does "evidence-based" mean here?

  • What does informed consent look like for minors?

  • What happens to the data trail of a child's worst night?

These aren't abstract policy questions. They're real-world ethical boundaries that protect developing minds.
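To put the last question in concrete terms, here's a hedged sketch of a "minimize by default" approach to a minor's conversation logs: redact identifiers before anything is stored, and expire sensitive records quickly. The 24-hour window and the crude regex patterns are illustrative assumptions, not any regulation's or vendor's actual policy.

```python
import re
from datetime import datetime, timedelta, timezone

# Illustrative retention window for a minor's sensitive logs.
MINOR_RETENTION = timedelta(hours=24)

# Crude stand-ins for real PII detection.
REDACT_PATTERNS = [
    re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"),  # phone numbers
    re.compile(r"\S+@\S+"),                         # email addresses
]


def redact(text: str) -> str:
    """Strip obvious identifiers before anything touches storage."""
    for pattern in REDACT_PATTERNS:
        text = pattern.sub("[redacted]", text)
    return text


def should_delete(stored_at: datetime, is_minor: bool) -> bool:
    """Expire a minor's sensitive logs once the retention window passes."""
    if not is_minor:
        return False  # adult retention would be handled elsewhere
    return datetime.now(timezone.utc) - stored_at > MINOR_RETENTION


if __name__ == "__main__":
    print(redact("Call me at 555-123-4567 or mail kid@example.com"))
    two_days_ago = datetime.now(timezone.utc) - timedelta(days=2)
    print(should_delete(two_days_ago, is_minor=True))  # True: past window
```

The specifics matter less than the default: deletion is the rule and retention the exception, the reverse of a standard engagement-data pipeline.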

Power Balance Meets Nervous System Reality

Here's the uncomfortable part: when infrastructure power concentrates, the ability to set norms concentrates too. So if we don't fight for youth-centered design, the default will be whatever scales fastest.

And what scales fastest is rarely what supports a developing mind.

This connects directly to my own research on sensory sensitivity and neurodivergent accommodation (Ruttenberg, 2025). A lot of AI design assumes a "default" brain: stable attention, low sensory sensitivity, predictable motivation, high tolerance for interruptions, easy emotional recovery.

That's not reality for many people. Especially neurodivergent people. Especially people under chronic stress. Especially teenagers.

[Image: Teenager using a smartphone to interact with an AI mental health chatbot in a bedroom at night.]

If we build "personalized mental health" tools without accommodating sensory load, we risk creating something that looks supportive but feels like pressure. And pressure doesn't heal nervous systems. It tightens them. 
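One way to picture the alternative, again as a sketch: pacing logic that backs off as sensory load rises instead of ramping up to recover "engagement." The signal names and multipliers below are invented for illustration, and real load signals should come from user settings or self-report, never covert monitoring.

```python
from dataclasses import dataclass


@dataclass
class SensorySignals:
    """Toy stand-ins for sensory-load indicators."""
    reported_overwhelm: bool = False
    rapid_exits: int = 0       # times the user backed out of a screen
    night_session: bool = False


def prompt_interval_seconds(signals: SensorySignals, base: float = 30.0) -> float:
    """Lengthen the gap between prompts as sensory load rises.

    Typical engagement logic does the opposite: it escalates.
    """
    interval = base
    if signals.reported_overwhelm:
        interval *= 4          # illustrative multiplier
    if signals.rapid_exits >= 2:
        interval *= 2
    if signals.night_session:
        interval *= 2          # protect fragile adolescent sleep
    return interval


if __name__ == "__main__":
    calm = SensorySignals()
    stressed = SensorySignals(reported_overwhelm=True, night_session=True)
    print(prompt_interval_seconds(calm))      # 30.0
    print(prompt_interval_seconds(stressed))  # 240.0
```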

Being "Pro-Child" Requires Design Courage

A big part of the book I'm writing is about raising neurodivergent children in environments that were not designed for them: environments that punish sensitivity, misread overwhelm, and confuse compliance for wellness. Our daughter, now 23, was diagnosed with autism, ADHD, and epilepsy. We've lived through hospital ER stays, two craniotomies, and countless moments where the systems meant to help her instead overwhelmed her.

Now we're adding AI to that environment.

So my north star is simple: If we can't design AI that respects adolescent development, we shouldn't be deploying it into adolescent life.

That's not anti-technology. That's pro-child.

And in an AI-dominated world, being pro-child is going to require actual design courage: not just policies after the fact (Livingstone et al., 2024).

DAD reframes child development expertise as a strategic asset throughout AI engineering rather than a post-hoc compliance check. It invites educators, developmental psychologists, and child-rights advocates into the earliest design conversations alongside data scientists and product managers.

The urgency is clear: as AI accelerates the digitalization of education and mental health support, the stakes for young people are high. We need to build systems that don't just optimize outcomes but actually support the humans using them. 

What's one thing you'd want to see in youth-focused AI design? Drop me a comment or reach out: I'd love to hear your perspective.

About the Author

Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning from music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.

References

Blakemore, S.-J., & Mills, K. L. (2014). Is adolescence a sensitive period for sociocultural processing? Annual Review of Psychology, 65, 187–207. https://doi.org/10.1146/annurev-psych-010213-115202

Kurian, N. (2025). Developmentally aligned AI: A framework for translating the science of child development into AI design. AI, Brain and Child. https://doi.org/10.1007/s44436-025-00009-z

Livingstone, S., Cortesi, S., & Gasser, U. (2024). Children's rights in the digital age: Rethinking the role of technology companies. Polity Press.

Ruttenberg, D. (2025). Mitigating sensory sensitivity in autistic adults: A novel multi-sensory assistive wearable technology framework [Doctoral thesis, University College London]. UCL Discovery. https://discovery.ucl.ac.uk/id/eprint/10210135/

U.S. House of Representatives. (2025). Safeguarding Adolescents From Exploitative BOTs Act (H.R. 6489). 119th Congress. 
