
7 Mistakes You Are Making with AI Risk Management (and How to Fix Them)

Let me be blunt with you: most organizations are getting AI risk management wrong. And it's not because they lack smart people or good intentions.


It's because they're missing the human element entirely.


After spending years conducting clinical trials, developing theories that resulted in 10 patents, and working extensively with neurodivergent populations, I've seen firsthand how AI systems can either support or completely derail the people they're supposed to serve. My research focuses on sensory sensitivity and how it affects focus, attention, and distractibility—factors that are tightly linked to mental health outcomes like anxiety and fatigue in neurodivergent populations (Ruttenberg, 2025), including individuals who identify as autistic or who have ADHD, epilepsy, or other neurodivergent conditions.


When organizations deploy AI without considering these human factors, they're not just creating compliance headaches. They're actively harming their workforce and undermining their own goals.


Here are seven mistakes I see CXOs, academicians, leaders, and government agencies making with AI risk management, and how to fix them.

Mistake #1: Treating AI Risk as a Purely Technical Problem

The Problem: Most AI risk frameworks focus exclusively on cybersecurity vulnerabilities, data breaches, and model accuracy. These are important, sure. But they represent maybe half the picture.


What about the employee with sensory processing differences who can't function when your AI-powered workplace monitoring system creates constant visual or auditory notifications? What about the student whose anxiety spikes because your AI assessment tool doesn't account for attention variability?


When you treat AI risk as purely technical, you ignore the biological and neurological realities of the humans interacting with these systems daily. And it can create legal and reputational risk fast: AI-driven hiring, assessment, and employee monitoring tools can inadvertently discriminate against neurodivergent sensory profiles (e.g., penalizing atypical eye contact, movement, pacing, or attention patterns), which runs directly into disability rights expectations under ADA/DOJ guidance.


The Fix: Expand your risk framework to include human-centered metrics. Assess how AI deployments affect focus, sensory load, and mental health outcomes. Build cross-functional teams that include neuroscience expertise, not just IT and legal.
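
To make "human-centered metrics" concrete, here's a minimal sketch of what an extended risk register entry could look like in code. The field names and the 0-5 scoring scale are my own illustration, not part of any published standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One entry in an AI risk register, with human-centered dimensions
    scored alongside the usual technical ones (0 = negligible, 5 = severe).
    All field names here are illustrative, not a published standard."""
    system_name: str
    owner: str
    # Conventional technical dimensions
    security_risk: int = 0
    privacy_risk: int = 0
    model_accuracy_risk: int = 0
    # Human-centered dimensions most frameworks leave out
    sensory_load_risk: int = 0      # added notifications, alerts, visual noise
    attention_impact_risk: int = 0  # interruptions, forced context switching
    wellbeing_risk: int = 0         # anxiety, fatigue, burnout exposure
    mitigations: list[str] = field(default_factory=list)

    def overall(self) -> int:
        """Treat the worst single dimension as the headline risk."""
        return max(self.security_risk, self.privacy_risk, self.model_accuracy_risk,
                   self.sensory_load_risk, self.attention_impact_risk, self.wellbeing_risk)
```

The specific fields matter less than the principle: sensory load, attention, and wellbeing sit in the same record, and get the same scrutiny, as security and accuracy.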

A glowing human brain with neural pathways and circuit patterns, representing the intersection of AI risk management and neuroscience.

Mistake #2: Deploying AI Without Neurodiversity Impact Analysis

The Problem: Roughly 15-20% of the global population is neurodivergent (Doyle, 2020). That's not a niche demographic; it's a significant portion of your workforce, your students, and your constituents.


Yet most AI systems are designed for neurotypical users. Ethical wearables that monitor productivity? They often misinterpret the work patterns of someone with ADHD. AI-driven open office environments with automated lighting and sound? Nightmare fuel for someone with sensory sensitivity.


When you deploy AI without analyzing its impact on neurodivergent users, you create systems that exclude, exhaust, and ultimately fail a substantial portion of your people. You also miss a core governance advantage: neurodivergent perspectives tend to "humanize" AI governance by forcing policies and controls to reflect how real people think, sense, and work—not an idealized user (Olusunle, 2025).


The Fix: Conduct neurodiversity impact assessments before deploying any AI system that affects workplace conditions, monitoring, or performance evaluation. Use inclusive-tech frameworks to make this repeatable (Korada et al., 2024). Include individuals with lived neurodivergent experience in your testing phases—and don't assume "advanced users" will be automatically served by generative AI; research shows neurodivergent "power users" still hit real accessibility hurdles (Glazko et al., 2025). This isn't just ethical; it's smart risk management.

Mistake #3: Ignoring Sensory Sensitivity in AI-Driven Environments

The Problem: Here's something most leaders don't realize: sensory sensitivity isn't just about discomfort. It directly impacts cognitive performance.


When AI systems introduce additional sensory inputs—think real-time dashboards, notification sounds, visual alerts, or ambient monitoring indicators—they add cognitive load. For someone with heightened sensory sensitivity, this can trigger a cascade: increased distractibility, reduced focus, elevated anxiety, and eventually, fatigue and burnout (Ruttenberg, 2025).


My clinical research has documented this pattern repeatedly. The environments we create with AI either support attention regulation or they sabotage it.


The Fix: Audit your AI-integrated environments for sensory impact. How many notifications does your system generate? What's the visual complexity of AI-driven interfaces? Can users customize or reduce sensory inputs? Design with sensory sensitivity as a core consideration, not an afterthought.
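
If it helps to see what such an audit might track, here's a rough sketch. The thresholds are starting points I picked for the example, not clinically validated cut-offs:

```python
from dataclasses import dataclass

@dataclass
class SensoryAudit:
    """Per-system sensory impact audit. Thresholds in flags() are
    illustrative starting points, not clinically validated cut-offs."""
    system_name: str
    notifications_per_hour: float    # average alerts, sounds, pop-ups pushed to a user
    visual_elements_on_screen: int   # dashboards, badges, tickers visible at once
    user_can_mute_or_simplify: bool  # can users reduce the sensory inputs themselves?

    def flags(self) -> list[str]:
        issues = []
        if self.notifications_per_hour > 6:
            issues.append("High interruption rate: batch or digest notifications")
        if self.visual_elements_on_screen > 10:
            issues.append("Visually dense interface: offer a simplified view")
        if not self.user_can_mute_or_simplify:
            issues.append("No user control over sensory inputs: add settings")
        return issues

# Example: an always-on productivity dashboard fails on all three counts
audit = SensoryAudit("focus-dashboard", notifications_per_hour=12,
                     visual_elements_on_screen=18, user_can_mute_or_simplify=False)
for issue in audit.flags():
    print(f"{audit.system_name}: {issue}")
```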

Diverse office workers with colored auras highlight neurodiversity and the impact of AI on focus, attention, and sensory sensitivity.

Mistake #4: Granting Excessive, Overprovisioned Access

The Problem: This one's well-documented in cybersecurity circles, but the human implications are often missed. When internal AI chatbots and monitoring systems have excessive access to corporate data, you're not just risking data breaches; you're eroding trust.


For neurodivergent employees who may already experience workplace anxiety, knowing that AI systems have broad access to their communications, performance data, and behavioral patterns can be genuinely distressing. This isn't paranoia. It's a reasonable response to surveillance.


Overprovisioned access creates both security vulnerabilities and psychological harm.


The Fix: Apply strict, role-based access controls and the principle of least privilege. But go further: communicate transparently with all employees—and especially those who identify as neurodivergent—about what data AI systems can and cannot access. Reduce the ambient anxiety and subsequent fatigue that come from uncertainty. Regular access reviews should be standard practice.
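
As a rough illustration of what that review can look like in practice, here's a sketch that compares the scopes each AI system has been granted against what its role actually requires. The systems and scope names are hypothetical; substitute your identity provider's real data:

```python
# Minimal least-privilege review sketch for AI service accounts.
# Systems and scope names below are hypothetical.

REQUIRED_SCOPES = {
    "hr-chatbot": {"read:hr_faq", "read:policy_docs"},
    "meeting-summarizer": {"read:own_meetings"},
}

GRANTED_SCOPES = {
    "hr-chatbot": {"read:hr_faq", "read:policy_docs", "read:all_email"},
    "meeting-summarizer": {"read:own_meetings", "read:performance_reviews"},
}

def overprovisioned(granted: dict, required: dict) -> dict:
    """Return, per AI system, any scopes granted beyond what its role needs."""
    return {
        system: sorted(scopes - required.get(system, set()))
        for system, scopes in granted.items()
        if scopes - required.get(system, set())
    }

for system, excess in overprovisioned(GRANTED_SCOPES, REQUIRED_SCOPES).items():
    print(f"{system}: revoke {', '.join(excess)}")
```

Run something like this on a schedule, surface the results to the people being monitored, and "regular access reviews" stops being an aspiration.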

Mistake #5: Lacking AI Inventory and Oversight

The Problem: You can't manage what you can't see. Many organizations have lost track of which AI systems they're actually running. Shadow AI (tools adopted by departments or individual employees without IT's knowledge) is everywhere.


This is a compliance nightmare, obviously. But it's also a human factors nightmare. If you don't know what AI systems are operating in your organization, you can't assess their impact on employee wellbeing. That productivity tracker someone installed? It might be generating stress responses you're completely unaware of.


The Fix: Create and maintain a comprehensive AI inventory. Form cross-functional governance teams that include HR, IT, legal, and, critically, someone who understands human factors and neuroscience. Establish clear policies for AI adoption and use risk dashboards for real-time tracking.
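
Here's a minimal sketch of what an inventory record might capture, with human-impact review status sitting right next to ownership and data access. The systems and field names are invented for illustration:

```python
# A minimal AI inventory sketch. The systems and field names are invented;
# the point is that human-impact status lives alongside ownership and data access.

inventory = [
    {"system": "sales-email-assistant", "owner": "Sales Ops",
     "data_accessed": "CRM contacts",
     "affects_monitoring_or_evaluation": False,
     "human_impact_assessed": True},
    {"system": "productivity-tracker", "owner": "Unknown (shadow AI)",
     "data_accessed": "Keystrokes, app usage",
     "affects_monitoring_or_evaluation": True,
     "human_impact_assessed": False},
]

# Anything that touches monitoring or evaluation but has never had a
# human-impact review goes to the top of the governance queue.
for record in inventory:
    if record["affects_monitoring_or_evaluation"] and not record["human_impact_assessed"]:
        print(f"Escalate for review: {record['system']} (owner: {record['owner']})")
```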

Split workspace shows chaotic digital overload versus calm focus, illustrating the need for human-centered AI environments.

Mistake #6: Skipping Red Team Testing

The Problem: According to recent data, 89% of organizations deploy AI systems without proper red teaming protocols. (Red teaming is an exercise in which ethical hackers or security professionals act as adversaries, simulating real-world attacks against your organization's systems, people, and physical defenses to uncover vulnerabilities before malicious attackers do. The goal is to think and act like a real threat actor, find weaknesses in technology, in cognitive and other processes, and in human sensitivities and awareness, and then feed those insights back to strengthen defenses, often working alongside a defensive Blue Team.)


Essentially, most organizations are operating on the hope that nothing goes wrong.


But red teaming shouldn't just test for technical vulnerabilities. It should test for human vulnerabilities too. How does your AI system perform when users are fatigued? When they're distracted? When they have processing differences that affect how they interpret AI outputs?


Systems that work perfectly in lab conditions often fail catastrophically when real humans, with all their neurological diversity, start using them.


The Fix: Implement comprehensive red team exercises that include human factors scenarios. Test with diverse user groups, including neurodivergent individuals. Simulate real-world conditions where attention, fatigue, and sensory load are variables, not constants.
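
One way to keep human factors from being ad hoc is to enumerate them as explicit test conditions. This sketch builds a simple scenario matrix; the condition levels are illustrative, and a real exercise would define them with clinicians, accessibility specialists, and the users themselves:

```python
from itertools import product

# Human-factors scenario matrix for red team exercises.
# Condition levels are illustrative placeholders.
fatigue = ["rested", "end-of-shift"]
distraction = ["quiet room", "open office with interruptions"]
sensory_load = ["notifications muted", "default notification settings"]
tester = ["neurotypical tester", "neurodivergent tester (lived experience)"]

scenarios = list(product(fatigue, distraction, sensory_load, tester))
print(f"{len(scenarios)} scenarios to cover")  # 2 x 2 x 2 x 2 = 16

for i, (f, d, s, t) in enumerate(scenarios, start=1):
    print(f"{i:02d}. {t}, {f}, {d}, {s}: "
          "record task completion, error rate, and self-reported stress")
```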

Mistake #7: Failing to Monitor and Realign Continuously

The Problem: AI systems drift. Their performance changes over time as real-world data diverges from training data. Most organizations know this.


What fewer organizations track is how human responses to AI systems change over time. Initial tolerance for AI monitoring can erode into chronic stress. Sensory loads that seemed manageable at deployment can accumulate into burnout after months of exposure.


Without continuous monitoring of both system performance and human outcomes, you're flying blind.


The Fix: Implement comprehensive monitoring that tracks human-centered metrics alongside technical ones. Conduct regular wellbeing assessments for employees interacting with AI systems. Adopt frameworks like the NIST AI Risk Management Framework, but extend them to include human factors. And please: don't wait until systems are live to start thinking about this.
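
As a sketch of what "human-centered metrics alongside technical ones" could mean in a monitoring loop, here's a toy check that flags a system when either the model drifts or the people around it report declining wellbeing. The thresholds and the choice of drift metric are my assumptions, not prescriptions from the NIST framework:

```python
from statistics import mean

def needs_realignment(drift_score: float,
                      wellbeing_scores: list[float],
                      drift_threshold: float = 0.2,
                      wellbeing_floor: float = 3.5) -> list[str]:
    """Flag a deployed AI system for review when either the model or the
    humans around it show signs of trouble.

    drift_score: e.g., a population stability index between training and live data
    wellbeing_scores: recent pulse-survey responses (1 = burned out, 5 = thriving)
    Thresholds are illustrative defaults, not validated cut-offs.
    """
    reasons = []
    if drift_score > drift_threshold:
        reasons.append(f"data drift {drift_score:.2f} exceeds {drift_threshold}")
    if wellbeing_scores and mean(wellbeing_scores) < wellbeing_floor:
        reasons.append(f"mean wellbeing {mean(wellbeing_scores):.1f} is below {wellbeing_floor}")
    return reasons

# Example: the model looks stable, but the people using it are wearing down.
print(needs_realignment(drift_score=0.08, wellbeing_scores=[3.0, 2.8, 3.4, 3.1]))
```

Either signal on its own should start the realignment conversation; waiting for both is how chronic stress turns into turnover.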

Corporate building with glowing AI nodes reveals complex AI systems and hidden risks in organizational risk management.

The Bottom Line

AI risk management isn't just about protecting your organization from hackers and compliance violations. It's about protecting your people from systems that ignore their neurological and cognitive realities.


As someone who has spent a career researching how sensory sensitivity affects attention, focus, and mental health, particularly in neurodivergent populations, I can tell you that the human costs of poorly managed AI are real and measurable. Anxiety. Fatigue. Reduced performance. Turnover.


These aren't soft concerns. They're hard business risks.


The organizations that get AI risk management right will be the ones that understand this: every AI system operates within a human context. Ignore that context, and you're not managing risk. You're creating it.


Ready to rethink your approach? Start by auditing your current AI deployments through a human-centered lens. The vulnerabilities you find might surprise you.

References

Doyle, N. (2020). Neurodiversity at work: A biopsychosocial model and the role of the individual, organisation and society. British Medical Bulletin, 135(1), 108–118.

Glazko, K., Cha, J., Lewis, A., Kosa, B., Wimer, B. L., Zheng, A., & Zheng, Y. (2025). Autoethnographic insights from neurodivergent GAI "power users." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 274, 1–19. https://doi.org/10.1145/3706598.3713670

Korada, L., Sikha, V. K., & Siramgari, D. (2024). AI & accessibility: A conceptual framework for inclusive technology. International Journal of Intelligent Systems and Applications in Engineering, 12(23s), 983–992.

Olusunle, A. (2025, July). How neurodivergent minds could humanize AI governance. World Economic Forum.

Ruttenberg, D. P. (2025). Towards Technologically Enhanced Mitigation of Autistic Adults' Sensory Sensitivity Experiences and Attentional, and Mental Wellbeing Disturbances (Doctoral dissertation, University of London, University College London (United Kingdom)).


Dr David Ruttenberg PhD, FRSA, FIoHE, AFHEA, HSRF is a neuroscientist, autism advocate, Fulbright Specialist Awardee, and Senior Research Fellow dedicated to advancing ethical artificial intelligence, neurodiversity accommodation, and transparent science communication. With a background spanning music production to cutting-edge wearable technology, Dr Ruttenberg combines science and compassion to empower individuals and communities to thrive. Inspired daily by their brilliant autistic daughter and family, Dr Ruttenberg strives to break barriers and foster a more inclusive, understanding world.


 
 
 
