
A growing body of research suggests that artificial intelligence tools are giving neurodiverse professionals — including those with ADHD, autism, and dyslexia — a stronger footing in the modern workplace. As AI agents become more sophisticated in 2025, workers report that features such as automated note-taking, time management, and communication support are helping them overcome barriers in environments not originally designed for diverse cognitive styles.
A recent study by the UK’s Department for Business and Trade found that neurodiverse employees were 25% more satisfied with AI assistants, and more likely to recommend them, than their neurotypical counterparts.
For Tara DeZao, senior director of product marketing at enterprise software firm Pega, AI tools have been transformative. Diagnosed with combined-type ADHD, DeZao says note-taking AI has changed how she participates in meetings. “Standing up and walking around during a meeting means I’m not taking notes, but now AI can synthesize the entire meeting into a transcript and pick out top-level themes,” she said. “I’ve white-knuckled my way through the business world, but these tools help so much.”
AI systems have proved especially effective at supporting executive function, time management, and communication — common areas of difficulty for many neurodiverse individuals. Note-taking bots, scheduling assistants, and summarization tools now act as real-time supports, freeing users to focus on higher-value work.
Studies also show that inclusion efforts benefit companies. According to research cited by Kristi Boyd, an AI specialist at SAS’s data ethics practice, organizations that invest in accessibility and ethical guardrails around AI are 1.6 times more likely to double their return on investment. “Investing in ethical guardrails that protect and aid neurodivergent workers is not just the right thing to do — it’s a smart way to make good on your AI investments,” Boyd said.
However, Boyd cautioned that businesses must address three primary risks: competing needs, algorithmic bias, and inappropriate disclosure. Tools designed for one neurodiverse group may conflict with the needs of another — document readers that help dyslexic workers, for example, may overstimulate those with sensory sensitivities. Boyd emphasized the importance of choice-based frameworks that balance competing needs while protecting employee privacy.
Bias in AI remains another challenge. Duke University research has shown that algorithms can unintentionally associate neurodivergence with danger or dysfunction. Given persistent workplace stigma, Boyd said it is crucial for companies to offer secure, anonymous mechanisms for feedback and issue reporting.
Some organizations are addressing inclusivity through innovation initiatives. The nonprofit Humane Intelligence launched its Bias Bounty Challenge in October, inviting participants to identify biases in AI systems to build more inclusive communication platforms for users with cognitive differences, sensory sensitivities, or alternative communication styles.
Emotion-recognition AI, for instance, can help individuals who struggle to read facial expressions during video calls. But as Boyd noted, these tools must be trained to recognize diverse communication patterns fairly and without embedding harmful assumptions.
For DeZao, AI has also reduced distractions in her day-to-day workflow. “One of the most difficult pieces of our hyper-connected world is that we’re all expected to multitask,” she said. “If I’m working on something and a new request comes in over Slack or Teams, it completely knocks me off my thought process. Being able to take that request, outsource it to AI, and keep working has been a godsend.”
