In the chapter “Automating Autism,” Keyes (2023) offers a daring critique of how Artificial Intelligence (AI) treats disability—especially autism. By highlighting how current AI ethics debates often disregard or misinterpret autistic voices, Keyes raises an important question: Are we truly building ethical AI if it only serves people who fit “normal” communication standards?
The blind spot in AI ethics
AI ethics discussions frequently revolve around principles like fairness, accountability, and transparency. However, these frameworks often assume that everyone communicates in typical ways or has the ability to protest if something goes wrong. For autistic individuals, especially those who communicate differently, this assumption can result in exclusion and misrepresentation.
Additionally, societal biases about autism—such as viewing it solely as a “deficit”—influence how AI is developed. For example, models built on outdated stereotypes may label autistic individuals as unemotional or “robotic.” These harmful perceptions not only reinforce stigma but also shape the way technologies are designed and implemented.
Diagnosing autism with AI: the risks
AI tools for diagnosing autism are becoming increasingly popular, using machine learning and computer vision to analyze eye contact, facial expressions, and repetitive behaviours. While these tools promise quicker diagnoses, they raise important concerns:
- Marginalizing autistic voices: These tools often leave autistic individuals out of the development process, treating them as subjects rather than active contributors.
- Bias in diagnosis: Framing autistic behaviours as "problems" to be fixed reinforces negative stereotypes and ignores the diversity of autistic experiences.
By prioritizing input from clinicians and designers over lived experiences, these technologies risk perpetuating outdated views of autism rather than fostering inclusion.

Employment, exploitation, and missing voices
Startups that employ autistic workers in data-labelling roles often highlight their capacity for sustained focus on repetitive tasks. While these jobs provide opportunities, the marketing narratives frequently reduce autistic individuals to their perceived "machine-like" qualities.
Key concerns:
- Stereotypes: Describing autistic workers as uniquely suited to tedious work reinforces harmful tropes.
- Lack of representation: The voices of autistic employees are rarely included when discussing workplace satisfaction, job growth, or inclusion in meaningful roles.
Instead of celebrating neurodiverse contributions holistically, these narratives risk dehumanizing individuals by focusing solely on their utility.
A feminist lens: challenging power and redefining "normal"
What makes Keyes's analysis feminist is its bold focus on questioning who holds power and whose voices are valued. Feminism isn't just about gender; it's about exposing and dismantling systems that exclude those deemed "different" or "less capable." Autistic people are often pushed to the margins because society fails to recognise their unique ways of communicating and engaging with the world. Instead of seeing neurodiverse communication as valid and meaningful, it's treated as a problem to be fixed. A shift is needed in how AI is developed, one that begins with including autistic voices and valuing their lived experiences. Doing so is essential to challenging outdated notions of "normal" and creating truly inclusive, representative technologies.

Building better AI through inclusion
Ultimately, "Automating Autism" delivers a powerful message: ethical AI cannot be achieved by ticking boxes while entire populations are pushed to the margins; it must actively include those who are most often ignored.
How to build inclusive AI:
- Engage autistic communities: Include autistic voices at every stage of development, from research to deployment.
- Challenge stereotypes: Avoid framing autism as a deficit and focus on representing neurodiverse experiences authentically.
- Redefine success metrics: Move beyond conventional definitions of “fairness” and design AI systems that respect alternative communication styles.
By adopting a feminist approach, we can design AI that values neurodiversity and benefits all individuals, not just those who fit traditional norms.
Conclusion
AI’s potential to revolutionize society is undeniable, but its impact depends on who it serves—and who it leaves behind. From diagnosis tools to workplace applications, failing to include autistic voices risks perpetuating harmful biases and stereotypes.
A feminist perspective pushes us to rethink “normal” and advocate for technologies that reflect the full spectrum of human diversity. By including autistic voices and respecting their unique contributions, we can build ethical AI that empowers rather than marginalizes. After all, true progress isn’t about fitting everyone into one mould – it’s about embracing the differences that make us human.