Why Robots Are Not Effective Tools for Supporting Autistic People
Despite a push from investors, new research shows human-robot interaction does not deliver the help that autistic people need.

Even as the education technology industry rushes to develop robots that can deliver therapy to autistic children, a new study from researchers at the University of California San Diego Jacobs School of Engineering finds the devices are both ineffective and unwanted.
An autistic PhD candidate in computer science, Naba Rizvi is the lead author of a review of studies published between 2016 and 2022 that focused on robots' interactions with autistic people. She and her colleagues found that almost all of the research excludes the perspectives of the autistic subjects, pathologizes them by using an outdated understanding of the neurotype, and contains little, if any, evidence that therapies delivered by robots are effective.
More than 93% of the studies start with the now-controversial stance that autism is a condition that can and should be cured. Nearly all test the use of robots to diagnose the condition or to teach autistic children to interact in ways that make them seem more neurotypical, such as making eye contact.
While most research on human-robot interaction starts by asking the subjects what their needs are, nearly 90% of the researchers in Rizvi鈥檚 sample did not ask autistic people whether they want the technology. Fewer than 3% included autistic people in framing the theory being investigated, and just 5% incorporated their perspectives in designing research.
"Even clinicians are not convinced of their effectiveness, and minimal progress has been made in making such robots clinically useful," Rizvi writes. "In fact, research even suggests that this use of robots may be counterproductive and negatively impact the skills they are designed to hone in autistic end-users."
Proponents reason that robots can not only deliver behavior therapy more cheaply but will appeal more to children than human therapists. Investors forecast the technology could become the centerpiece of a large and fast-growing market. Not yet common in special education classrooms, robots programmed to intervene with autistic children are being marketed to schools and even families.
Some of the early research the robotics industry has recently relied on in designing its experiments described autistic children as less human than chimpanzees, Rizvi adds: "These systems promote the idea that autistic people are 'deficient' in their humanness, and that robots can teach them how to be more human-like. This echoes foundational work that has questioned the humanity of autistic people, and proposed non-human entities such as animals may be more human than them."
Most of the research the team reviewed was published in robotics journals, not autism reviews. Seventy-six of the studies used anthropomorphic or humanoid robots to teach social skills, while 15 relied on devices designed to look like animals. One used a robot to diagnose "abnormal" social interactions.

Researchers leaned on harmful tropes that describe autistic people as robot-like, and robots as intrinsically autistic. Many of the papers reviewed also accept an old and controversial premise that autistic people are not motivated to interact socially with others. Fewer than 10% included representative samples of girls, whose autistic "behaviors" are more likely to show up as depression and other mental health conditions.
The report comes as a rift is widening between proponents of using behavioral therapy and autistic adults who say the intervention, commonly called applied behavior analysis, is inhumane. A growing body of research suggests that efforts to train autistic children to act and appear more like their neurotypical, or non-autistic, peers are ineffective and often traumatizing.
In applied behavior analysis, a therapist uses positive and negative reinforcement to attempt to "extinguish" mannerisms perceived as undesirable and to replace them with behaviors considered "normal." Therapists work one-on-one with a child, often 10 to 40 hours a week. It is repetitive and expensive.
Many autistic adults who have undergone the therapy note that some of the mannerisms it attempts to eliminate, such as hand-flapping or rocking, are harmless ways to compensate for overstimulation or to express positive emotions. Nonetheless, the therapy is widely considered the "gold standard" of autism interventions.
Rizvi says she's dismayed but not surprised by the push to develop automated therapists. The use of robotics in medicine is exploding, and almost all of the researchers in her sample framed their work using what advocates call the "medical model" of disability. Historically, disabilities have been seen as medically diagnosable deficits to be treated or cured.
Over the past couple of decades, however, people with disabilities have increasingly pushed for the adoption of a "social model," which holds that a lack of inclusion in all realms of public life is the central issue. Autistic adults have advocated for better representation in research, so that more studies are geared toward making education, employment, housing and other sectors of society more accommodating.
Just 6% of the papers Rizvi and her colleagues reviewed start from a social model. This is problematic, they say, because many autistic people have needs that can be addressed by improved technology. Non-verbal students, for example, benefit from evolving "augmentative and alternative communication" devices, which families often struggle to get schools to provide.
Rizvi's main research focus is on the development of ethical artificial intelligence. Because the datasets AI is "trained" on are often biased, so are the resulting algorithms, she explains. Research has shown, for example, that resumes that mention jobs in disability agencies or support capacities are automatically scored lower by AI than those that don't.
Another example is AI-enabled online content moderation. Social media posts and comments that mention disability-related topics are often rejected as toxic, Rizvi says.
"When it comes to content moderation, the data sets don't always represent the perspectives of the communities," she says. "And they do this thing where, say, if you have three people trying to agree on whether or not a sentence is ableist, the automatic assumption is that the majority vote is the right one."
"Are Robots Ready to Deliver Autism Inclusion? A Critical Review" was presented at a recent academic conference. The presentation includes suggestions for ensuring research is inclusive and avoids harmful stereotypes and historical misrepresentations, which are available on Rizvi's own website.