Dialogue agents become more engaging through recipient design, which requires user-specific information. However, a user’s identification with marginalized communities, such as a migration or disability background, can elicit biased language. This study compares LLM responses to neurodivergent user personas with disclosed versus masked neurodivergent identities. Using a dataset built from public Instagram comments, we evaluated four open-source models on story generation, dialogue generation, and retrieval-augmented question answering. Our analyses show biases in users’ identity construction across all models and tasks. Binary classifiers trained on each model’s output can distinguish between language generated for prompts with and without self-disclosures, with stronger biases linked to more explicit disclosures. Some models’ safety mechanisms result in denial-of-service behaviors. LLMs’ recipient design for neurodivergent identities relies on stereotypes tied to neurodivergence.
Disciplines:
Computer science
Author, co-author:
HOEHN, Sviatlana ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
PHILIPPY, Fred ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
ANDRÉ, Elisabeth ; University of Augsburg
External co-authors:
yes
Language:
English
Title:
On Speakers’ Identities, Autism Self-Disclosures and LLM-Powered Robots
Publication date:
August 2025
Event name:
26th Annual Meeting of the Special Interest Group on Discourse and Dialogue