Abstract:
Growing requirements for ethical statements in AI research have not translated into meaningful engagement, with many researchers treating them as a bureaucratic burden. Prior analyses of NLP conferences, including EMNLP 2022 and 2023, show that while the number of ethical consideration statements (ECSs) has slightly increased, they remain superficial, often repetitive, and largely limited to issues such as privacy, bias, or annotator compensation. Industry track papers, which are potentially closest to deployment, rarely conduct substantive ethical analysis, frequently offloading responsibility to 'humans in the loop'. This article argues for the necessity of an a priori ethical and risk assessment, particularly for sensitive applications such as robot-assisted education for neurodivergent children. We evaluate the design of a platform combining social robots with large language models (LLMs) to support caregivers in creating educational materials. Drawing on the EU Guidelines for Trustworthy AI, risk analysis frameworks, expert interviews, and literature evidence, we identify 68 perceived risks and opportunities for children and caregivers in this context. Based on the risk analysis, we formulate an action plan comprising a communication plan, an assessment plan, and a system design plan. Our findings demonstrate that integrating ethics early in system design can improve trustworthiness and societal value while reducing costs. However, they also reveal significant methodological gaps and, consequently, a lack of empirical evidence needed to assess major risks appropriately.