Abstract:
[en] Educational Question Generation (EduQG) aims to automatically generate educational questions from textual content, a capability that is crucial for the expansion of online education. Prior research in EduQG has predominantly relied on cross-entropy loss for training, which can lead to issues such as exposure bias and inconsistencies between training-time and test-time evaluation metrics. To mitigate these issues, we propose a reinforcement learning (RL) based large language model (LLM) for educational question generation. In particular, we fine-tune the Google FLAN-T5 model using a mixed objective function that combines cross-entropy and RL losses to ensure the generation of questions that are syntactically and semantically accurate. Experimental results on the SciQ question generation dataset show that the proposed method is competitive with current state-of-the-art systems in terms of predictive performance and linguistic quality.
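The mixed objective described above can be sketched in a toy form. This is a minimal illustration, not the authors' implementation: the abstract does not specify the RL algorithm or the mixing weight, so the self-critical (SCST-style) policy-gradient term and the hypothetical weight `gamma` below are assumptions for exposition only.

```python
import math

def cross_entropy(probs, target_idx):
    # Standard token-level cross-entropy: negative log-likelihood
    # of the ground-truth token under the model's distribution.
    return -math.log(probs[target_idx])

def rl_loss(sample_logprob, sample_reward, baseline_reward):
    # Self-critical policy-gradient term (an assumed choice of RL loss):
    # scale -log p(sampled sequence) by the reward advantage over a baseline.
    return -(sample_reward - baseline_reward) * sample_logprob

def mixed_loss(ce, rl, gamma=0.9):
    # Convex combination of the RL and cross-entropy losses;
    # gamma is a hypothetical mixing weight, not from the paper.
    return gamma * rl + (1.0 - gamma) * ce

# Toy example: one decoding step over a 3-token vocabulary.
probs = [0.1, 0.7, 0.2]
ce = cross_entropy(probs, target_idx=1)
rl = rl_loss(sample_logprob=math.log(0.7),
             sample_reward=0.8, baseline_reward=0.5)
total = mixed_loss(ce, rl, gamma=0.9)
```

In this formulation the cross-entropy term keeps generations fluent and on-distribution, while the reward-weighted term directly optimizes the (non-differentiable) evaluation metric, which is the usual motivation for mixing the two.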