References of "Lothritz, Cedric"
Comparing MultiLingual and Multiple MonoLingual Models for Intent Classification and Slot Filling
Lothritz, Cedric; Allix, Kevin; Lebichot, Bertrand et al.

in 26th International Conference on Applications of Natural Language to Information Systems (2021, June 25)

With the momentum of conversational AI for enhancing client-to-business interactions, chatbots are sought in various domains, including FinTech, where they can automatically handle requests for opening/closing bank accounts or issuing/terminating credit cards. Since they are expected to replace emails and phone calls, chatbots must be capable of dealing with the diversity of client populations. In this work, we focus on the variety of languages, in particular in multilingual countries. Specifically, we investigate strategies for training deep learning models of chatbots with multilingual data. We perform experiments for the specific tasks of Intent Classification and Slot Filling in financial-domain chatbots and assess the performance of the multilingual mBERT model against multiple monolingual models.
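
A minimal sketch of the multilingual setup the abstract describes, not the authors' code: it loads the public mBERT checkpoint for intent classification via the Hugging Face transformers library. The intent labels and the example utterances are hypothetical.

# Hypothetical illustration: mBERT as a banking-intent classifier.
# "bert-base-multilingual-cased" is the public mBERT checkpoint;
# the INTENTS labels and utterances below are invented for this sketch.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INTENTS = ["open_account", "close_account", "issue_card", "terminate_card"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(INTENTS)
)
model.eval()

# One French and one German request: a single multilingual model covers both,
# whereas the monolingual alternative needs one fine-tuned model per language.
utterances = [
    "Je voudrais ouvrir un compte bancaire.",  # "I would like to open a bank account."
    "Bitte kuendigen Sie meine Kreditkarte.",  # "Please terminate my credit card."
]
batch = tokenizer(utterances, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
for text, idx in zip(utterances, logits.argmax(dim=-1).tolist()):
    print(f"{text} -> {INTENTS[idx]}")  # random until the classifier head is fine-tuned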

A Comparison of Pre-Trained Language Models for Multi-Class Text Classification in the Financial Domain
Arslan, Yusuf; Allix, Kevin; Veiber, Lisa et al.

in Companion Proceedings of the Web Conference 2021 (WWW '21 Companion), April 19–23, 2021, Ljubljana, Slovenia (2021, April 19)

Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition
Lothritz, Cedric; Allix, Kevin; Veiber, Lisa et al.

in Proceedings of the 28th International Conference on Computational Linguistics (2020, December)

Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task and has remained an active research field. In recent years, transformer models, and more specifically the BERT model developed at Google, revolutionised the field of NLP. While the performance of transformer-based approaches such as BERT has been studied for NER, there has not yet been a study for the fine-grained Named Entity Recognition (FG-NER) task. In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) to two non-transformer-based models (CRF and BiLSTM-CNN-CRF). Furthermore, we apply each model to a multitude of distinct domains. We find that transformer-based models incrementally outperform the studied non-transformer-based models in most domains with respect to the F1 score. We also find that the choice of domain significantly influences performance, regardless of the respective data size or the model chosen.
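
As a hedged illustration of the kind of transformer-based NER the paper evaluates, not the paper's own implementation, the sketch below runs a publicly available BERT checkpoint fine-tuned for NER through the Hugging Face pipeline API; the example sentence is made up.

from transformers import pipeline

# "dslim/bert-base-NER" is a public BERT model fine-tuned on CoNLL-2003
# (coarse PER/ORG/LOC/MISC labels, not the fine-grained tag set studied
# in the paper); aggregation_strategy="simple" merges word pieces back
# into whole entity spans.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

for entity in ner("Cedric Lothritz works at the University of Luxembourg."):
    print(entity["entity_group"], "|", entity["word"], "|", round(float(entity["score"]), 3))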
