Keywords :
Domain Modeling; Large Language Models; Tree of Thoughts
Abstract :
[en] Domain modeling is typically an iterative process in which modeling experts interact with domain experts to complete and refine the model. Recently, several attempts have been made to assist, or even replace, the modeler with a Large Language Model (LLM). Various LLM prompting strategies have been tried, but with limited success. In this paper, we advocate the adoption of a Tree-of-Thoughts (ToT) strategy to overcome the limitations of current approaches based on simpler prompting strategies. With a ToT strategy, we can decompose the modeling process into several sub-steps, using for each step a specialized set of generator and evaluator prompts to optimize the quality of the LLM output. As part of our adaptation of ToT, we provide a Domain-Specific Language (DSL) to facilitate the formalization of the ToT process for domain modeling. Our approach is implemented as part of an open-source tool available on GitHub.
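The abstract describes the core mechanism: the modeling task is split into sub-steps, each sub-step uses its own generator and evaluator prompts, and only the most promising partial models are carried forward. The following is a minimal Python sketch of such a Tree-of-Thoughts loop, assuming a simple beam-search formulation; all names (tree_of_thoughts, generate, evaluate, beam_width) and the stub generator/evaluator are illustrative assumptions and do not reflect the actual DSL or tool described in the paper.

    from typing import Callable, List

    Model = str  # a partial domain model, e.g. a textual class-diagram description
    Step = str   # one modeling sub-step, e.g. "identify classes"

    def tree_of_thoughts(
        description: str,
        steps: List[Step],
        generate: Callable[[Step, str, Model], List[Model]],  # step-specific generator prompt (LLM call)
        evaluate: Callable[[Step, str, Model], float],        # step-specific evaluator prompt (LLM call)
        beam_width: int = 3,
    ) -> Model:
        # For each sub-step, expand every kept candidate with the generator,
        # score the resulting partial models with the evaluator, and keep
        # only the best `beam_width` candidates for the next step.
        frontier: List[Model] = [""]  # start from an empty model
        for step in steps:
            candidates = [m for partial in frontier
                            for m in generate(step, description, partial)]
            candidates.sort(key=lambda m: evaluate(step, description, m), reverse=True)
            frontier = candidates[:beam_width] or frontier
        return frontier[0]

    if __name__ == "__main__":
        # Stub generator and evaluator standing in for real LLM prompts.
        steps = ["identify classes", "add attributes", "add associations"]
        gen = lambda step, desc, partial: [partial + f"[{step}: option {i}] " for i in range(2)]
        ev = lambda step, desc, model: float(len(model))
        print(tree_of_thoughts("A library lends books to members.", steps, gen, ev))

In a real setting, the generator would prompt an LLM for several candidate refinements of the partial model at the current sub-step, and the evaluator would prompt the LLM (or apply a heuristic) to score each candidate before pruning.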
Disciplines :
Computer science
Author, co-author :
SILVA MERCADO, Jonathan ✱; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
MA, Qin ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
CABOT, Jordi ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PI Cabot ; Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg
KELSEN, Pierre ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Proper, Henderik A. ; TU Wien, Vienna, Austria
✱ These authors have contributed equally to this work.
External co-authors :
yes
Language :
English
Title :
Application of the Tree-of-Thoughts Framework to LLM-Enabled Domain Modeling
Publication date :
21 October 2024
Event name :
International Conference on Conceptual Modeling 2024
Event place :
Pittsburgh, USA
Event date :
28-10-2024 to 31-10-2024
Main work title :
Conceptual Modeling - 43rd International Conference, ER 2024, Proceedings
Editor :
Maass, Wolfgang
Publisher :
Springer Science and Business Media Deutschland GmbH