No full text
Paper published in a book (Scientific congresses, symposiums and conference proceedings)
OptLLM: Optimal Assignment of Queries to Large Language Models
Liu, Yueyue; Zhang, Hongyu; Miao, Yuantian et al.
2024, in Chang, Rong N. (Ed.), Proceedings - 2024 IEEE International Conference on Web Services, ICWS 2024
Peer reviewed
 

Details



Keywords :
Cost-performance Tradeoff; Large Language Models; Multi-objective Optimization; Performance Prediction; Query Assignment; Optimal Assignment; Artificial Intelligence; Computer Networks and Communications; Computer Science Applications; Information Systems; Information Systems and Management
Abstract :
[en] Large Language Models (LLMs) have garnered considerable attention owing to their remarkable capabilities, leading to an increasing number of companies offering LLMs as services. Different LLMs achieve different performance at different costs. A challenge for users lies in choosing the LLMs that best fit their needs, balancing cost and performance. In this paper, we propose a framework for addressing the cost-effective query allocation problem for LLMs. Given a set of input queries and candidate LLMs, our framework, named OptLLM, provides users with a range of optimal solutions to choose from, aligning with their budget constraints and performance preferences, including options for maximizing accuracy and minimizing cost. OptLLM predicts the performance of candidate LLMs on each query using a multi-label classification model with uncertainty estimation and then iteratively generates a set of non-dominated solutions by destructing and reconstructing the current solution. To evaluate the effectiveness of OptLLM, we conduct extensive experiments on various types of tasks, including text classification, question answering, sentiment analysis, reasoning, and log parsing. Our experimental results demonstrate that OptLLM substantially reduces costs by 2.40% to 49.18% while achieving the same accuracy as the best LLM. Compared to other multi-objective optimization algorithms, OptLLM improves accuracy by 2.94% to 69.05% at the same cost or saves costs by 8.79% and 95.87% while maintaining the highest attainable accuracy.
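Illustration (not part of the record): the abstract describes two stages, per-query performance prediction followed by iterative construction of a non-dominated cost/accuracy front. The short Python sketch below only illustrates the underlying trade-off on a toy problem by enumerating assignments and filtering the non-dominated (cost, expected accuracy) points; the names, numbers, and brute-force search are assumptions for illustration and are not the OptLLM algorithm itself.

```python
# Illustrative sketch only: a simplified cost/accuracy trade-off for assigning
# queries to candidate LLMs. All data and the brute-force search are made up;
# OptLLM instead predicts per-query success with a multi-label classifier and
# builds the front iteratively (destruction/reconstruction), per the abstract.
import itertools
import random

random.seed(0)

N_QUERIES = 6
LLMS = ["llm_a", "llm_b", "llm_c"]                 # hypothetical candidate services
COST = {"llm_a": 0.5, "llm_b": 1.0, "llm_c": 2.0}  # assumed cost per query
# Pretend these are predicted success probabilities for each (query, LLM) pair.
PRED = {(q, m): random.uniform(0.5, 0.99) for q in range(N_QUERIES) for m in LLMS}

def evaluate(assignment):
    """Return (total cost, expected number of correct answers) for one assignment."""
    total_cost = sum(COST[m] for m in assignment)
    expected_correct = sum(PRED[(q, m)] for q, m in enumerate(assignment))
    return total_cost, expected_correct

def non_dominated(points):
    """Keep points not dominated in (lower cost, higher expected accuracy)."""
    return sorted({
        p for p in points
        if not any(o[0] <= p[0] and o[1] >= p[1] and o != p for o in points)
    })

# Exhaustive enumeration is only feasible at this toy size (3^6 assignments).
candidates = [evaluate(a) for a in itertools.product(LLMS, repeat=N_QUERIES)]
for cost, acc in non_dominated(candidates):
    print(f"cost={cost:.1f}  expected_correct={acc:.2f}")
```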
Disciplines :
Computer science
Author, co-author :
Liu, Yueyue;  The University of Newcastle, School of Information and Physical Sciences, Newcastle, Australia
Zhang, Hongyu;  Chongqing University, School of Big Data and Software Engineering, Chongqing, China
Miao, Yuantian;  The University of Newcastle, School of Information and Physical Sciences, Newcastle, Australia
Le, Van Hoang;  University of Newcastle, Australia
Li, Zhiqiang;  Shaanxi Normal University, School of Computer Science, China
External co-authors :
yes
Language :
English
Title :
OptLLM: Optimal Assignment of Queries to Large Language Models
Publication date :
2024
Event name :
2024 IEEE International Conference on Web Services (ICWS)
Event place :
Hybrid, Shenzhen, China
Event date :
07-07-2024 => 13-07-2024
Main work title :
Proceedings - 2024 IEEE International Conference on Web Services, ICWS 2024
Editor :
Chang, Rong N.
Publisher :
Institute of Electrical and Electronics Engineers Inc.
ISBN/EAN :
9798350368550
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 26 January 2026

Statistics



Scopus citations®: 1
Scopus citations® without self-citations: 1
OpenCitations: 0
OpenAlex citations: 6
