Most log-based anomaly detectors assume that logs are stable, yet logs are
often unstable due to software or environmental changes. Anomaly detection on
unstable logs (ULAD) is therefore a more realistic, yet under-investigated
challenge. Current approaches predominantly employ machine learning (ML)
models, which often require extensive labeled data for training. To mitigate
data insufficiency, we propose FlexLog, a novel hybrid approach for ULAD that
combines ML models -- decision tree, k-nearest neighbors, and a feedforward
neural network -- with a Large Language Model (Mistral) through ensemble
learning. FlexLog also incorporates a cache and retrieval-augmented generation
(RAG) to further enhance efficiency and effectiveness. To evaluate FlexLog, we
configured four datasets for ULAD, namely ADFA-U, LOGEVOL-U, SynHDFS-U, and
SYNEVOL-U. FlexLog outperforms all baselines by at least 1.2 percentage points
(pp) in F1 score while using much less labeled data (62.87 pp reduction). When
trained on the same amount of data as the baselines, FlexLog achieves up to a
13 pp increase in F1 score on ADFA-U across varying training dataset sizes.
Additionally, FlexLog maintains an inference time under one second per log
sequence, making it suitable for most applications except latency-sensitive
systems. Further analysis confirms the positive impact of FlexLog's key
components: the cache, RAG, and ensemble learning.
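
For illustration, the sketch below shows one way such a hybrid ensemble could be assembled. The class name, TF-IDF features, prompt format, majority-vote rule, and the llm_fn callable standing in for a Mistral query are assumptions made for this example, not FlexLog's actual design; the real implementation is available in the replication package listed below.

# Illustrative sketch of a hybrid ULAD ensemble (not the FlexLog implementation).
# llm_fn is a placeholder for an actual Mistral query returning "anomalous"/"normal".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier


class HybridULADetector:
    def __init__(self, llm_fn):
        self.vectorizer = TfidfVectorizer()
        # ML side of the ensemble: decision tree, k-NN, feedforward neural network.
        self.ml_models = [
            DecisionTreeClassifier(),
            KNeighborsClassifier(n_neighbors=5),
            MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
        ]
        self.retriever = NearestNeighbors(n_neighbors=3)  # RAG-style retrieval of labeled examples
        self.llm_fn = llm_fn    # callable: prompt string -> "anomalous" / "normal"
        self.cache = {}         # cache: log sequence -> previously obtained LLM verdict
        self.train_seqs, self.train_labels = [], []

    def fit(self, sequences, labels):
        X = self.vectorizer.fit_transform(sequences)
        for model in self.ml_models:
            model.fit(X, labels)
        self.retriever.fit(X)
        self.train_seqs, self.train_labels = list(sequences), list(labels)

    def _llm_vote(self, sequence):
        if sequence in self.cache:  # skip the LLM entirely for repeated sequences
            return self.cache[sequence]
        x = self.vectorizer.transform([sequence])
        _, idx = self.retriever.kneighbors(x)
        examples = "\n".join(
            f"{self.train_seqs[i]} -> {'anomalous' if self.train_labels[i] else 'normal'}"
            for i in idx[0]
        )
        prompt = (
            "Labeled log sequences:\n" + examples +
            f"\n\nIs this sequence anomalous or normal?\n{sequence}"
        )
        vote = 1 if "anomal" in self.llm_fn(prompt).lower() else 0
        self.cache[sequence] = vote
        return vote

    def predict(self, sequence):
        x = self.vectorizer.transform([sequence])
        votes = [int(m.predict(x)[0]) for m in self.ml_models]
        votes.append(self._llm_vote(sequence))
        return int(sum(votes) > len(votes) / 2)  # simple majority vote over ML models + LLM

In this sketch, the cache avoids repeated LLM queries for identical log sequences, while the nearest-neighbor retrieval plays the role of RAG by grounding the prompt in similar labeled examples.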
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SVV - Software Verification and Validation
Disciplines:
Computer science
Author, co-author:
Hadadi, Fatemeh ; University of Ottawa, Canada
Xu, Qinghua ; Lero, the Research Ireland Centre for Software, and University of Limerick, Ireland
The replication package of "LLM meets ML: Data-efficient Anomaly Detection on Unstable Logs" includes the implementation, datasets, and scripts used for the evaluation.