Profile

DYRMISHI, Salijona

University of Luxembourg, Interdisciplinary Centre for Security, Reliability and Trust (SnT), SerVal

Main Referenced Co-authors
CORDY, Maxime (4)
GHAMIZI, Salah (3)
Stoian, Mihaela Cătălina (2)
Giunchiglia, Eleonora (2)
LE TRAON, Yves (2)
Main Referenced Keywords
Computer Science - Learning (2); Computer Science - Artificial Intelligence (1); Computer Vision: Adversarial learning, adversarial attack and defense methods (1); Constraint Satisfaction and Optimization: Constraint Optimization (1); Constraint Satisfaction and Optimization: Constraint Satisfaction (1)
Main Referenced Units & Research Centers
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Other (4)
Interdisciplinary Centre for Security, Reliability and Trust (SnT) (1)
ULHPC - University of Luxembourg: High Performance Computing (1)
Main Referenced Disciplines
Computer science (6)

Publications (total 6)

The most downloaded
130 downloads
DYRMISHI, S. (2024). Enhancing Machine Learning Security: The Significance of Realistic Adversarial Examples [Doctoral thesis, Unilu - University of Luxembourg]. ORBilu-University of Luxembourg. https://hdl.handle.net/10993/61526

The most cited
6 citations (Scopus®)

DYRMISHI, S., GHAMIZI, S., & CORDY, M. (2023). How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. https://hdl.handle.net/10993/55777

DYRMISHI, S. (2024). Enhancing Machine Learning Security: The Significance of Realistic Adversarial Examples [Doctoral thesis, Unilu - University of Luxembourg]. ORBilu-University of Luxembourg. https://hdl.handle.net/10993/61526

DYRMISHI, S., Stoian, M. C., Giunchiglia, E., & CORDY, M. (2024). Deep generative models as an adversarial attack strategy for tabular machine learning [Paper presentation]. International Conference on Machine Learning and Cybernetics.
Peer reviewed

Stoian, M. C., DYRMISHI, S., CORDY, M., Lukasiewicz, T., & Giunchiglia, E. (2024). How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data [Paper presentation]. International Conference on Learning Representations (ICLR).
Peer reviewed

DYRMISHI, S., GHAMIZI, S., & CORDY, M. (2023). How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Peer reviewed

DYRMISHI, S., GHAMIZI, S., SIMONETTO, T. J. A., LE TRAON, Y., & CORDY, M. (2023). On the empirical effectiveness of unrealistic adversarial hardening against realistic adversarial attacks. In Conference Proceedings 2023 IEEE Symposium on Security and Privacy (SP) (pp. 1384-1400). IEEE. doi:10.1109/SP46215.2023.00049
Peer reviewed

SIMONETTO, T. J. A., DYRMISHI, S., GHAMIZI, S., CORDY, M., & LE TRAON, Y. (2022). A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22 (pp. 1313-1319). International Joint Conferences on Artificial Intelligence Organization. doi:10.24963/ijcai.2022/183
Peer reviewed
