Dyrmishi, S. (2024). Enhancing Machine Learning Security: The Significance of Realistic Adversarial Examples [Doctoral thesis, University of Luxembourg]. ORBilu, University of Luxembourg. https://orbilu.uni.lu/handle/10993/61526
Dyrmishi, S., Stoian, M. C., Giunchiglia, E., & Cordy, M. (2024). Deep generative models as an adversarial attack strategy for tabular machine learning [Paper presentation]. International Conference on Machine Learning and Cybernetics.
Stoian, M. C., Dyrmishi, S., Cordy, M., Lukasiewicz, T., & Giunchiglia, E. (2024). How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data [Paper presentation]. International Conference on Learning Representations (ICLR).
Dyrmishi, S., Ghamizi, S., & Cordy, M. (2023). How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Dyrmishi, S., Ghamizi, S., Simonetto, T. J. A., Le Traon, Y., & Cordy, M. (2023). On the empirical effectiveness of unrealistic adversarial hardening against realistic adversarial attacks. In Proceedings of the 2023 IEEE Symposium on Security and Privacy (SP) (pp. 1384-1400). IEEE. doi:10.1109/SP46215.2023.00049
Simonetto, T. J. A., Dyrmishi, S., Ghamizi, S., Cordy, M., & Le Traon, Y. (2022). A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22) (pp. 1313-1319). International Joint Conferences on Artificial Intelligence Organization. doi:10.24963/ijcai.2022/183