dark patterns; deceptive design patterns; design patterns; large language models (LLMs); multimodal LLMs; machine learning; pattern detection; Business, Management and Accounting (all)
Abstract :
[en] Traditional artificial intelligence models for detecting deceptive design patterns on user interfaces (UIs), such as machine-learning classifiers, offer limited coverage and lack multimodality. In contrast, Multimodal Large Language Models (MM-LLMs) can achieve wider coverage and superior detection performance while providing the reasoning behind each decision. We propose and implement an MM-LLM-based approach (DeceptiLens) that analyzes UIs and assesses the presence of deceptive design patterns. Our design employs a Retrieval-Augmented Generation (RAG) process and tasks the model with capturing deceptive patterns, classifying their category (e.g., false hierarchy, confirmshaming), and explaining the reasoning behind each classification using recent prompt engineering techniques such as Chain-of-Thought (CoT). We first create a dataset by collecting UI screenshots from the literature and web sources and quantify the agreement between the model's outputs and the opinions of several experts. We additionally ask experts to gauge the transparency of the system's explanations for its classifications in terms of the recognized metrics of clarity, correctness, completeness, and verifiability. The results indicate that our approach captures deceptive patterns in UIs with high accuracy while providing clear, correct, complete, and verifiable justifications for its decisions. We additionally release two curated datasets: one of expert-labeled UIs with deceptive design patterns, and one of AI-generated explanations. Lastly, we propose recommendations for future improvement of the approach in various contexts of use.
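To make the abstract's RAG-plus-CoT idea concrete, the following is a minimal, purely illustrative Python sketch of how retrieved taxonomy definitions could be combined with a Chain-of-Thought instruction into a classification prompt. All names here (the taxonomy entries, `retrieve_definitions`, `build_prompt`) are assumptions for illustration; the record does not describe DeceptiLens's actual implementation, retrieval backend, or prompt wording.

```python
# Hypothetical sketch of a RAG-assisted deceptive-pattern prompt builder.
# The taxonomy snippets and function names are illustrative only.

KNOWLEDGE_BASE = {
    "false hierarchy": "Visually privileging one option over equally valid alternatives.",
    "confirmshaming": "Wording the decline option so that refusing feels shameful.",
}

def retrieve_definitions(candidates):
    """Retrieval step: pull taxonomy definitions for the candidate categories."""
    return {c: KNOWLEDGE_BASE[c] for c in candidates if c in KNOWLEDGE_BASE}

def build_prompt(ui_description, candidates):
    """Assemble a Chain-of-Thought prompt grounded in the retrieved definitions."""
    context = "\n".join(
        f"- {name}: {definition}"
        for name, definition in retrieve_definitions(candidates).items()
    )
    return (
        "You are auditing a user interface for deceptive design patterns.\n"
        f"Reference definitions:\n{context}\n"
        f"UI under review: {ui_description}\n"
        "Think step by step: (1) describe the relevant UI elements, "
        "(2) compare them against each definition, "
        "(3) state the matching category and justify the decision."
    )

prompt = build_prompt(
    "A consent banner with a large green 'Accept all' button "
    "and a small grey text link to refuse.",
    ["false hierarchy", "confirmshaming"],
)
print(prompt)
```

In an actual pipeline, the string returned by `build_prompt` would be sent to an MM-LLM together with the UI screenshot; the step-by-step instruction is what elicits the verifiable reasoning the abstract evaluates.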
Disciplines :
Computer science
Author, co-author :
KOCYIGIT, Emre ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
Rossi, Arianna ; Scuola Superiore sant'Anna, Pisa, Italy
SERGEEVA, Anastasia ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
Negri Ribalta, Claudia ; University of Luxembourg, Luxembourg, Luxembourg
FARJAMI, Ali ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PI VDT
LENZINI, Gabriele ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
External co-authors :
yes
Language :
English
Title :
DeceptiLens: An Approach supporting Transparency in Deceptive Pattern Detection based on a Multimodal Large Language Model
Publication date :
23 June 2025
Event name :
2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025)
Event place :
Athens, Greece
Event date :
23-06-2025 => 26-06-2025
Main work title :
ACM FAccT 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
Funders :
Fonds National de la Recherche Luxembourg; Fonds De La Recherche Scientifique - FNRS; Ministero dell'Università e della Ricerca
Funding text :
This work was funded as part of the Decepticon project (grant no. IS/14717072) supported by the Luxembourg National Research Fund (FNR). It was also funded by the REMEDIS project of the National Fund for Scientific Research (FNRS) and by the PNRR/Next Generation EU project "Biorobotics Research and Innovation Engineering Facilities" (IR0000036, CUP J13C22000400007). We thank the dark pattern experts who participated in the study, including Kerstin Bongard-Blanchy, Cristiana Teixeira Santos, Silvia De Conca, Estelle Harry, Gunes Acar, Marie Potel, Lorena Sanchez Chamorro, Thomas Mildner, Joanna Strycharz, and others who contributed their time and insights. A special thanks goes to Irina Carnat for our fruitful discussion on the application of the AI Act to this use case.
References :
ALLEA. 2023. The European Code of Conduct for Research Integrity - Revised Edition 2023. ALLEA - All European Academies, DE. https://doi.org/10.26356/ECoC
Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. AI magazine 35, 4 (2014), 105-120.
Pooria Babaei and Julita Vassileva. 2024. Drivers and persuasive strategies to influence user intention to learn about manipulative design. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT 24). Association for Computing Machinery, New York, NY, USA, 2421-2431. https://doi.org/10.1145/3630106.3659046
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI 21). Association for Computing Machinery, New York, NY, USA, Article 81, 16 pages. https://doi.org/10.1145/3411764.3445717
BEUC. 2022. "Dark patterns" and the EU consumer law acquis. https://www.beuc.eu/sites/default/files/publications/beuc-x-2022-013-dark-patters-paper.pdf Last accessed 9 January 2023.
Nataliia Bielova, Cristiana Santos, and Colin M. Gray. 2024. Two worlds apart! Closing the gap between regulating EU consent and user studies. Harvard Journal of Law & Technology 37 (2024), 1295-1333. https://jolt.law.harvard.edu/assets/articlePDFs/v37/Symposium-12-Bielova-Santos-Gray-Two-Worlds-Apart-Closing-The-Gap-Between-Regulating-EU-Consent-And-User-Studies.pdf
European Data Protection Board. 2023. Guidelines 03/2022 on deceptive design patterns in social media platform interfaces: how to recognise and avoid them. Version 2.0. European Data Protection Board, Brussels. https://edpb.europa.eu/our-work-Tools/our-documents/guidelines/guidelines-032022-deceptive-design-patterns-social-media-en
European Data Protection Board. 2023. Report of the work undertaken by the Cookie Banner Taskforce. European Data Protection Board, Brussels. https://edpb.europa.eu/system/files/2023-01/edpb-20230118-report-cookie-banner-Taskforce-en.pdf
Kerstin Bongard-Blanchy, Arianna Rossi, Salvador Rivas, Sophie Doublet, Vincent Koenig, and Gabriele Lenzini. 2021. 'I am Definitely Manipulated, Even When I am Aware of it. It's Ridiculous!' - Dark Patterns from the End-User Perspective. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (Virtual Event, USA) (DIS 21). Association for Computing Machinery, New York, NY, USA, 763-776. https://doi.org/10.1145/3461778.3462086
Christoph Bösch, Benjamin Erb, Frank Kargl, Henning Kopp, and Stefan Pfattheicher. 2016. Tales from the Dark Side: Privacy Dark Strategies and Privacy Dark Patterns. Proc. Priv. Enhancing Technol. 2016, 4 (2016), 237-254.
Ahmed Bouhoula, Karel Kubicek, Amit Zac, Carlos Cotrini, and David Basin. 2024. Automated Large-Scale Analysis of Cookie Notice Compliance. In Proceedings of the 33rd USENIX Security Symposium. USENIX Association, Philadelphia, US, 1723-1739. https://www.usenix.org/conference/usenixsecurity24/presentation/bouhoula
Harry Brignull. 2022. Deceptive patterns. https://www.deceptive.design Last accessed 30 October 2022.
Harry Brignull, Mark Leiser, Cristiana Santos, and Kosha Doshi. 2023. Deceptive Patterns Database of Legal cases. https://www.deceptive.design/cases
Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-making. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 1-21. https://doi.org/10.1145/3449287
Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. 2025. MLLM-As-A-Judge: Assessing multimodal LLM-As-A-Judge with vision-language benchmark. In Proceedings of the 41st International Conference on Machine Learning (Vienna, Austria) (ICML 24). JMLR.org, Article 254, 34 pages.
Jieshan Chen, Jiamou Sun, Sidong Feng, Zhenchang Xing, Qinghua Lu, Xiwei Xu, and Chunyang Chen. 2023. Unveiling the Tricks: Automated Detection of Dark Patterns in Mobile Applications. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (San Francisco, CA, USA) (UIST 23). Association for Computing Machinery, New York, NY, USA, Article 114, 20 pages. https://doi.org/10.1145/3586183.3606783
Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS17). Curran Associates Inc., Red Hook, NY, USA, 4302-4310.
Miruna-Adriana Clinciu, Arash Eshghi, and Helen Hastie. 2021. A Study of Automatic Metrics for the Evaluation of Natural Language Explanations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Paola Merlo, Jorg Tiedemann, and Reut Tsarfaty (Eds.). Association for Computational Linguistics, Online, 2376-2387. https://doi.org/10.18653/v1/2021.eacl-main.202
CPRA. 2020. The California Privacy Rights Act of 2020. https://vig.cdn.sos.ca.gov/2020/general/pdf/topl-prop24.pdf Last accessed 9 January 2023.
Mary L Cummings. 2017. Automation bias in intelligent time critical decision support systems. In Decision making in aviation. Routledge, Abingdon, Oxfordshire, UK, 289-294.
Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, and Pattie Maes. 2023. Don't Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI explanations. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI 23). Association for Computing Machinery, New York, NY, USA, Article 352, 13 pages. https://doi.org/10.1145/3544548.3580672
Verena Distler, Gabriele Lenzini, Carine Lallemand, and Vincent Koenig. 2020. The Framework of Security-Enhancing Friction: How UX Can Help Users Behave More Securely. In New Security Paradigms Workshop 2020. ACM, Online USA, 45-58. https://doi.org/10.1145/3442167.3442173
Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017).
European Parliament and Council of the European Union. 2022. REGULATION (EU) 2022/1925 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act). https://eur-lex.europa.eu/eli/reg/2022/1925/oj/eng Official Journal of the European Union, L 265/1, 12.10.2022.
European Parliament and Council of the European Union. 2022. REGULATION (EU) 2022/2065 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2065 Official Journal of the European Union, L 277, 27.10.2022, p. 1-102.
European Parliament and Council of the European Union. 2023. Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act). https://eur-lex.europa.eu/eli/reg/2023/2854/oj/eng Official Journal of the European Union, L 2023/2854, 22.12.2023.
Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement 33, 3 (1973), 613-619.
Raymond Fok and Daniel S. Weld. 2023. In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making. AI Mag. 45 (2023), 317-332. https://api.semanticscholar.org/CorpusID:258686588
Directorate-General for Justice, Consumers (European Commission), Francisco Lupianez-Villanueva, Alba Boluda, Francesco Bogliacino, Giovanni Liva, Lucie Lechardoy, and Teresa Rodriguez de las Heras Ballell. 2022. Behavioural study on unfair commercial practices in the digital environment: dark patterns and manipulative personalisation: final report. Publications Office of the European Union, LU. https://data.europa.eu/doi/10.2838/859030
European Commission Directorate General for Research and Innovation. 2024. Living guidelines on the responsible use of generative AI in research. European Commission, Brussels. https://research-And-innovation.ec.europa.eu/document/2b6cf7e5-36ac-41cb-Aab5-0d32050143dc-en
FTC. 2022. Bringing Dark Patterns to Light. https://www.ftc.gov/reports/bringing-dark-patterns-light
Sadaf Ghaffari and Nikhil Krishnaswamy. 2024. Exploring Failure Cases in Multimodal Reasoning About Physical Dynamics. Proceedings of the AAAI Symposium Series 3, 1 (May 2024), 105-114. https://doi.org/10.1609/aaaiss.v3i1.31189
Louie Giray. 2023. Prompt engineering with ChatGPT: A guide for academic writers. Annals of biomedical engineering 51, 12 (2023), 2629-2633.
Colin M. Gray, Cristiana Santos, and Nataliia Bielova. 2023. Towards a Preliminary Ontology of Dark Patterns Knowledge. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI EA 23). Association for Computing Machinery, New York, NY, USA, Article 286, 9 pages. https://doi.org/10.1145/3544549.3585676
Colin M. Gray, Cristiana Teixeira Santos, Nataliia Bielova, and Thomas Mildner. 2024. An Ontology of Dark Patterns Knowledge: Foundations, Definitions, and a Pathway for Shared Knowledge-Building. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI 24). Association for Computing Machinery, New York, NY, USA, Article 289, 22 pages. https://doi.org/10.1145/3613904.3642436
Paul Graßl, Hanna Schraffenberger, Frederik Zuiderveen Borgesius, and Moniek Buijzen. 2021. Dark and Bright Patterns in Cookie Consent Requests. Journal of Digital Social Research 3, 1 (2021), 1-38.
Johanna Gunawan, Cristiana Santos, and Irene Kamara. 2022. Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions. In Proceedings of the 2022 Symposium on Computer Science and Law. ACM, Washington DC USA, 181-194. https://doi.org/10.1145/3511265.3550448
Robert R. Hoffman, Gary Klein, and Shane T. Mueller. 2018. Explaining Explanation for 'Explainable AI'. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, 1 (2018), 197-201. https://doi.org/10.1177/1541931218621047
Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for Explainable AI: Challenges and Prospects. ArXiv abs/1812.04608 (2018). https://api.semanticscholar.org/CorpusID:54577009
Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276 (2024).
Gautier Izacard and Edouard Grave. 2021. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Paola Merlo, Jorg Tiedemann, and Reut Tsarfaty (Eds.). Association for Computational Linguistics, Online, 874-880.
Daniel Kahneman. 2011. Thinking, fast and slow. Macmillan.
Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT 24). Association for Computing Machinery, New York, NY, USA, 822-835. https://doi.org/10.1145/3630106.3658941
Daniel Kirkman, Kami Vaniea, and Daniel W. Woods. 2023. DarkDialogs: Automated detection of 10 dark patterns on cookie dialogs. In 2023 IEEE 8th European Symposium on Security and Privacy. IEEE, Delft, NL, 847-867. https://doi.org/10.1109/EuroSP57164.2023.00055
Agnieszka Kitkowska. 2023. The Hows and Whys of Dark Patterns: Categorizations and Privacy. Springer International Publishing, Cham, 173-198. https://doi.org/10.1007/978-3-031-28643-8_9
Emre Kocyigit, Arianna Rossi, and Gabriele Lenzini. 2023. Towards Assessing Features of Dark Patterns in Cookie Consent Processes. In Privacy and Identity Management (IFIP Advances in Information and Communication Technology), Felix Bieker, Joachim Meyer, Sebastian Pape, Ina Schiering, and Andreas Weich (Eds.). Springer Nature Switzerland, Cham, 165-183. https://doi.org/10.1007/978-3-031-31971-6_13
Emre Kocyigit, Arianna Rossi, and Gabriele Lenzini. 2024. A Systematic Approach for A Reliable Detection of Deceptive Design Patterns Through Measurable HCI Features. In Proceedings of the 2024 European Symposium on Usable Security (EuroUSEC 24). Association for Computing Machinery, New York, NY, USA, 290-308. https://doi.org/10.1145/3688459.3688475
Udo Kuckartz and Stefan Rädiker. 2019. Analyzing qualitative data with MAXQDA. Springer.
Jinqi Lai, Wensheng Gan, Jiayang Wu, Zhenlian Qi, and Philip S. Yu. 2024. Large language models in law: A survey. AI Open 5 (2024), 181-196. https://doi.org/10.1016/j.aiopen.2024.09.002
Kathryn Ann Lambe, Gary O'Reilly, Brendan D. Kelly, and Sarah Curristan. 2016. Dual-process cognitive interventions to enhance diagnostic reasoning: A systematic review. BMJ quality & safety 25, 10 (2016), 808-820. https://qualitysafety.bmj.com/content/25/10/808.short
M. R. Leiser and Cristiana Santos. 2024. Dark Patterns, Enforcement, and the emerging Digital Design Acquis: Manipulation beneath the Interface. European Journal of Law and Technology 15, 1 (2024). https://ejlt.org/index.php/ejlt/article/view/990
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-Tau Yih, Tim Rocktaschel, et al. 2020. Retrieval-Augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems 33 (2020), 9459-9474.
Xinyi Li, Sai Wang, Siqi Zeng, Yu Wu, and Yi Yang. 2024. A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth 1, 1 (2024), 9.
Zijing Liang, Yanjie Xu, Yifan Hong, Penghui Shang, Qi Wang, Qiang Fu, and Ke Liu. 2024. A Survey of Multimodal Large Language Models. In Proceedings of the 3rd International Conference on Computer, Artificial Intelligence and Control Engineering (Xi'an, China) (CAICE 24). Association for Computing Machinery, New York, NY, USA, 405-409. https://doi.org/10.1145/3672758.3672824
Q. Vera Liao and Jennifer Wortman Vaughan. 2024. AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap. Harvard Data Science Review Special Issue 5, Special Issue 5 (may 31 2024). https://hdsr.mitpress.mit.edu/pub/aelql9qy.
Xialing Lin, Patric R Spence, and Kenneth A Lachlan. 2016. Social media and credibility indicators: The effect of influence cues. Computers in human behavior 63 (2016), 264-271.
Yuwen Lu, Chao Zhang, Yuewen Yang, Yaxing Yao, and Toby Jia-Jun Li. 2024. From awareness to action: Exploring end-user empowerment interventions for dark patterns in ux. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (2024), 1-41.
S M Hasan Mansur, Sabiha Salma, Damilola Awofisayo, and Kevin Moran. 2023. AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces. In Proceedings of the 45th International Conference on Software Engineering (Melbourne, Victoria, Australia) (ICSE 23). IEEE Press, 1958-1970. https://doi.org/10.1109/ICSE48619.2023.00166
Arunesh Mathur, Gunes Acar, Michael J Friedman, Eli Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. 2019. Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1-32.
Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI 21). Association for Computing Machinery, New York, NY, USA, Article 360, 18 pages. https://doi.org/10.1145/3411764.3445610
Célestin Matte, Nataliia Bielova, and Cristiana Santos. 2020. Do Cookie Banners Respect my Choice?: Measuring Legal Compliance of Banners from IAB Europe's Transparency and Consent Framework. In 2020 IEEE Symposium on Security and Privacy (SP). IEEE, 791-809.
Thomas Mildner, Gian-Luca Savino, Philip R. Doyle, Benjamin R. Cowan, and Rainer Malaka. 2023. About Engaging and Governing Strategies: A Thematic Analysis of Dark Patterns in Social Networking Services. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI 23). Association for Computing Machinery, New York, NY, USA, Article 192, 15 pages. https://doi.org/10.1145/3544548.3580695
Siddharth Nandagopal. 2025. Securing Retrieval-Augmented Generation Pipelines: A Comprehensive Framework. Journal of Computer Science and Technology Studies 7, 1 (2025), 17-29.
Dmitry Nazarov and Yerkebulan Baimukhambetov. 2022. Clustering of Dark Patterns in the User Interfaces ofWebsites and Online Trading Portals (E-Commerce). Mathematics 10, 18 (2022). https://doi.org/10.3390/math10183219
NCC. 2018. Deceived by design, how tech companies use dark patterns to discourage us from exercising our rights to privacy. Norwegian Consumer Council Report (2018).
Liming Nie, Yangyang Zhao, Chenglin Li, Xuqiong Luo, and Yang Liu. 2024. Shadows in the Interface: A Comprehensive Study on Dark Patterns. Proc. ACM Softw. Eng. 1, FSE, Article 10 (jul 2024), 22 pages. https://doi.org/10.1145/3643736
High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy AI. European Commission, Brussels. https://ec.europa.eu/newsroom/dae/document.cfm?doc-id=60419
Rajvardhan Patil and Venkat Gudivada. 2024. A Review of Current Trends, Techniques, and Challenges in Large Language Models (LLMs). Applied Sciences 14, 5 (2024). https://doi.org/10.3390/app14052074
Marie Potel-Saville and Mathilde Da Rocha. 2023. From Dark Patterns to Fair Patterns? Usable Taxonomy to Contribute Solving the Issue with Countermeasures. In Privacy Technologies and Policy: 11th Annual Privacy Forum, APF 2023, Lyon, France, June 1-2, 2023, Proceedings (Lyon, France). Springer-Verlag, Berlin, Heidelberg, 145-165. https://doi.org/10.1007/978-3-031-61089-9_7
Snehal Prabhudesai, Leyao Yang, Sumit Asthana, Xun Huan, Q. Vera Liao, and Nikola Banovic. 2023. Understanding Uncertainty: How Lay Decision-makers Perceive and Interpret Uncertainty in Human-AI Decision Making. In Proceedings of the 28th International Conference on Intelligent User Interfaces (Sydney, NSW, Australia) (IUI 23). Association for Computing Machinery, New York, NY, USA, 379-396. https://doi.org/10.1145/3581641.3584033
Shuhan Qi, Zhengying Cao, Jun Rao, LeiWang, Jing Xiao, and XuanWang. 2023. What is the limitation of multimodal LLMs? A deeper look into multimodal LLMs through prompt probing. Information Processing & Management 60, 6 (2023), 103510. https://doi.org/10.1016/j.ipm.2023.103510
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning. PMLR, 8748-8763.
Christine Riefa and Liz Coll. 2024. The transformative potential of Enforcement Technology (EnfTech) in Consumer Law.
Hauke Sandhaus. 2023. Promoting Bright Patterns. In CHI 23 Workshop: Designing Technology and Policy Simultaneously. arXiv, Hamburg, DE. https://doi.org/10.48550/arXiv.2304.01157 arXiv:2304.01157 [cs].
Cristiana Santos and Arianna Rossi. 2023. The emergence of dark patterns as a legal concept in case law. Internet Policy Review (July 2023). https://policyreview.info/articles/news/emergence-of-dark-patterns-As-A-legal-concept
Yasin Sazid, Mridha Md. Nafis Fuad, and Kazi Sakib. 2023. Automated Detection of Dark Patterns Using In-Context Learning Capabilities of GPT-3. In 2023 30th Asia-Pacific Software Engineering Conference (APSEC). 569-573. https://doi.org/10.1109/APSEC60848.2023.00072
René Schäfer, Paul Miles Preuschoff, René Röpke, Sarah Sahabi, and Jan Borchers. 2024. Fighting Malicious Designs: Towards Visual Countermeasures Against Dark Patterns. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI 24). Association for Computing Machinery, New York, NY, USA, Article 296, 13 pages. https://doi.org/10.1145/3613904.3642661
Ethics Committee of the British Psychological Society. 2018. Code of ethics and conduct. The British Psychological Society, Leicester.
European Data Protection Supervisor. 2023. TechDispatch #2/2023: Explainable Artificial Intelligence. https://www.edps.europa.eu/data-protection/ourwork/publications/techdispatch/2023-11-16-Techdispatch-22023-explainableartificial-intelligence-en Accessed: 2024-12-23.
Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel ShuWei Ting. 2023. Large language models in medicine. Nature medicine 29, 8 (2023), 1930-1940.
Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. 2024. Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 9568-9578.
Matthijs J Warrens. 2015. Five ways to look at Cohen's kappa. Journal of Psychology & Psychotherapy 5 (2015).
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 24824-24837. https://proceedings.neurips.cc/paper-files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf
Jiayang Wu, Wensheng Gan, Zefeng Chen, Shicheng Wan, and Philip S. Yu. 2023. Multimodal Large Language Models: A Survey. In 2023 IEEE International Conference on Big Data (BigData). 2247-2256. https://doi.org/10.1109/BigData59044.2023.10386743
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Shaochen Zhong, Bing Yin, and Xia Hu. 2024. Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond. ACM Trans. Knowl. Discov. Data 18, 6, Article 160 (April 2024), 32 pages. https://doi.org/10.1145/3649506
Jose P. Zagal, Staffan Bjork, and Chris Lewis. 2013. Dark Patterns in the Design of Games. In Foundations of Digital Games 2013. 7 pages. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-24252
Yunfeng Zhang, Q. Vera Liao, and Rachel K. E. Bellamy. 2020. Effect of confidence and explanation on accuracy and trust calibration in AI-Assisted decision making. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT 20). Association for Computing Machinery, New York, NY, USA, 295-305. https://doi.org/10.1145/3351095.3372852
Zhuosheng Zhang, Aston Zhang, Mu Li, hai zhao, George Karypis, and Alex Smola. 2024. Multimodal Chain-of-Thought Reasoning in Language Models. Transactions on Machine Learning Research (2024). https://openreview.net/forum?id=y1pPWFVfvR
Jianlong Zhou, Amir H. Gandomi, Fang Chen, and Andreas Holzinger. 2021. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics 10, 5 (2021). https://doi.org/10.3390/electronics10050593