Doctoral thesis (Dissertations and theses)
Algorithmic Enforcement and Social Media Platforms: Artificial Intelligence and Online Platforms in EU Law
FERNANDEZ, Angelica
2024
 

Full Text :
No document available.

Keywords :
AI, Artificial Intelligence, Content Moderation, Digital Services Act, Artificial Intelligence Act, Content Recognition Technologies, 2019 EU Copyright Directive, Algorithmic Enforcement, Online Enforcement, Platform governance, Platform regulation, Social Media Platforms, Third-party online illegal content, harmful online content, protection of online fundamental rights
Abstract :
[en] Social media platforms are an inescapable feature of the 21st century. They serve as primary channels for individuals to receive and impart information. Their role in mediating expression has raised legal questions, particularly regarding platform responsibilities and liabilities for third-party illegal content. The algorithmic tools social media platforms employ can significantly impact individuals' rights and freedoms, making their governance a critical concern for modern digital regulation. This PhD dissertation examines algorithmic enforcement in online platforms, defined as online platforms' use of algorithmic tools to comply with the law. These tools have structured a unique online regulatory environment that, while appearing to match offline rules, often diverges from them. The emergence of sophisticated content recognition technologies, many of them based on artificial intelligence, has fundamentally transformed how platforms moderate content and enforce legal obligations. This research focuses on artificial intelligence (AI) elements in social media content moderation, specifically examining content recognition technologies such as fingerprinting, hashing, and predictive algorithms, including large language models (LLMs). The term "algorithmic tools" is used deliberately to encompass the broad spectrum of content moderation technologies at platforms' disposal, reflecting the complex technological landscape they navigate.

The research's starting point is an analysis of the online enforcement of copyright law and the case law of the Court of Justice of the European Union. Article 17(4) of Directive 2019/790 on copyright and related rights in the Digital Single Market sparked crucial discussions on algorithmic enforcement safeguards before the debate expanded to other content areas. This evolution demonstrates the growing complexity of content moderation requirements and their technical implementation. The thesis analyzes algorithmic enforcement within EU law through several key instruments: Directive 2019/790 on copyright and related rights in the Digital Single Market; the Code of Conduct on countering illegal hate speech online (2016); the Audiovisual Media Services Directive (2018); Regulation 2021/784 on addressing the dissemination of terrorist content online; the Proposal to prevent and combat child sexual abuse online (2022); the Digital Services Act (DSA) (2022); and the Artificial Intelligence Act (AI Act) (2023). The DSA and the AI Act are the primary horizontal frameworks examined, linking platform regulation and algorithmic governance.

The research reveals consistent language patterns across the legal provisions of the instruments studied that concern algorithmic tools for countering illegal content online. These patterns are also visible in the voluntary commitments of the Codes of Practice on Disinformation (2018, 2022) with regard to harmful content. These patterns, which can be characterized as normative trends, incentivize platforms of all sizes to adopt algorithmic tools across different content areas, effectively making their use mandatory. The research identifies a problematic "one-size-fits-all" approach to content moderation, in which identical tools are applied to various content types, disregarding their unique characteristics. This standardization across the legal landscape creates regulatory challenges, exemplified by platforms using algorithmic detection of "toxic speech" (a non-legal category) to identify hate speech.
A key challenge identified in this research is determining the granularity of information needed for effective regulatory oversight, particularly regarding decision explainability and algorithmic accountability. The research also highlights, as requiring scholarly attention, the emergence of new actors for whom content moderation is an ancillary activity and the rapid adoption of LLM-based tools. The thesis explores the interplay between intermediary liability under the DSA and the E-Commerce Directive, on the one hand, and product liability frameworks, including the revised Product Liability Directive (2023) and the AI Liability Directive Proposal (2022), on the other. A significant contribution of this research is its advocacy for complementarity between the DSA and AI Act frameworks in order to establish comprehensive data-sharing mechanisms for trustworthy algorithmic tools. Furthermore, the thesis highlights the emergence of a new content moderation landscape catalyzed by AI-based technologies. New content moderation actors (AI developers, model marketplaces, AI intermediaries) operate under different rules and incentives than traditional social media platforms, necessitating a broader discourse on liability frameworks. This research takes initial steps toward addressing this evolving landscape in platform regulation scholarship, focusing mainly on the challenges posed by emerging technologies and diverse regulatory approaches. The findings suggest the need for more nuanced regulatory frameworks that accommodate both traditional content challenges and emerging technological capabilities while ensuring adequate protection of fundamental rights.
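For readers unfamiliar with the content recognition techniques named in the abstract, the minimal Python sketch below (an illustration added to this record, not material from the thesis) shows the basic logic behind hash-based matching: an upload is flagged when its digest appears in a database of previously identified illegal items. The blocklist, function names, and sample data are hypothetical; production systems typically rely on perceptual hashing or fingerprinting that tolerates re-encoding and edits rather than exact digests.

import hashlib

# Hypothetical blocklist of hashes of content already identified as illegal.
# The single entry is the SHA-256 of the empty byte string, used purely as demo data.
KNOWN_ILLEGAL_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of uploaded bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def is_known_illegal(upload: bytes) -> bool:
    """Flag an upload whose digest matches a previously identified item."""
    return sha256_hex(upload) in KNOWN_ILLEGAL_HASHES

if __name__ == "__main__":
    print(is_known_illegal(b""))       # True: the demo hash matches
    print(is_known_illegal(b"hello"))  # False: unknown content is not flagged

Exact matching of this kind catches only byte-identical copies; the fingerprinting and predictive tools discussed in the thesis are precisely the more flexible, and more error-prone, techniques that go beyond it.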
Disciplines :
European & international law
Author, co-author :
FERNANDEZ, Angelica ;  University of Luxembourg > Faculty of Law, Economics and Finance > Department of Law > Team Mark David COLE
Language :
English
Title :
Algorithmic Enforcement and Social Media Platforms: Artificial Intelligence and Online Platforms in EU Law
Defense date :
14 May 2024
Institution :
Unilu - University of Luxembourg [Faculty of Law, Economics and Finance (FDEF)], Luxembourg, Luxembourg
Degree :
Docteur en Droit (DIP_DOC_0007_B)
Promotor :
COLE, Mark D.  ;  University of Luxembourg > Faculty of Law, Economics and Finance (FDEF) > Department of Law (DL)
President :
RATTI, Luca  ;  University of Luxembourg > Faculty of Law, Economics and Finance (FDEF) > Department of Law (DL)
Secretary :
Jutte, Justin
Jury member :
Quintais, Joao
Kuczerawy, Aleksandra
Focus Area :
Law / European Law
FnR Project :
FNR12251371 - Enforcement In Multi-level Regulatory Systems, 2017 (01/01/2019-30/06/2025) - Joana Mendes
Funding text :
Luxembourg National Research Fund (FNR) (PRIDE17/12251371)
Available on ORBilu :
since 20 November 2024
