Document type: Report (Other)
Discipline: Law, criminology & political science : European & international law
Making Audits Meaningful: Overseeing the Use of AI in Content Moderation
Bloch-Wehba, Hannah [Texas A&M University > School of Law]
Fernandez, Angelica [University of Luxembourg > Faculty of Law, Economics and Finance (FDEF) > Department of Law (DL)]
Morar, David [George Washington University > Elliott School of International Affairs]
Research Sprint on AI and Content Moderation
Keywords [en]: Audits ; AI ; Content Moderation
Abstract [en]: While platforms use increasingly sophisticated technology to make content-related decisions that affect public discourse, firms remain tight-lipped about exactly how their content moderation technologies function. Such laconic industry disclosure about algorithmic content moderation is unacceptable: regulators need to understand the platform ecosystem in order to design evidence-based regulation and to monitor the risks associated with the use of AI in content moderation. This white paper explains how and why audits, a specific type of transparency measure, should be mandated by law and grounded in four clear principles: independence, access, publicity, and resources. We first unpack the types of transparency, then situate audits within this framework while describing their risks and benefits. The white paper concludes by explaining the four principles as they derive from the preceding sections.
Target audience: Researchers ; Professionals ; Students ; General public

File(s) associated to this reference

Fulltext file(s):

Open access
[EoD] Policy Paper Auditing AI (1).pdf (Publisher postprint, 248.77 kB)


All documents in ORBilu are protected by a user license.