Results 1-4 of 4.

Full Text
Peer Reviewed
European Union ∙ The Data Act: The Next Step in Moving Forward to a European Data Space
Fernandez, Angelica UL

in European Data Protection Law Review (2022), (1), 108-114

Full Text
Peer Reviewed
“Deep fakes”: disentangling terms in the proposed EU Artificial Intelligence Act
Fernandez, Angelica UL

in UFITA - Archiv für Medienrecht und Medienwissenschaft (2021), (2), 392-433

Since 2018, deep fake technology has been one of the areas in which artificial intelligence has evolved dramatically, and deep fakes are thus primarily seen by governments as an emerging threat. In particular, regulators are increasingly concerned by the development and application of this technology in two main areas: image-based sexual abuse and disinformation. Despite its increasing popularity, there are challenges in defining what deep fakes are and what ought to be regulated when it comes to deep fake phenomena. This article analyzes the EU-level regulatory approach to deep fakes in relation to AI regulation. This choice is motivated by the inclusion of deep fakes in the proposed EU Artificial Intelligence Act and by the nature of the provisions that apply to deep fake technology within the Act. The first part analyzes the issues and challenges of adopting a legal definition for deep fakes, highlighting consensus and differences among scholars and industry players. Getting the scope of the definition right is essential to appropriately address the distinct harm profile stemming from deep fake technology, specifically in relation to image-based sexual abuse and disinformation. A survey of different views shows consensus over two elements that define deep fakes: the use of AI-based technology and the intent of the creator. However, this seemingly consensual definition faces practical challenges, particularly in drawing boundaries between deep fakes and lower-grade audiovisual manipulation (i.e. "cheap fakes"), and in the co-opting of the term to discredit audiovisual content and cast doubt on the veracity of audiovisual material presented as evidence. The second part focuses on the transparency requirements for deep fakes under the proposed EU Artificial Intelligence Act.
This obligation is examined in light of disclosure and labelling obligations already tested in disinformation strategies, particularly in the implementation of the 2018 EU Code of Practice on Disinformation, which will likely include deep fakes in its new iteration to be published in spring 2022. Among the main lessons from the application of the Code of Practice on Disinformation is that labels alone are not an effective measure to counter disinformation or to deter its creation and dissemination. Moreover, if users are to rely on labels to weigh whether they are interacting with manipulated media, more research is needed into effective design, since newer forms of enhancing transparency are available but not necessarily implemented by companies. This is particularly relevant in the context of the proposed Artificial Intelligence Act, since scholars have serious concerns about its enforcement architecture. Finally, the third part of this article offers a brief comparison of the United States' and the United Kingdom's regulatory responses to deep fakes in order to further assess the current EU response to this phenomenon. In contrast to the EU response, which so far rests on minimal transparency requirements, the trend in other jurisdictions has been primarily to criminalize the malicious use of deep fakes, which is often assimilated to revenge pornography even though these are two different phenomena. For all of these reasons, deep fakes sit at the intersection of different possible regulatory frameworks, providing an interesting case for exploring the regulatory challenges of AI in the context of the European Union.

Full Text
Making Audits Meaningful: Overseeing the Use of AI in Content Moderation
Bloch-Wehba, Hannah; Fernandez, Angelica UL; Morar, David

Report (2020)

While platforms use increasingly sophisticated technology to make content-related decisions that affect public discourse, firms are tight-lipped about exactly how the technologies of content moderation function. The laconic nature of industry disclosure about the use of algorithmic content moderation is thoroughly unacceptable, considering that regulators need to understand the platform ecosystem in order to design evidence-based regulations and monitor risks associated with the use of AI in content moderation. This white paper sets out to explain how and why audits, a specific type of transparency measure, should be mandated by law, guided by four clear principles: independence, access, publicity, and resources. We go on to unpack the types of transparency, and then contextualize audits in this framework while also describing risks and benefits. The white paper concludes with an explanation of the four principles as they are derived from the previous sections.

Full Text
Peer Reviewed
EDPB Opinion 14/2019 on Standard Contractual Clauses for Processors under Article 28(8) GDPR
Fernandez, Angelica UL

in European Data Protection Law Review (2019), 5(4), 523-527
