Reference: “Deep fakes”: disentangling terms in the proposed EU Artificial Intelligence Act
Scientific journals: Article
Law, criminology & political science: European & international law
Law / European Law
http://hdl.handle.net/10993/52496
“Deep fakes”: disentangling terms in the proposed EU Artificial Intelligence Act
Language: English
Author: Fernandez, Angelica [University of Luxembourg > Faculty of Law, Economics and Finance (FDEF) > Department of Law (DL)]
Publication year: 2021
Journal: UFITA - Archiv für Medienrecht und Medienwissenschaft
Publisher: Nomos, Baden-Baden, Germany
Issue: 2
Pages: 392-433
Peer reviewed: Yes
ISSN: 2568-9185; 0976-5697
Keywords: [en] deepfakes; AI
Abstract: [en] Since 2018, deep fake technology has been one of the areas in which artificial intelligence has evolved most dramatically, and governments primarily regard deep fakes as an emerging threat. Regulators are increasingly concerned by the development and application of this technology in two main areas: image-based sexual abuse and disinformation. Despite the technology's increasing popularity, there are challenges in defining what deep fakes are and what ought to be regulated when it comes to the deep fake phenomenon.

This article analyzes the EU-level regulatory approach to deep fakes in relation to AI regulation. This choice is motivated by the inclusion of deep fakes in the proposed EU Artificial Intelligence Act and by the nature of the provisions that apply to deep fake technology within the Act. The first part analyzes the issues and challenges of adopting a legal definition of deep fakes, highlighting points of consensus and divergence among scholars and industry players. Getting the scope of the definition right is essential to appropriately address the distinct harm profile stemming from deep fake technology, specifically in relation to image-based sexual abuse and disinformation. A survey of different views shows consensus on two elements that define deep fakes: the use of AI-based technology and the intent of the creator. However, this seemingly consensual definition faces practical challenges, particularly in drawing the boundary between deep fakes and less sophisticated audiovisual (AV) manipulation (i.e. "cheap fakes"), and in the co-opting of the term to discredit audiovisual content and to cast doubt on the veracity of AV content presented as evidence.
The second part focuses on the transparency requirements for deep fakes under the proposed EU Artificial Intelligence Act. This obligation is examined in light of the disclosure and labelling obligations already tested in disinformation strategies, particularly in the implementation of the 2018 EU Code of Practice on Disinformation, which will likely include deep fakes in its new iteration to be published in spring 2022. One of the main lessons from the application of the Code of Practice on Disinformation is that labels alone are not an effective measure to counter disinformation or to deter its creation and dissemination. Moreover, if users are to rely on labels to assess whether they are interacting with manipulated media, more research into effective design is needed, since newer forms of enhancing transparency are available but not necessarily implemented by companies. This is particularly relevant in the context of the proposed Artificial Intelligence Act, since scholars have raised serious concerns about its enforcement architecture.
Finally, the third part of the article offers a brief comparison of the regulatory responses to deep fakes in the United States and the United Kingdom in order to further assess the current EU response to this phenomenon. In contrast to the EU response, which so far rests on minimal transparency requirements, the trend in other jurisdictions has primarily been to criminalize the malicious use of deep fakes, which is often assimilated to revenge pornography even though these are two distinct phenomena. For all of these reasons, deep fakes sit at the intersection of several possible regulatory frameworks, providing an interesting case for exploring the regulatory challenges of AI in the context of the European Union.
Funder: Fonds National de la Recherche - FnR
Audience: Researchers; Professionals; Students; General public
Permalink: http://hdl.handle.net/10993/52496
DOI: 10.5771/2568-9185-2021-2-392
Publisher version: https://www.nomos-elibrary.de/10.5771/2568-9185-2021-2-392/deep-fakes-disentangling-terms-in-the-proposed-eu-artificial-intelligence-act-jahrgang-85-2021-heft-2?page=1
FnR ; FNR12251371 > Joana Mendes > DTU-REMS-II > Enforcement In Multi-level Regulatory Systems > 01/01/2019 > 30/06/2025 > 2017

File(s) associated with this reference

Fulltext file(s):

File: UFITA_article_AFernandez.pdf (Author postprint, 360.44 kB). Access: Limited access (request a copy)


All documents in ORBilu are protected by a user license.