European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’, Brussels, available at < https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en#related-links >, p. 2: ‘AI is a collection of technologies that combine data, algorithms and computing power’.
European Parliament, ‘State of the art and future of artificial intelligence’, February 2019, available at < https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/631051/IPOL_BRI(2019)631051_EN.pdf >: ‘Artificial intelligence is a branch of science and as such it can be defined as a set of computational technologies that are inspired by the ways people use their nervous systems and bodies to sense, learn, reason, and take action’; CEPEJ report, ‘European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment’, December 2018, available at < https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c >, gives a less technical and more functional definition: ‘A set of scientific methods, theories and techniques whose aim is to reproduce, by a machine, the cognitive abilities of human beings. Current developments seek to have machines perform complex tasks previously carried out by humans.’ High-Level Expert Group, ‘Ethics guidelines for trustworthy AI’, 8 April 2019, available at < https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai >: ‘Software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.
(…) As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)’.
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206 final, Art. 3(1). Annex I lists three categories: ‘machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; statistical approaches, Bayesian estimation, search and optimization methods.’
For the definitions of ‘highly and fully automated driving functions’ (in the German legal system) and ‘vehicle with driver delegation’ (in the French legal system), see Susanne Beck, Simon Gerndt, ‘German Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 183; Marion Lacaze, Julien Walther, ‘French Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 142.
Alice Giannini, ‘US Report on Traditional Criminal Law Categories and AI’.
H. R. 6580.
John S. McCain National Defense Authorization Act for Fiscal Year 2019, sec. 238(g): ‘the term "artificial intelligence" includes the following: (1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets. (2) An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. (3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks. (4) A set of techniques, including machine learning, that is designed to approximate a cognitive task. (5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.’
National Artificial Intelligence Initiative Act of 2021, art. 3(3): ‘Artificial Intelligence. The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.’
S. 1260, 117th Cong. (2021), Sec. 4203 (4): ‘The term artificial intelligence system- (A) means any data system, software, application, tool, or utility that operates in whole or in part using dynamic or static machine learning algorithms or other forms of artificial intelligence, whether- (i) the data system, software, application, tool, or utility is established primarily for the purpose of researching, developing, or implementing artificial intelligence technology; or (ii) artificial intelligence capability is integrated into another system or agency business process, operational activity, or technology system; and (B) does not include any common commercial product within which artificial intelligence is embedded, such as a word processor or map navigation system.’
State of Nevada, see NEV. REV. STAT. 482A.020 (repealed 2013). State of Louisiana, see LA. STAT. ANN. § 9:2602 (2018).
Takeyoshi Imai, ‘Japanese Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 289.
Mykola Karchevskyi, Oleksandr Radutniy, ‘Ukrainian Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 347.
See paragraph 3.
Evert F. Stamhuis, ‘Dutch Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 123.
Maria Kaiafa Gbandi, Athina Sachoulidou, Dafni Lima, ‘Greek Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 211.
Beck, Gerndt, (n 5).
Definition by art. 11 of the Interpretation on the Application of Law in the Handling of Criminal Cases about Endangering the Security of Computer Information Systems issued by the Supreme People’s Court and the Supreme People’s Procuratorate of China in 2011. See Xiumei Wang, Xue Zhang, ‘Chinese Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 101.
Definition by art. 2 of the Regulations on Protecting the Safety of Computer Information Systems issued by the State Council of China in 1994 and revised in 2011. See Wang, Zhang (n 18).
Art. 10 of the Interpretation on Several Issues concerning the Specific Application of Law in the Handling of Defamation through Information Networks and Other Criminal Cases issued by the Supreme People’s Court and the Supreme People’s Procuratorate of China in 2013.
See paragraph 3.
Lacaze, Walther, (n 5).
Kaiafa, Sachoulidou, Lima, (n 16).
Stamhuis, (n 15).
Isabelle Dianne Gibson Pereira, Tatiana Lourenço Emmerich de Souza, ‘Brazilian Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 84.
Krisztina Karsai, Barna Miskolczi, Mónika Nogel, ‘Hungarian Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 240.
Karchevskyi, Radutniy, (n 13).
Imai, (n 12).
Art. 215(1) Croatian CC: ‘Whoever, by any generally dangerous act or generally dangerous means, endangers the life or body of people or property of a greater extent’; para 2 additionally incriminates conduct that does not comply with the regulations or technical rules on protective measures and thus causes danger. See Barbara Herceg Pakšić, ‘Croatian Report on Traditional Criminal Law Categories and AI’.
Karsai, Miskolczi, Nogel, (n 26).
Beck, Gerndt, (n 5).
Gibson Pereira, Lourenço Emmerich de Souza, (n 25).
See Olimpia Barresi, ‘Italian Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 255, which highlights the potential for high-frequency trading (HFT) to have a direct impact on securities prices by realising the conduct incriminated by the law, namely: (i) dissemination of false information; (ii) simulated transactions; (iii) any other act likely to cause a significant alteration in the price of financial instruments.
Beck, Gerndt, (n 5).
Maria Filatova, ‘Russian Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 315.
Karchevskyi, Radutniy, (n 13).
Tomáš Gřivna, ‘Czech Republic Report on Traditional Criminal Law Categories and AI’.
Vanheule, Verbruggen, ‘Belgian report’.
Stamhuis, (n 15).
Kaiafa, Sachoulidou, Lima, (n 16).
Gřivna, ‘Czech Republic report’.
Karsai, Miskolczi, Nogel, (n 26).
Cherñavsky, Riquert, ‘Argentinian report’.
Beck, Gerndt, (n 5).
Soledad Krause, Jean Pierre Matus, Gonzalo Rodríguez, Juan Carlos Manríquez, ‘Chilean Report on Traditional Criminal Law Categories and AI’.
Wang, Zhang, (n 18).
Herceg Pakšić, ‘Croatian report’.
Beck, Gerndt, (n 5).
See paragraph 4.
Report elaborated by the China Academy of Information and Communication Technology (CAICT), in conjunction with JD Explore Academy.
Wang, Zhang, (n 18).
Filatova, (n 36).
Gürkan Özocak, Sinan Altunc, Baran Kızılırmak, A. Kemal Kumkumoğlu, ‘Turkish Report on Traditional Criminal Law Categories and AI’.
Lacaze, Walther, (n 5).
It states: ‘Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.’
Gřivna, ‘Czech Republic report’.
Patrick Günsberg, Hanne Hirvonen, ‘Finnish Report on Traditional Criminal Law Categories and AI’.
Giannini, ‘US report’.
Nora Alicia Cherñavsky, Marcelo Alfredo Riquert, ‘Argentinian Report on Traditional Criminal Law Categories and AI’.
Beck, Gerndt, (n 5).
Lacaze, Walther, (n 5).
Vanheule, Verbruggen, ‘Belgian report’.
Giannini, ‘US report’.
Susana Aires de Sousa, ‘Portuguese Report on Traditional Criminal Law Categories and AI’ [2023] 1 RIDP 304.
A Royal Decree of 18 March 2018 on tests with automated vehicles (Belgian Official Journal, 19 April 2018) inserted Article 59/1 into the Royal Decree providing a General Regulation on the Police of Road Traffic and the Use of Public Roads. Vanheule, Verbruggen, ‘Belgian report’.
Beck, Gerndt, (n 5).
Wang, Zhang, (n 18).
Vanheule, Verbruggen, ‘Belgian report’.
Stockholms universitet, ‘Remiss: Vägen till självkörande fordon – introduktion’ (Consultation: The Road to Self-Driving Vehicles – Introduction), 24 August 2018; see Chapter 3, Sections 7 and 8 of the Swedish Penal Code. Some Swedish cases are instructive on criminal responsibility and driverless cars. In one case, a military commander had ordered that the driving lights of three military vehicles advancing in a line be turned off. The commander was on board the first vehicle as a passenger. As the three vehicles came to a halt at a railway crossing, the driver of another vehicle failed to notice the stationary vehicles and fatally collided with the last one. The commander was prosecuted in his capacity as a road user, and the question whether he was guilty of negligence in traffic was answered in the affirmative by the Supreme Court. Thus, in the capacity of a road user, a person may be criminally responsible for several vehicles simultaneously without performing any dynamic driving task, SOU 2018:16, pp. 568-569. See Günsberg, Hirvonen, ‘Finnish report’.
See paragraph 4.2.
Beck, Gerndt, (n 5).
Filatova, (n 36).
See Wang, Zhang, (n 18).
Herceg Pakšić, ‘Croatian report’.
Günsberg, Hirvonen, ‘Finnish report’.
South Savo district court, tuomiolauselma 21/115548, asianumero R20/968, 13.04.2021. See Günsberg, Hirvonen, ‘Finnish report’.
The district court ruled that it was not shown beyond reasonable doubt that the defendant had completely failed to observe the traffic and the way his vehicle was proceeding in it, which would have justified ruling that the conduct was either intentional or grossly negligent; see South Savo district court, tuomiolauselma 21/115548, asianumero R20/968, 13.04.2021, p. 10.
Case No. (2018) Zhejiang 0602 Criminal First Instance No. 101. See Wang, Zhang, (n 19).
Case No. (2019) Jilin 04 Criminal Final Instance No. 50.
Case No. 1-112/2020 (№ 1-440/2019), The Leninsky District Court of Smolensk, 08.06.2020; Case № 1-443/2020, The Petrozavodsk City Court of the Republic of Karelia, 03.09.2020. See Filatova, (n 38).
Case No. 1-86/2021, The Balakhtinsky District Court of the Krasnoyarsk Region, 24.05.2021. See Filatova, (n 36).
Ciro Grandi, ‘Positive Obligations (Garantestellung) Grounding Criminal Responsibility for not having Avoided an Illegal Result Connected to the AI Functioning’ (2023) 1 RIDP 59.
Imai, (n 12).
Karsai, Miskolczi, Nogel, (n 26).
Lacaze, Walther, (n 5).
Giannini, ‘US report’.
Grandi (n 87); Vincenzo Mongillo, ‘Corporate Criminal Liability for AI-related Crimes: Possible Legal Techniques and Obstacles’ (2023) 1 RIDP 69.
Gřivna, ‘Czech Republic report’.
Barresi, (n 35).
Mongillo (n 92).
Stamhuis, (n 15).
Aires de Sousa, (n 65).
Beck, Gerndt, (n 5).
Karsai, Miskolczi, Nogel, (n 26).
Barresi, (n 34).
See paragraph 4.
Kaiafa, Sachoulidou, Lima, (n 16).
Barresi, (n 34).
Wang, Zhang, (n 19).
Kaiafa, Sachoulidou, Lima, (n 16).
A Bayesian Network (BN) is a probabilistic graphical model for representing knowledge about an uncertain domain, in which each node corresponds to a random variable and each directed edge represents a conditional dependency between the corresponding variables, quantified by a conditional probability distribution. See Imai, (n 13).
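By way of illustration only (the network, variable names and probabilities below are a hypothetical textbook-style example, not drawn from any of the national reports), a minimal Bayesian network can be encoded and queried by enumeration in a few lines of Python:

```python
from itertools import product

# Toy network: Rain -> Sprinkler, and both Rain and Sprinkler -> WetGrass.
# Each node stores P(node = True | parents); the edges encode the dependencies.
P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}            # keyed by rain
P_WET = {(True, True): 0.99, (True, False): 0.9,  # keyed by (sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def prob(value, p_true):
    """Probability that a Boolean variable takes the given value."""
    return p_true if value else 1.0 - p_true

def p_wet_grass():
    """Marginal P(WetGrass = True), enumerating all parent assignments."""
    total = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        joint = prob(rain, P_RAIN) * prob(sprinkler, P_SPRINKLER[rain])
        total += joint * P_WET[(sprinkler, rain)]
    return total

print(round(p_wet_grass(), 5))  # 0.44838
```

The joint distribution factorises along the edges as P(R, S, W) = P(R) · P(S | R) · P(W | S, R), which is what the enumeration loop computes; real-world networks use the same factorisation with many more nodes and dedicated inference libraries.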
Özocak, Altunc, Kızılırmak, Kumkumoğlu, ‘Turkish report’.
Beck, Gerndt, (n 5).
Wang, Zhang, (n 18).
Sec. 31(2) of the Czech Criminal Code specifies which activities cannot fall under the defence of admissible risk: ‘Admissible risk is not concerned if such activity imperils the life or health of a person without their consent given in accordance with another legal regulation, or if the result to which it leads evidently does not correspond to the degree of the risk, or if the performance of the activity clearly contravenes the requirements of another legal regulation, public interest, principles of humanity or if it contravenes good morals.’ See Gřivna, ‘Czech Republic report’.
Filatova, (n 36).
Lacaze, Walther, (n 5).
See paragraph 4.1.
Barresi, (n 34).
Vanheule, Verbruggen, ‘Belgian report’.
Lacaze, Walther, (n 5).
Wang, Zhang, (n 18).
Lacaze, Walther, (n 5).
Beck, Gerndt, (n 5).
Aires de Sousa, (n 65).
Lacaze, Walther, (n 5).
Kaiafa, Sachoulidou, Lima, (n 16).
Gřivna, ‘Czech Republic report’.
Grandi (n 87).
Kaiafa, Sachoulidou, Lima, (n 16).
See paragraph 2.1.2.
It should be stressed that the draft for a new Criminal Code currently being debated in Parliament opts for a single, broad notion of perpetration (covering basically anyone who contributed to the offence with the necessary mens rea or participatory intent) and essentially abandons the traditional approach under which the liability of a series of persons depended (partially) on perpetration of the offence by the material perpetrator. The proposals do not seem completely thought through with regard to the perpetration liability of all persons involved and might still be adjusted somewhat in the course of the parliamentary debate. Some of the solutions suggested above, such as the analogy between AI systems or AA and animals, children or mentally disturbed persons, or the liability of instigators when the actual perpetration differs from the criminal conspiracy, will then have to be reconsidered. See Vanheule, Verbruggen, ‘Belgian report’.
The Court of Cassation of Belgium has defined the indirect perpetrator as follows: ‘Whereas the person who uses a third party as a mere instrument to commit a crime is himself the executor of that crime within the meaning of article 66, paragraph 2, of the Criminal Code and not its moral perpetrator by incitement, within the meaning of article 66, paragraph 4, of the Criminal Code’ (Cass. 9 March 1993, Arr.Cass. 1993, 268 and Pas. 1993, I, 260; Cass. 22 January 2013, P.12.0625.N.). Vanheule, Verbruggen, ‘Belgian report’.
Stamhuis, (n 15).
Constantin Duvac, Vlad Alexandru Voicescu, ‘Romanian Report on Traditional Criminal Law Categories and AI’.
Filatova, (n 36).
Barresi, (n 34).
Such as the duty to inform consumers and the administrative authorities about the harm the product may cause (Art. 7(7a; 9) Act 2251/1994) and to take all appropriate measures to protect the health and safety of consumers, including withdrawal or recall of a dangerous product prior to any administrative instruction/intervention/warning (Art. 7(7b) Act 2251/1994; cf. Art. 5(1f); 25 Ministerial Decision 2810/2004). The obligation to take such measures may also be based on the fundamental principle of good faith (see Art. 288 GrCivC), which refers to the directness and honesty every party is obliged to show in transactions. On this basis, the legal order may impose supplementary legal obligations on the parties, beyond the contractual ones or those arising from specific legal provisions. Kaiafa, Sachoulidou, Lima, (n 16).