


Cybercrime as an Obstruction to the Deployment of AI into Miscellaneous Transport Systems
(a Taxonomy of Criminal Liability for the Use of AI Included)

Polina Prianykova

International Human Rights Defender on AI,

Author of the First AI Constitution in World History,

Student of the Law Faculty & the Faculty of Economics



on the basis of the European Union legal framework 

The creation and deployment of Artificial Intelligence has brought crucial changes to the functioning of various spheres of human activity. This being accepted, the law must inevitably change in order to remain relevant and efficient, pursuant to the aim set for it long ago: establishing standards that regulate different relations and ensure the protection of human rights and liberties. Taking into consideration the analysis of the legal framework that enshrines the rule of law, our research gives special focus to the legislation of the EU.

Keywords: artificial intelligence, criminal liability, automated vehicles, drones, autonomous driving, electronic personhood, cybercrime, transport systems, ad-hoc regulation, legislation on AI.

The relevance of our scientific work is attributable to the emergence of novel relations (intrinsically intertwined with the implementation and use of Artificial Intelligence (henceforth ‘AI’), inter alia its deployment in transport systems in the form of automated vehicles and drones) and the absence of regulation thereof, particularly in criminal law. Liability for such activities has not yet been clarified, and the life-threatening possibility of a rise in cybercrime is gaining significance, entailing perilous and menacing implications for the guarantee and protection of fundamental human rights.

Recent research and publication analysis. The studies we have considered in order to illuminate the subject of our article are closely connected with the provisions and strategies formulated within the EU legal framework, examined together with the Concept on the development of Artificial Intelligence in Ukraine and its aims for the eradication of cybercrime. We have also given prominence to the early works in which ideas of conferring a special legal status on AI were proposed, inter alia the law review article of Lawrence B. Solum, Professor of Law and a renowned American legal theorist. In addition, we draw attention to admonitions concerning unregulated AI expressed by Elon Musk, an entrepreneur and business magnate who is contributing to the development of such innovations.

The primary purpose of our research is to highlight the importance of establishing criminal liability for the use of AI, to substantiate the problem and to propose possible solutions for preventing AI-related cybercrime in the future.

Presentation of the main body of the article. First and foremost, in the spirit of the words once said by the Stoic philosopher Marcus Aurelius, we have to define the nature of Artificial Intelligence, a novelty that is developing in quantum leaps. Although no definition of AI has been universally accepted, the definitions given by some organizations reflect the complexity of the technology to varying degrees. According to the updated notion given by the European Commission’s High-Level Expert Group on Artificial Intelligence, ‘AI refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behavior by analyzing how the environment is affected by their previous actions’ [1]. Undoubtedly, to grasp how sophisticated AI-based innovations are, a granular approach has to be applied in order to keep up with advancements. Moreover, Artificial Intelligence can also be considered a scientific discipline in which techniques for the novelty’s integration are studied.

AI has been gradually gaining strength in the EU market and has by now gathered momentum, becoming a real cluster of an unprecedented revolution in transport systems, a transformation that many of us, perhaps, have not noticed amid the onset of the COVID-19 pandemic.

However, Digital Day, held in Rome in April 2017, provided an impetus for intensified cooperation on testing connected and automated vehicles: under a Letter of Intent signed by 29 European countries, including not only EU Member States but also members of the European Economic Area, ten 5G cross-border corridors are to be created with the aim of decreasing the level of road accidents, streamlining traffic efficiency and thereby ensuring a stable level of safety, while also reducing traffic congestion and greenhouse gas emissions [2].

Moreover, judging by the ‘View on 5G Architecture’ published by the 5GPPP Architecture Working Group in October 2021, 5G systems have already been implemented: they are widely available in major urban areas and are going to reach less populated areas in the not-too-distant future [3]. This implies that such opportunities will make automated vehicles grow in favorability and thus make the EU a definite world leader in the deployment of the transport mentioned above.

Notwithstanding the fact that the EU is pursuing objectives noble and important for society (the achievement of ‘so-called Vision Zero, i.e., no road fatalities on European roads by 2050’ [4]), the acquis communautaire is still not in a position to provide even a proper regulation of liability for the use of automated vehicles. Consequently, many ‘grey areas’ arise in the identification of breaches of law, and compliance with the primary principles of fundamental human rights is becoming a challenging task for legislators. It is noteworthy that even the Communication from the European Commission ‘On the road to automated mobility’ stipulates that ‘the long-term effects of driverless mobility on the transport system, the economy, the environment and on existing jobs are still largely unknown’ [4]. That is why there is a strong need for a sector-specific approach that would help governments assess the all-encompassing consequences of the aforementioned practices, through regulatory provisions that are substantial rather than declarative and correspond to contemporary realities and needs.

Hence, it has to be emphasized that legislative action on the adaptation of the law and its alteration to recognize AI should be well conceived and prompt, starting with a general law on AI and then specifying the subject with sectoral laws, given that the field to be regulated is vast and still developing. Moreover, it is important to distinguish between the forms of liability, among which we give prominence to criminal liability in particular.

Concerning the establishment of criminal liability for the use of AI in the transport sphere, it is important to note that this term refers to responsibility for a crime and the penalty imposed for it. The problem we touch upon is closely connected with the challenges of defining who is liable for the use of AI and what recourse is available to aggrieved parties. Thus, one of our aims is to state in more concrete terms certain policy actions proposed at EU level, building and harmonizing a new liability regime for AI deployed in transport systems, guaranteeing the protection of people and ensuring preventive measures against new types of crimes [5].

To comply with such a state of affairs, Artificial Intelligence has to be recognized not only as an object of a crime but also as a subject thereof, meaning that it should be legally identified. Moreover, there have been proposals to enshrine a new term, ‘electronic personhood’, the first attempts at which date back to the last century [6]. Nowadays, however, the introduction of this term has generated far more vigorous discussion of whether such a practice is prudent and efficient. It has also led to claims that conferring ‘electronic personhood’ on the phenomenon may equate the natural person with an artificial subject: AI would be endowed with legal personality, raising a certain dissonance and a corresponding extension in establishing liability. Such diverse novel claims may be indicative of a totally new age for legal systems throughout the world per se.

In our research, we admit the possibility that the notion of ‘electronic personhood’ may become accepted, but only on the condition that it would not violate or impinge on human rights. What is more, it does not entail the eventuality of granting AI inventions consciousness, which is an inseparable part of the complexity of the human constitution. Such a practice would mean that AI technologies become ‘tangible’ and thus identified by law. Besides, in modern realities natural or legal persons have to be responsible for the technological novelties with AI implemented, as the latter’s stage of development is constantly advancing: ‘joint liability’ among human beings and cutting-edge machines envisages a foreseeable burden for those parties. We admit and support the right of humankind to innovate, but on a legal platform where guilty innovations may be reprogrammed, dismantled or annihilated if the evidence proves that the technology is defective.

It is important to accentuate the expectation that some inventions may open the prospect of further development of AI able to gain sentience, mobility and room for learning, reasoning, and formulating intentions and wishes. Although we are not adherents of such theories, we have to mention that scientists would apparently not turn their backs on an opportunity to conduct experiments with such AI, and hence the introduction of intelligent and autonomous transport systems in particular is predictable. At any rate, the foreseeability of the damage caused by such results of technological genesis should be evaluated in order to recognize the respective inventions as sources of increased danger. Nevertheless, this state of affairs gives rise to the question: ‘Would we and our specifically built legal systems be able to handle the issues that may emerge from superintelligent inventions (to which we once ceded more control) that are given the right to decide on the life and death of human beings?’ That is why society and governments have to define red lines for the use of AI novelties, so that these borders do not hinder technological progress but become a stimulus for exercising control over the innovations, with the aim of precluding collapse and preserving the global order on the planet.

Specifying liability for the use of various types of novel vehicles, it is important to highlight the differences between these machines. First, we consider the glossary provided by the European Parliamentary Research Service in its Briefing, where an automated vehicle is defined as a vehicle equipped with technology that assists the driver, so that elements of the driving task can be transferred to a computer system [7].

As human override is still required and feasible in this case, the driver should be able to take control of the vehicle whenever the system identifies such a necessity or particular circumstances demand it, because the driver still carries an increased risk of harm to others. Hence, if the evidence proves that an accident with severe consequences was precipitated by the fault or negligence of the driver, that person has to be subject to strict liability for the damages resulting from the vehicle’s use.

However, the circle of responsible persons may expand if AI-technology specialists establish, considering the circumstances, the probability of the automated vehicle’s malfunction due to the fault of the manufacturer or the software-developing companies. Moreover, as the report ‘Liability for AI and other emerging digital technologies’ by the Expert Group on Liability and New Technologies points out, ‘strict liability of the producer should play a key role in indemnifying damage caused by defective products and their components, irrespective of whether they take a tangible or a digital form’ [8].

It is also vital to note that liability is usually borne by the party which has greater control over the machine with AI deployed.

Meanwhile, an autonomous vehicle is capable of performing all driving functions without any human intervention, and is supposed to reach the highest known level of automation (categorized by SAE as Level 5) [9].
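For reference, the SAE J3016 standard distinguishes six levels of driving automation, of which Level 5 is the highest. A minimal sketch of this classification (the level names follow the SAE standard; the helper function and its threshold are our own illustrative simplification, not part of the standard):

```python
# Illustrative sketch of the SAE J3016 levels of driving automation.
# Level names follow the SAE standard; the helper function is a simplification.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def human_override_expected(level: int) -> bool:
    """At Levels 0-3 a human driver must remain ready to take control;
    at Levels 4-5 the system can perform the entire driving task."""
    if level not in SAE_LEVELS:
        raise ValueError(f"Unknown SAE level: {level}")
    return level <= 3
```

This distinction matters legally: the paragraphs above attach driver liability precisely to the levels at which a human override is still expected.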

Although these innovations are currently being tested, they could be on the market within a few years, judging by the constant acceleration of technological progress. In this regard, some changes in establishing criminal liability may arise, as the driver would be superseded by the operator (in some cases, several operators): a person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from such operation. There is a certain taxonomy among the aforementioned: a frontend operator is the person who decides how the innovation is used in particular and mainly benefits from its use; a backend operator defines the features of the technology and bears greater responsibility for exercising control over the operational risks [8].
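The operator taxonomy above can be sketched as a simple data model. The class, attribute and function names below are our own illustrative choices (only the two roles come from the cited report [8]), and the numeric "degree of control" is an invented stand-in for what would, in practice, be a fact-sensitive legal assessment:

```python
from dataclasses import dataclass
from typing import List

# Illustrative model of the frontend/backend operator taxonomy [8].
# Names and the numeric control scale are our own, not defined in the report.
@dataclass
class Operator:
    role: str               # "frontend" or "backend"
    degree_of_control: int  # relative control over operational risk (0-10)
    benefits_from_use: bool

def primary_liability_candidate(operators: List[Operator]) -> Operator:
    """Pick the operator with the greatest control over the machine,
    reflecting the idea that liability usually follows control."""
    return max(operators, key=lambda op: op.degree_of_control)

ops = [
    # Frontend operator: decides how the innovation is used, mainly benefits.
    Operator("frontend", degree_of_control=3, benefits_from_use=True),
    # Backend operator: defines the technology's features, controls its risks.
    Operator("backend", degree_of_control=8, benefits_from_use=False),
]
```

On these assumed figures, the backend operator would be the primary candidate, which mirrors the report's allocation of greater responsibility to whoever controls the operational risk.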

Hence, the traditional practice of defenses and certain statutory exceptions to strict liability have to be reviewed and complemented by provisions on the new types of possible liability holders.

It has been stipulated, in particular in the context of autonomous cars, that ‘while the vast majority of accidents used to be caused by human error in the past, most accidents will be caused by the malfunctioning of technology in the future’ [8]. Thus, the manufacturer would bear higher risks, being liable for accidents in which its autonomous systems are involved. Besides, due to the complex structure of producers, the applicable concept of liability is still vague (whether vicarious or strict) and has to be studied comprehensively. Notwithstanding the fact that the producer is most likely to be held responsible in case of a fault in its vehicles, the driver/owner/operator decides how and when to use the autonomous transport and for which purposes, and directly benefits from it. Hence, joint and solidary liability has to be taken into consideration and given more prominence.

We also suppose that there are scenarios in which an owner or a dealer company may be criminally liable for giving permission to use a transport system, or for supplying vehicles (regardless of their level of automation), while being aware that the people they provide the AI technologies to intend to use these innovations for purposes that may pose a threat to society, inter alia illicit drug and arms trafficking, and even homicides and terrorist attacks. Thus, the whole process of AI-implemented vehicles’ invention, deployment, dealership and use has to be supervised and given the closest attention.

Considering liability for the use of unmanned aerial vehicles, drones in particular, the aspects of criminal liability have an even more sophisticated and unclarified taxonomy. The liability holders may, perhaps, be similar to those in cases pertaining to autonomous driving. Although drones are often associated with their role in military actions, nowadays they are also extensively used in advertising and retail, telecommunications, infrastructure development, weather forecasting, journalism, gaming, live entertainment, etc.

It is evident that such technologies continue to gain momentum while ad-hoc provisions have not been proclaimed. Meanwhile, people are still not protected from situations in which a drone falls to the ground, with the ensuing damage to life, health and property. Hence, an owner or operator should be strictly liable if someone decides to use a drone despite foreseeable adverse weather conditions that proscribe the use of such innovations; in addition, the person may be held responsible not only for the risks of operating drones but also for failing to attempt to stop the use of the device during the storm. The manufacturer can also be liable for damages if the drone’s software is faulty or the design and parts of the system do not enable the operator to perform the prescribed responsibilities.

Furthermore, the vehicles mentioned above can additionally be fitted with connectivity, meaning that the machine would be able to establish a link with other devices or the infrastructure via the Internet. The innovations would thus ‘communicate’ with each other, creating an intelligent and responsive composition of vehicles with low-latency, smooth and secure data flows, in which traffic is intended to be administered and balanced.

In the European Union, at least three large-scale projects are planned: L3PILOT, an extensive test of various automated driving functions for passengers; AUTOPILOT, a blueprint in which real connected and automated eco-systems are envisaged; and ENSEMBLE, a project aimed at saving fuel and reducing the carbon footprint by virtue of a truck platooning strategy [4].

However, this status quo implies an inextricable interconnection between vehicles whose regulation has not been stipulated at an adequate level per se. This ‘non liquet’ may entail serious consequences, particularly in further liability shifting or in cyberattacks, which are acquiring ever more nebulous features and origins. It is quite natural that, for unscrupulous IT experts, cyberattacks on novel vehicles would be a veritable ‘tidbit’ to launch.

It suffices to imagine a situation in which a perpetrator infiltrates the data center of connected and automated mobility and sends a command to an entire cross-border corridor of vehicles to accelerate, deploys an instruction to turn right, or commands the drones to power down. We cannot allow such a state of affairs to prevail, as it may threaten people’s lives and health, shake the foundations of stability and safety on the roads as well as in the air, and lead to the evolution of novel forms of terrorist attacks.

It is also significant to emphasize that criminals involved in the abovementioned terrorist activities have to be strictly liable for their crimes; moreover, such acts can be assessed as international in nature, as they pose a threat to the world order. Hence, if the outlaw attempts to hide in a country different from the one in which the crime was committed, the criminal must be placed on the international wanted list, issued a Red Notice and handed over within INTERPOL’s area of activity. It should be pointed out that during the Europol-INTERPOL Cybercrime Conference held in November 2021, a common thread was that ‘cybercrime is an urgent global security risk’ and ‘law enforcement and the private sector need to take strong, collective action’ [10]. Furthermore, the main goals stipulated in the Concept on the development of Artificial Intelligence in Ukraine are attuned to the aforementioned tactics [11].

That is why anomalous activities and disruptions in transport systems have to be recorded and analyzed in order to prevent further malfunctioning. There is also a great need to determine the places where certain technologies cannot be used (e.g., eliminating the use of drones above crowded places, with a particular focus on kindergartens, schools and parks; minimizing autonomous driving on unmodernized roads). Moreover, special programs that encourage IT experts to detect the innovations’ defects have to be popularized and supported by governmental organizations in order to improve the features of the machines, making their software resistant to external interference aimed at changing the main principles of their functioning.
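One common technical building block for such place-based restrictions is geofencing. A minimal sketch of a check that a drone's position lies outside protected zones follows; the coordinates, radii and function names are invented purely for illustration and do not come from any regulation cited in this article:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hypothetical protected zones: (latitude, longitude, radius in metres).
PROTECTED_ZONES = [
    (50.4501, 30.5234, 500.0),  # invented city-centre school
    (50.4600, 30.5100, 300.0),  # invented park
]

def flight_allowed(lat: float, lon: float) -> bool:
    """True if the position lies outside every protected zone."""
    return all(haversine_m(lat, lon, zlat, zlon) > zr
               for zlat, zlon, zr in PROTECTED_ZONES)
```

A real system would, of course, rely on authoritative no-fly-zone data and certified positioning rather than a hand-coded list; the sketch only shows how the legal restriction translates into a mechanical, auditable check.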

Hence, if automated vehicles do not have a certain immunity from hackers and their malicious software, the whole future of these technologies would be jeopardized, and the question arises whether, given legislators’ inaction, our world needs to continue implementing technologies that remain faceless to the legislature. Elon Musk, CEO of Tesla and founder of SpaceX, once emphasized: ‘AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane’ [12]. Although nuclear war is considered the most calamitous scenario, the implications of unregulated cutting-edge technologies with AI implemented could equally result in a massive catastrophe. The aforementioned challenges can be conquered and prevented only by an exceptional solidary global response, inter alia in the form of comprehensive regulation on AI, the deployment of consequential worldwide governmental security software for the innovations, and a clear strategy for eradicating cyberattacks.

Summarizing the aforementioned, it can be acknowledged that we are standing on the threshold of the introduction of novelties that prove that world changes are inexorable. Consequently, the legal system is undergoing a sea change and has to keep pace with the times. Unfortunately, criminal liability for the use of AI transport systems has not been established at an adequate level, not only in the EU but globally. Thus, only under well-defined regulatory provisions on AI would fundamental human rights be inviolable, the system of law unambiguous, and a legislature or judicial authority able, like Themis, the symbol of justice known and appreciated worldwide, to see blindfolded and judge even in challenging circumstances.

References

1. ‘A definition of AI: Main capabilities and scientific disciplines’, High-Level Expert Group on Artificial Intelligence – URL: https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf, Accessed on 21 November 2021

2. ‘EU and EEA Member States sign up for cross border experiments on cooperative, connected and automated mobility’ – URL: https://wayback.archive-it.org/12090/20171013225916/https://ec.europa.eu/digital-single-market/en/news/eu-and-eea-member-states-sign-cross-border-experiments-cooperative-connected-and-automated, Accessed on 21 November 2021

3. ‘5G Architecture White Paper V4.0’ – URL: https://5g-ppp.eu/wp-content/uploads/2021/11/Architecture-WP-V4.0-final.pdf, Accessed on 21 November 2021

4. ‘On the road to automated mobility: An EU strategy for mobility of the future’, Communication from the European Commission – URL:  https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018DC0283 , Accessed on 21 November 2021

5. ‘Artificial intelligence in road transport – Cost of non-Europe report’ – URL:  https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_STU(2021)654212, Accessed on 21 November 2021

6. ‘Legal Personhood for Artificial Intelligences’, Lawrence B. Solum – URL: https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=3447&context=nclr, Accessed on 21 November 2021

7. ‘Automated vehicles in the EU’, European Parliamentary Research Service – URL: https://www.europarl.europa.eu/RegData/etudes/BRIE/2016/573902/EPRS_BRI(2016)573902_EN.pdf, Accessed on 21 November 2021

8. ‘Liability for artificial intelligence and other emerging digital technologies’, Expert Group on Liability and New Technologies – URL: https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en, Accessed on 21 November 2021

9. ‘Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles’, SAE International Website – URL: https://www.sae.org/standards/content/j3016_202104/, Accessed on 21 November 2021

10. ‘Innovation to beat cybercrime acceleration the theme of 2021 Europol-INTERPOL Cybercrime Conference’ – URL:  https://www.interpol.int/News-and-Events/News/2021/Innovation-to-beat-cybercrime-acceleration-the-theme-of-2021-Europol-INTERPOL-Cybercrime-Conference, Accessed on 21 November 2021

11. Concept on the development of Artificial Intelligence in Ukraine – URL:  https://prianykovabusiness.wixsite.com/defender/projecto-1, Accessed on 21 November 2021

12. ‘A.I. is far more dangerous than nukes’, Polina Prianykova – URL: https://prianykovabusiness.wixsite.com/defender/post/a-i-is-far-more-dangerous-than-nukes, Accessed on 21 November 2021

Officially published November 23–26, 2021, Athens, Greece (Table of Contents, № 35)

https://isg-konf.com/wp-content/uploads/2021/11/Science-foundations-of-modern-science-and-.pdf

© Polina Prianykova. All rights reserved.
