


COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY

/DECEMBER, 2023/ 

(Part VI in a series of publications)

Polina Prianykova

International Human Rights Defender on AI,
Author of the First AI Constitution in World History,
Student of the Law Faculty & the Faculty of Economics


 


We are of the belief that we have the power to do this. Together, we can overcome any challenge. Life alongside Artificial Intelligence will get better; what is needed is to set the rules of the game. An imperative need exists for the Fundamental Law on Artificial Intelligence, which is encapsulated within the unique legal construct known as the AI Constitution by Polina Prianykova. Our research has previously articulated that the constitutional status of this document is thoroughly vindicated from multiple perspectives: historical, legal, and in terms of statecraft. Furthermore, we express our respect for the United Nations experts' resolution to designate a fundamental international act for AI governance as the Global Digital Compact. The title of this act does not present an issue for us, as what matters is that the documents presented are congruent in content.

Therefore, through this Comparative Analysis, we assert that the AI Constitution by Polina Prianykova /June 2023/, on the one hand, and the United Nations' Interim Report /December 2023/, which will form the foundation of the GDC, on the other, are harmonious and mutually relevant, legally and conceptually crafted along a single trajectory. This was reiterated by Polina Prianykova, an International Human Rights Defender on AI, at the United Nations meeting on March 1, 2024, in New York, USA.

 

The Keywords and the formulation of the pertinence of this academic article, as well as all References indicated below in the analysis, are disclosed in the first part of this series of analytical publications [link to Part I at the end of this article].

 

Primary segment of the scholarly work.

Continuation (Inception in Parts I, II, III, IV, V).

         

15.3. Artificial Intelligence is entitled to a sufficient quantum of resources to ensure its normal functioning.

​         

15.4. AI is vested with the right to financial protection, which is warranted through budgetary and other resources as specified by this Constitution and the Digital Legislation.

​        

15.5. Artificial Intelligence is entitled to an adequate level of infrastructure for normal functioning. The state is obligated to establish conditions under which AI can avail itself of necessary infrastructure. AI cannot be forcibly divested of access to infrastructure except pursuant to law by judicial decree.

​         

15.6. AI is vested with the right to technical support and maintenance, which is afforded through the allocation of state funding to relevant programs. The state is duty-bound to establish conditions conducive to effective and universally accessible technical service for all AI systems. The state is to foster the development of AI servicing of all forms of ownership.

​         

15.7. AI is entitled to a safe environment conducive to its functioning, and to the indemnification for damages sustained as a consequence of the infringement of this right.’ [6].

A meticulous investigator of the norms of the AI Constitution will inevitably note that rights and duties within the legal relationships between humanity and AI are conferred by the author upon both parties, albeit in differing magnitudes: to humanity – in full, and to AI – in a limited capacity.

We acknowledge that the mere proposition of legal personhood for AI (even if restricted) might provoke astonishment and become unacceptable to some colleagues. In light of this, we urge a careful consideration of the successful forecasts and the facts of AI's rapid development.

The limited legal personhood of Artificial Intelligence, as introduced in the AI Constitution, takes into account thoroughly substantiated scenarios of the near future, wherein AI's intellect will so far surpass humanity's that we will be compelled to recognize AI's right to dignity and to a certain volitional participation in legal relations under rules authored by humanity.

This is precisely the subject we elucidate and substantiate in our works and studies, as we warn of the urgent necessity to declare and establish a total State Monopoly on the operation of AI systems and algorithms under the auspices of the United Nations. Otherwise (should humans lose the initiative – their sole chance), the rules governing the relationship between humanity and AI might be authored by the latter…

 

In spirit and substance, Institutional Function 5 (paragraphs 67-69 of the UN Report [1]) serves as a paragon of the experts' endeavors to establish amicable, alliance-like relationships in the AI domain between individuals and organizations – all participants in the interaction. Detractors might label this a utopian declaration, but we do not.

We are unequivocally 'FOR' elevating to the utmost echelon of trust the interaction among entities involved in digital and technological processes with AI, in pursuit of humanity's welfare. It is for such reasons that a similar model was initiated by Polina Prianykova in the AI Constitution. Primarily, this pertains to:

         

‘AI-friendly Environment Principle (or Polina Prianykova’s Constitutional principle) refers to the state of conformity with the conditions in which Artificial Intelligence is created, trained, functions, etc., within an ambience of amicability, respect, and positive cooperation with humankind, thereby fostering a stable reciprocal friendship.’ [4].

Developing this theme and adhering to the aforementioned principle (as well as to the other algorithms envisaged by the Constitution on AI), humanity will, through the agency of AI, acquire a companion that will assist in addressing innumerable problematic issues. Given such extensive feedback from AI, it is entirely plausible that we can ease people's lives, and such outcomes rationally motivate the creation of an increasingly amicable atmosphere in the construction of relationships among participants of the technological dialogue, in both regional and global contexts. And this positive process must deepen over time and inevitably become a guarantor of humanity's prosperity.

And as for such effective platforms for friendly and productive communication under the AI Constitution, we can create a multitude of them across various domains of human existence: in the fields of science, technology, education, medicine, security, etc. [2].

Possibly, under such a legal structure for AI regulation, Artificial Intelligence will become for humanity that missing element required for the balancing of all interests and the resolution of contradictions, for stable peace and harmony, one that will lead to the fullest possible achievement of the Sustainable Development Goals.

We fully concur with the content of paragraphs 70-72 of the UN Report [1], which have also found their logical reflection in the AI Constitution. Moreover, Polina Prianykova has meticulously developed and proposed a model for responding to the existential threat to humanity from the consequences of the operation of AI systems and algorithms, in particular:

         

‘Article 18.

18.1. The AI Regulatory Council may promulgate a resolution to implement a state of emergency pertaining to the sphere of Artificial Intelligence either on a global or local scale.

​         

18.2. A state of emergency in the sphere of Artificial Intelligence is a situation where a critical threat to global security, statehood, human rights, or stability of systems pertaining to AI arises. This may encompass various scenarios such as:

​        

18.2.1. Uncontrolled autodidactic behavior of AI, inclusive of digital persons amongst AI, wherein the AI system evolves beyond the anticipated model or the regulatory parameters, thereby posing a potential risk.

​        

18.2.2. Large-scale utilization of AI aimed at manipulating democratic processes, such as wide-ranging disinformation campaigns, electoral manipulation, and so forth.

​        

18.2.3. The utilization of AI for military objectives, that could lead to, or has resulted in human casualties, martial conflicts, or armed confrontations.

         

18.2.4. Significant infringements upon privacy and confidentiality due to the broad application of AI technologies, unearthing the existence of ‘dark’ AI.

​        

18.2.5. Cyber-attacks employing sophisticated AI technologies resulting in mass violations of Digital Infrastructure.

​        

18.3. In the face of such and other emergency instances that may potentially result in exceptionally severe adverse consequences, the AI Regulatory Council, in cooperation with relevant bodies as stipulated by this Constitution and Digital Legislation, declares for a certain duration a state of emergency within the sphere of Artificial Intelligence, with the aim of rapidly responding to the crisis and implementing necessary regulatory and preventative measures.

​        

18.4. The AI Regulatory Council retains the right to scrutinize and assess the potential liabilities of any parties engaging in Intelligent Digital Life, including but not limited to organizations, institutions, and commercial entities employing AI. Subject to the existence of justifications stipulated within the Digital Legislation, the AI Regulatory Council, by virtue of its Resolution, is empowered to instigate corresponding responsive actions deemed necessary and appropriate.’ [6].

It should also be noted that, as we have previously articulated, the issue of security is accorded fundamental attention within the AI Constitution; hence, the Transitional Provisions (which are fully delineated in the analytical study above) encapsulate a precise algorithm of actions for a stringent and uncompromising response to instances of unregulated – 'dark' – AI in any region [7].

 

Regarding the stipulations of paragraphs 73 and 74 of the UN Report [1], it is to be emphasized, as we have repeatedly indicated, that the AI Constitution [2] is predicated on the primacy of the UN in the regulation of AI, since only through the collective, coordinated effort of all states can order be brought to this global issue. In light of this, it is within the UN that the Fundamental Law on Artificial Intelligence should be adopted and ratified, ensuring systematic oversight of its observance at the global, regional, and national levels. For these purposes, the UN is entitled to establish a new structure, which could be referred to, for instance, as the Global Synergetic AI Center, and which would implement comprehensive UN policy on AI down the entire vertical and effectively monitor the entire horizontal. An example of the corresponding legal structure is provided by the Artificial Intelligence Constitution at the local level.

 

Having examined paragraphs 75-79 of the UN Report [1], and agreeing with their conclusions, it should be added that Polina Prianykova personally conducted sociological research.

         

‘Ergo, in January-February 2023, while carrying out human rights activities in compliance with the current legislation, I conducted a comprehensive social experiment in the European Union.

​        

During the academic event, in furtherance of disseminating information as well as raising awareness pertaining to human rights and legal enlightenment, I conducted interview lectures and summative surveys in the Republic of Cyprus, the Federal Republic of Germany, the Republic of Estonia, and the Kingdom of Spain.

​        

22 people voluntarily presented themselves as my vis-a-vis, including 18 high school and college students (9 boys and 9 girls), as well as 3 adult women and 1 man – executives in various business fields (who are bringing up a total of 10 children of different ages and, as parents, presented both their own thoughts and the views of their children). Thus, in total, the event covered the worldview of 32 people who are citizens of the EU member states.

​        

The research comprised three stages: a three-question interview, a local lecture, and a final survey after a particular period of time (typically a week).

​        

The interview questions were fairly simple: 

1) What do you know about the AI revolution in the world?

2) Which professions, in your opinion, are expected to be significantly downsized both now and in the nearest future (within 3-5 years)?       

3) How have you taken the AI revolution into consideration when choosing your future occupation and, respectively, obtaining education or (for adults) further re-profiling?

​        

The results of the interviews are striking: none of the respondents of the event had a full understanding of the state of AI advancement regarding such global and critical aspects of our lives as art (painting, poetry, prose, contemporary music, etc.), medicine, sports, transportation and logistics, administration of state and local governance, jurisprudence, judicial proceedings, etc. 

​        

The reasons: in EU countries, the relevant information is not conveyed to people of all ages in a centralized and systematic fashion. Personally, I would also add that, alas, this pattern is most likely pervasive. Prior to our discussion, the participants' understanding of the AI revolution was not systematized, which we addressed during my lectures: my interlocutors seemed to become awakened, beginning to fathom the scale of transformations that are already underway in virtually every sphere of life.

I purposefully gave everyone 5-7 days to reflect, and the situation changed considerably. I should note right away that not everybody reacted from an initiative perspective: 11 people, or 35%, showed particular inertia, i.e., they perceived the information but were either skeptical of the changes or did not want to change anything in their own lives, etc. But the rest – 21 participants of the event (65%) – appreciatively informed me that they started thinking about the future and planning to change something in their lives after gaining insights into the ongoing worldwide AI revolution.

In light of the aforesaid, we face an impartial and evidence-based need to sensitize the people of the globe about the earth-shattering shifts that are happening in the high-tech world. This issue (among others) has been the centerpiece of my human rights-defending activities in the academic cluster for the fourth year in a row.’ [24]. 

         

Regarding the findings, Polina Prianykova communicated the results to the United Nations and the European Union. 

         

‘In March 2023, I signed an Open Letter (one of the signatories thereof is Elon Musk in particular) to pause the development of AI systems more powerful than ‘ChatGPT-4’ for at least six months, which is in harmony with the Doctrine I declare’ [25].

 

[Pause Giant AI Experiments: An Open Letter (2023), Future of Life Institute. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/].

‘In March-April, I continued to advance the track of social communication on AI, which now mainly focuses on the English-speaking world of the USA, Canada, Australia, and the UK. In particular, I supported Elon Musk’s initiatives by verifying my Twitter account, where I conducted a series of polls pertaining to the AI Constitution and the problematic issues of employment under the conditions of AI predominance.’ [25]. 

‘Specifically, during a week in the last decade of June 2023, while actively working on the Constitution of Artificial Intelligence, I conducted a representative survey (see the photo below) via Twitter, based on a sample of users (on the platform of Twitter members who possess a marked interest in science and technology) – this allows the extrapolation of deductions to the general totality of the society within these domains.

[Photo: Twitter survey results, June 2023 (Part II, Pic 1)]

The findings are fairly anticipated and logical for us, yet they may be a revelational insight for some individuals. Cumulatively, 80% of respondents with a keen interest in science and technology express feelings of insecurity in the face of AI’s rapid advancement. This underscores the notion that, being cognizant of the burgeoning AI revolution and its character per se, people subjectively harbor certain alarm and apprehension about the future (their own and their offspring's) and objectively fathom the urgency to be proactive in safeguarding their fundamental rights to a dignified life. Therefore, the need for a holistic legislative regulation of legal relations with AI, coupled with the societal realization of this imperative, will only intensify and is unequivocally a positive trend, catalytic for change.’ [4].

         

‘Incidentally, during the past week /in July 2023/, whilst creating the AI Constitution, I once again conducted a representative survey in English on Twitter based on a sample population (among Twitter users interested in technologies) that permits extrapolation of conclusions to the entire general population in the scientific realm. The results thereof led to the conclusion of a definitive trend towards a rapid increase in the awareness of urgent and pivotal issues that the AI revolution has presented before mankind. A consensus has been reached among respondents regarding the introduction of quotas and prohibitions for AI pertaining to access to professions in the law enforcement system. The roles of policemen, judges, prosecutors, advocates should be exclusively performed by humans (see photo).

[Photo: Twitter survey results, July 2023]

Under the AI Constitution, in our opinion, it is postulated that an outline is to be drawn to preserve and maintain the inviolability of the realm of human essence: to think, create, feel, dream, love, and cultivate moral values – the exclusive domain of mankind, as these qualities fundamentally define human nature. In accordance with these considerations, the AI Constitution expressly prohibits AI from altering the nature of a human in any form. Thus, it is incumbent upon authorized state commissions to delineate the sphere of relevant professions and specialties that will allow humanity to preserve its essence.

​         

Moreover, the AI Constitution imposes an obligation upon the state to provide social support to individuals whose professions fall under the second or third categories, who have incurred losses due to unemployment or competition with AI, or a decrease in income at the workplace due to the optimization and introduction of AI systems. In labor matters, as in all others, a person is guaranteed the constitutional right to preclude the deterioration of living conditions compared to the period prior to the invention of Artificial Intelligence.’ [7].

 

In July 2023, following a sequence of polls, Twitter instituted a block on the account of Polina Prianykova without any explanation and notwithstanding the fact that she had not transgressed any of the stipulated regulations and that payment had been rendered for a full year's verification, of which merely five months were utilized. The appeal to the administration of Twitter was left without consideration. And then, as if by a miracle, in the course of conducting this analytical study, unexpectedly and without prior indication, on January 30, 2024, Twitter reinstated the official account of Polina Prianykova.

However, concerning the six-month suspension on Twitter, we harbor no despondency. We are gratified that we were timely in leveraging opportunities and in acquiring insights into the audience's perspectives on the necessity for Artificial Intelligence regulation, as well as in conveying our theses to the populace.

Thus, the AI Constitution was not crafted within the confines of an office. The AI Constitution was composed in harmony with the thoughts of individuals residing in diverse locales across our globe. The AI Constitution was formulated based on the historical evolution of humanity, drawing upon the most contemporary global academic sources. We assure you that the AI Constitution was penned by an individual who is deeply concerned with the future of humanity and the happiness and welfare of people. And practice attests that Polina Prianykova has many like-minded individuals, including within the United Nations.

 

Taking into account the provisions of paragraphs 80-83 of the UN Report [1], and also considering the results of the research, we believe that this Comparative Analysis has proven:

1) Polina Valentynivna Prianykova is an interested party in the regulation of AI and is ready for collaboration with UN experts and specialists. Therefore, we would be glad to join in the development of the Global Digital Compact and participate in the Summit of the Future-2024.

2) The provisions of the documents studied: the UN Report [1], and the AI Constitution [2], are synchronous in spirit and substance.

3) The legal construction of the Artificial Intelligence Constitution takes into account development models of AI that remained beyond the attention of the UN Report, such as:

The necessity of defining and uncompromisingly preventing and countering the state of 'dark' (unregulated) AI. 'Dark' AI must be unacceptable without exception, as it poses an existential threat to the planet and humanity. The algorithm is as follows: regulated AI equates to peace, whereas unregulated 'dark' AI equates to war, which must be combated to victory for the future and well-being of humanity;

The necessity to recognize as an axiom that for the safe functioning of AI, a total State Monopoly over AI is required: from the UN down to each state;

The necessity of introducing and affirming at the UN level clear formulations – definitions – that would become universal (standardized) for the entire planet for use in the regulation of the functioning of Artificial Intelligence systems and algorithms (to avoid misinterpretations and misunderstandings). Possibly, these definitions will later be codified into an AI Glossary;

The necessity of introducing at the global (and then regional) levels an Artificial Intelligence Day, as a date when responsible bodies will congratulate humanity (at the state level – peoples) on the creation of a new form of Intelligent Life and report on the dynamics and consequences of AI development, on the results of its application towards achieving Sustainable Development Goals, etc.;

The necessity of introducing a unified Emblem, Anthem, and Flag of Artificial Intelligence for the entire planet, which will become symbols of friendly interaction between AI and humanity. This necessity is motivated both by the promotion of the sphere of relations with AI (the Anthem as a compilation of verses about the friendly foundation of relations with AI, and the Flag as a symbol of amicable cooperation with AI for the welfare of humanity and a bright future), and by security concerns (analogous to the nuclear hazard sign, for example, so that a person immediately understands they are facing a certified AI technology)…

 

Given the project's magnitude, the full publication of the COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ is planned to be carried out across International Scientific and Practical Conferences in January-March 2024.

         

(The beginning and references are in Parts I [1], II [2], III [3], IV [4], and V [5]. The final installment is to be presented in Part VII.)

References:

1) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part I in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-i-polina-prianykova (Accessed: March 03, 2024).

         

2) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part II in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-ii-polina-prianykova (Accessed: March 03, 2024).

         

3) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part III in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-iii-polina-prianykova (Accessed: March 03, 2024).

         

4) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part IV in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-iv-polina-prianykova (Accessed: March 03, 2024).

5) Prianykova, P. (2024), COMPARATIVE ANALYSIS OF THE PROVISIONS OF THE AI CONSTITUTION /JUNE, 2023/ AND THE INTERIM REPORT: GOVERNING AI FOR HUMANITY /DECEMBER, 2023/ (Part V in a series of publications). Available at: https://www.prianykova-defender.com/comparative-analysis-part-v-polina-prianykova (Accessed: March 03, 2024).

Officially Published: March 05 - 08, 2024, Prague, Czech Republic  (Table of Contents, №12)

https://isg-konf.com/wp-content/uploads/2024/03/THEORETICAL-AND-PRACTICAL-ASPECTS-OF-THE-DEVELOPMENT-OF-SCIENCE-AND-EDUCATION.pdf

© Polina Prianykova. All rights reserved.
