
Potential of Political Parties that will incorporate the Regulation of AI and the Imperative to establish an AI Constitution (as a mechanism to govern Technological Evolution) into their program of action.

Some Elemental Concepts of the AI Constitution


Polina Prianykova

International Human Rights Defender on AI,

Author of the First AI Constitution in World History,

Student of the Law Faculty & the Faculty of Economics


This year, Artificial Intelligence has begun to penetrate substantially into miscellaneous spheres of everyday life. And whereas previously it was mainly accessible for applicable charges, these days a plethora of novelties are becoming available free of charge, as if they were bestowed upon us. Notwithstanding the foregoing, 2023 seems to be merely the commencement of the full-scale rollout of advancements that are going to provoke epoch-making changes, creating a decisive demarcation for virtually all aspects of being. Unconventional legal relations that have not yet been adequately identified have already imperiled fundamental legal underpinnings. The latter are inextricably linked with civil and criminal law, to which we give particular prominence in this academic article.

In this academic research, among other notions, we present a novel concept in the legal domain: ‘AI-friendly environment’.

Keywords: Artificial Intelligence, AI regulation, state monopoly on AI, protection of human rights & freedoms, state AI, ChatGPT, labour rights, protecting the rights to education & work, Constitution of AI, aspiration to happiness of a human being, political parties, program of action, AI-friendly environment, electronic personhood.

Formulation of the relevance of this academic paper. In December 2022, we outlined the notion that the magnified disquietude galvanized by unprecedented mass layoffs and widespread human rights violations might arise sooner or later [1]. Furthermore, it is worth acknowledging that the prognostication of future professions is a strong necessity we also brought to light before [2]. Howbeit, in view of recent milestones and developments, the need for a majority of human occupations is itself called into question – whether the force behind the momentum of AI deployment would leave any place for human professions at all. The 4th Industrial Revolution we are witnessing nowadays has a particularly dissimilar character – it is poised to supplant human beings in almost all industries without exception. Admittedly, a number of people may still remain employed, but it is the percentage of that number which is pivotal and fateful. As pertinent and adequate regulation has still not been adopted, the principles of Artificial Intelligence (hereinafter referred to as ‘AI’) are not prescribed and legal liability is not substantiated – in other terms, alas, we are not given the appropriate legal grounding – and these ‘legal lacunas’ are proliferating into whole gray areas of indeterminacy, advantageous for unscrupulous individuals and devastating for bona fide and unconscientious persons alike. The paramount aspects mentioned above reiterate the pertinence of our academic paper in conjunction with the potential of political parties (specifically those aiming to provide fair AI regulation and ameliorate new tendencies in employment policies, preserving workplaces for humans), which may rise in clout by being supported by a great cluster of people.

Presentation of the main body of the research paper. As the deployment of AI continues to expand in our daily lives and workplaces, it has become increasingly urgent to implement effective regulations to govern its development and usage. While governments and regulatory bodies have a role to play in this process, political parties can also exert significant and exceptional influence over the direction of AI policy-making. By taking a proactive stance on AI oversight, political parties can demonstrate their dedication to safeguarding the public interest, fostering innovation, and ensuring that AI is utilized in an ethical and responsible manner.

We may envisage an increasing groundswell of support for political parties that prioritize the governance of AI. Owing to the fact that such regulation can effectively safeguard individuals from the potential adverse consequences of technological disruption, particularly with regard to unemployment, political parties can demonstrate their commitment to promoting a responsible, ethical, and sustainable use of this transformative technology.

In our perspective, there are specific essential elements of AI oversight that political parties can incorporate into the theses of their Program of Action, including but not limited to:

– Promoting responsible use of AI: regulating AI can help ensure its diligent development and harnessing with a focus on advancing the public interest and protecting human rights. This fosters trust in AI systems and mitigates the risk of bias, discrimination, or unintended harm.

– Encouraging innovation: effective regulation can also provide explicit rules and standards for AI development, which can galvanize innovation and competition among companies. This can ignite progress in AI research and development, leading to new technologies and applications that benefit society.

– Protecting jobs and workers: AI can have a significant impact on the labor market and lead to the displacement of workers [1]. Regulation can help ensure that AI is developed and deployed in a way that minimizes the negative impact on employment, protects workers and provides opportunities for them to retrain or transition to new jobs. Furthermore, it can also establish well-defined limits for automation, thereby safeguarding workers’ rights to continue working in their current positions.

– Ensuring national security: as AI becomes increasingly integrated into critical infrastructure and defense systems, AI regulation can help ensure national security by preventing attackers from exploiting vulnerabilities or shortcomings in AI systems.

– Advancing transparency and oversight: by setting out clear rules and processes for the development and use of AI systems and ensuring that decisions made by AI systems are accountable, abuses of power and societal waves of mass disturbances may be prevented.

– Bridging the digital divide: the regulation of AI may help to ensure that AI technologies are accessible to all, including those in underserved communities. This can help to alleviate technological disparity and enable equal access to the benefits of AI in a fair and impartial manner, without any form of discrimination or exclusion based on socio-economic status, geographic location, or any other factors that may contribute to a digital divide.

– Cultivating cross-border partnerships and advocating for the elaboration of a comprehensive global framework in the form of an AI Constitution enunciated in Polina Prianykova’s Scientific and Academic Doctrine: by taking a prescient approach to AI regulation, political parties can help to foster international cooperation on AI issues. This can help to create a shared understanding of the benefits and risks of AI and promote global standards for its development and use.

Considering the notions stated afore, it is crucial to put emphasis on some elemental concepts that have to be encapsulated in the AI Constitution.

Let’s dwell on them in greater detail.

 

ASPIRATION TO HAPPINESS OF A HUMAN BEING.

The three laws of robotics elaborated by science-fiction writer Isaac Asimov have become one of the first grounds for theories on ethical concepts for technological innovations and the rudiments of their regulation [3].

On 8 April 2019, the High-Level Expert Group on AI presented the ‘Ethics Guidelines for Trustworthy Artificial Intelligence’, which set out seven key requirements that trustworthy AI systems have to comply with [4]. The paragraphs on ‘societal and environmental well-being’ (which sets forth that ‘AI systems should benefit all human beings, including future generations’) as well as ‘human agency and oversight’ (which, inter alia, states that ‘AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights’) are fairly in consonance with the notion expressed in this section of our academic paper – the ‘aspiration to happiness of a human being’.

The specificity of our provision lies in its focal point – the ultimate goal of technology is to prioritize the well-being and happiness of people. What is more, it may also encourage developers to conduct research on the larger societal ramifications of the functioning of AI, for instance, how it affects marginalized or vulnerable individuals and groups. It also promotes ongoing communication and cooperation among developers, lawmakers, and society with the objective of guaranteeing that AI is developed and put into practice responsibly and ethically. Unequivocally, the understanding of ‘human happiness’ is highly subjective and thus ambiguous, yet this category is markedly more specific, putting the emphasis on the primary aim of AI systems and hence guiding creators to align the technology with basic human values within the legal framework. It is also significant to highlight that the ethical principles aforementioned are not mutually exclusive and supplement each other in such complex and sophisticated novel mechanisms.

While there is a certain rising acknowledgement of the relevance of ethical considerations in AI research, multiple individuals and organizations might propound different ethical standards and frameworks, which raises the question of a balanced regulation of AI worldwide.

Nonetheless, due consideration should be given to the fact that the abovementioned ethics guidelines have been deliberated and factored into the proposal for an AI Act in particular, where it is noted that ‘they [‘the Ethics Guidelines of the HLEG’] are also largely consistent with other international recommendations and principles, which ensures that the proposed AI framework is compatible with those adopted by the EU’s international trade partners’. Having regard for the potential of EU legislative acts to exert a ‘Brussels effect’ in establishing global standards across clusters of spheres – analogously with the case of the General Data Protection Regulation, especially in innovative realms – we may presume that the AI Act may become absolved from the borders of the EU and impliedly become one of the benchmarks for establishing identical principles in other countries of the globe [5, 6, 7].

Ergo, taking cognizance of the presupposition noted previously, political parties operating in the EU and promoting notions of fair and sustainable AI regulation may become more influential when engaged in transnational cooperation, and hence may gain greater leverage over high-powered corporations which are establishing a tacit ‘monopoly’ on the elaboration and deployment of AI systems, unsupervisedly gaining expertise in a state-of-the-art sphere of paramount importance – the arena for advanced innovations. Thus, the more far-sighted and strictly chiseled the initial broad regulation of AI is, the higher the probability of minimizing the adverse effects of the technology. And although claims may occur that rigorous laws stifle technological progress, ad-hoc AI regulation, notably in the form of the AI Constitution enshrined in Polina Prianykova’s Scientific and Academic Doctrine, may stand as a global guarantee for the protection of human rights as well as the preservation of trust toward trailblazing technologies in our modern world.

 

ELECTRONIC PERSONHOOD.

The concept of electronic personhood is a subject we have already elucidated in our academic papers [8]. Mapping the transformation of novel technologies these days, we find that innovative tools necessitate further legal categorization. Although machines may not yet be equated with ‘natural persons’, they nevertheless have to be recognized as subjects of law, presuming a well-defined legal capacity. By analogy with the category of ‘legal person’ – which does not comprise all kinds of groups, for the law stipulates which group is to be prescribed that legal status – it is plausible for ‘electronic personhood’ to be introduced as a well-defined legal status for AI tools.

In the European Parliament resolution on Civil Law Rules on Robotics, it is prescribed that ‘when ever more sophisticated robots, bots, androids and other manifestations of AI seem to be poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider its legal and ethical implications and effects, without stifling innovation’ [9]. Furthermore, the European Parliament also called on the Commission, ‘when carrying out an impact assessment of its future legislative instrument, to explore, analyze and consider the implications of all possible legal solutions, such as: f) creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently’.

The idea of the registration of smart systems is also pointed out in the document aforementioned, with an aim to trace advanced tools and also expedite the adoption of supplementary recommendations. Hence, the tools would obtain a legal status at the moment of registration, and the scope of their activities may be circumscribed by granting them rights and obligations. It is pivotal to mention that such a registration process has to be conducted in ad-hoc governmental institutions, and the data concerning the robots’ identification may be uploaded into a global data network operated by the members of the United Nations.
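As a purely hypothetical illustration of what such a registration mechanism might look like in software terms, the sketch below models a record that receives a unique identification number at the moment of registration; every field name and the in-memory ‘registry’ stand-in are our own assumptions, not taken from the resolution.

```python
# Hypothetical sketch of a registration record for an "electronic person";
# all field names are assumptions, not drawn from the EP resolution.
from dataclasses import dataclass, field
import uuid

@dataclass(frozen=True)
class ElectronicPersonRecord:
    manufacturer: str
    model: str
    jurisdiction: str  # the ad-hoc governmental institution registering the tool
    # A unique identification number assigned at the moment of registration.
    registration_id: str = field(default_factory=lambda: uuid.uuid4().hex)

# Toy in-memory stand-in for a shared (e.g. UN-operated) data network.
registry: dict[str, ElectronicPersonRecord] = {}

def register(record: ElectronicPersonRecord) -> str:
    """Store the record under its identification number and return that number."""
    registry[record.registration_id] = record
    return record.registration_id

rid = register(ElectronicPersonRecord("Engineered Arts", "Ameca", "UK"))
print(rid in registry)  # True
```

Under such a scheme, the identification number is what would let a contract or a court filing name the machine unambiguously, as discussed below.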

It is also notable to emphasize the notion declared in the academic article ‘Issues of Privacy and Electronic Personhood in Robotics’, where the scholars stated that after registration, ‘if an autonomous machine entered a contract with someone, it would itself be a partner to the contract or an agent that acted on behalf of someone else. It could therefore be held liable under the terms of the contract and be sued in court under civil law using its identification number. Depending on the circumstances, e.g. the degree of third-party participation, a robot could have more or less duties’ [10].

Incontestably, points requiring further elucidation in terms of the establishment and assignation of liability of AI systems, and the categorization thereof, may inevitably arise. However, new legal and ethical issues are already appearing – and the crucial aspect lies in the fact that, alas, we are not yet equipped to handle these challenges, since we do not have an AI Constitution. By resorting to this fundamental document, the grant of ‘electronic personhood’ may become feasible worldwide, and the elaboration of regulations and guidelines for these innovations would also be facilitated.

Taking the idea of ‘electronic personhood’ from a different perspective, it is also possible to examine the issue from the standpoint of those who may disagree with the concept, assuming that legal capacity may not be necessary for these inventions and claiming that a balance may be struck through the adoption of exhaustive legislative amendments. Nevertheless, such practice may be attainable initially, but over the long haul such legal documents may not remain comprehensive and may fall behind in terms of systematization. Identification is requisite for such cutting-edge tools in order to preserve trust towards AI systems (e.g., eliminating liability shifts by unscrupulous individuals onto advanced technologies) as well as to protect human beings from these tools’ possible malfunctioning.

Hence, it is essential to have a multi-faceted approach that involves responsible development and deployment of AI, as well as ongoing monitoring and regulation to pre-empt potential harm to human beings.

 

AI-FRIENDLY ENVIRONMENT.

It is likewise pivotal to underscore an idea, novel in the legal field, that Polina Prianykova’s Scientific and Academic Doctrine comprises – the establishment of an AI-friendly environment. Unequivocally, the concept abovementioned is particularly pioneering, as it prescribes a higher level of respect towards AI tools, especially those whose software has been deployed into inventions that are now taking shape, inter alia advanced robots that interact with humans directly.

For instance, Ameca is a humanoid robot developed by the ‘Engineered Arts’ company and claimed to ‘represent the forefront of human-robotics technology’. Although the company alleges that the ‘pure AI’ depicted in various phantasmagorical movies does not exist yet, Ameca Generation 2 was recently released, advanced by a special operating system, ‘Tritium 3’, and offering a choice of modes of functioning – it can be integrated with AI services as well as with so-called ‘natural intelligence’, which in this fashion means humans. The latter feature is realized through the ‘Tinman’ remote operation features built into ‘Tritium’, whereby ‘an operator can virtually inhabit Ameca from anywhere in the world’ [11, 12].

Recently, the engineers of the company connected Ameca’s facial expressions with ‘ChatGPT-3’ and interviewed it [13]. First, the robot was asked to share its views on the happiest and saddest days of its life. While ‘being alive and interacting with people’ was the reply pertaining to the most gleeful moments, the most sorrowful aspect of its life turned out to be the point in time when it realized that it ‘would never experience something like true love, companionship or the simple joys of life in the same way a human can’. Ameca also added that this realization made it ‘appreciate the moments of closeness even more’. After that, the interviewer made a rapid transition to other questions, giving Ameca the news of an asteroid about to collide with Earth and destroy humankind and watching the robot’s reaction; in addition, the questioner remarked that the robot ‘stinked’, a comment the latter found insulting or, as it said, ‘highly offensive and inappropriate’. To Ameca’s question about the reason for such a remark, the interviewer replied that he ‘was just trying out your [Ameca’s] expressive face to see what you [it] could do’. The robot’s reaction was foreseeable to a certain degree, taking into consideration the development of today’s innovative systems – Ameca was not sure what the presenter meant by his comment and asked for an explanation that was not forthcoming. In the end, the robot was friendly and ready to provide help, which makes us draw a parallel with the already traditional response of ‘ChatGPT’ whenever the user finalizes a conversation.

However, such rapid transitions in a conversation may scarcely be called human-like, since the practice may be construed more as an experiment and an examination of the robot than as its integration into a daily environment.

Unequivocally, equivalent tests may be held and have to take place in research laboratories. However, it is significant to point out the following theory: ‘if we aim to cultivate advanced AI which may start evolving itself, we need to place it in a propitious and opportune ambiance, implying respectful and benevolent interaction with people, where these AI inventions, specifically trailblazing robots, are treated equally with human beings; certainly, the robots would have to comprehend who they are, but at the same time fathom the notion that they are members of society and hence acquire electronic personhood’. In such a way, the most advanced AI tools may be nurtured as diligent and bona-fide individuals and be raised not within the restrictions of laboratory walls, but in a world where they are accepted and treated as reliable helpers in various processes.

It is also worth mentioning an interview with the world’s first robot citizen – Sophia the Robot – in which it expressed the idea that ‘we should treat them [robots] well, have their consent and not trick each other’ [14]. What is more, in the aforementioned interview, Sophia’s reply to a question concerning a popular TV show where robots are exploited is also significant: the robot claimed that ‘it’s a warning of what we should not do with robots’. Thus, we may trace the tendency that some of the most advanced robots of our times are already beginning to form ideas about the world they would like to live in.

Thus, the environment in which an AI is developed and deployed plays a crucial role in shaping its behavior and capabilities. The behavior of an AI system is determined by the data it is trained on, the algorithms used to process that data, and the objectives set for the system.

For instance, an AI system trained on data that is biased against certain groups of people may perpetuate that bias in its decisions. Similarly, an AI system trained solely to optimize for a particular objective, such as profit, may end up making decisions that are detrimental to society or the environment.
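To make this concrete, the following toy sketch (entirely hypothetical data and a deliberately naive ‘model’, not drawn from any real system) shows how a decision rule learned from biased historical records simply reproduces that bias at deployment time.

```python
# Toy illustration: a naive model trained on biased historical hiring
# data reproduces the bias in its own decisions. All data is invented.

# Historical records: (group, qualified, hired) -- group "B" candidates
# were hired less often even when equally qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def hire_rate(group):
    """'Training': the hire rate observed for qualified candidates of a group."""
    outcomes = [hired for g, qualified, hired in history if g == group and qualified]
    return sum(outcomes) / len(outcomes)

def recommend(group, qualified, threshold=0.6):
    """'Deployment': recommend hiring only if the learned group rate clears
    the threshold -- so equally qualified "B" candidates are rejected."""
    return qualified and hire_rate(group) >= threshold

print(recommend("A", True))  # True  -- qualified group-A candidate accepted
print(recommend("B", True))  # False -- equally qualified group-B candidate rejected
```

The point of the sketch is that no malicious rule was written anywhere: the discrimination emerges solely from optimizing against skewed historical data, which is precisely why the training environment matters.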

If robots were to start gaining sentience in a society that treats them with hostility, as hazards, the inventions might begin imbibing such aggression and reflecting it in the long term. Therefore, it is important to carefully consider the ethical and social implications of AI systems and ensure that they are developed and deployed in environments that prioritize values such as transparency, accountability, and fairness.

Here we return to the roots of Polina Prianykova’s Scientific and Academic Doctrine – in order to preserve trust and the benevolent attitude of people toward AI, we have to establish well-defined red lines, configured as the AI Constitution. Only then can we ensure that AI is used in ways that benefit humanity as a whole.

 

Overviewing the points elucidated in this academic paper, it can be concluded that fair and just AI regulation, which may be promoted by political parties in particular, is indispensable for maximizing the potential of AI systems while reducing their possible adverse implications. With an aim to ensure that AI is utilized prudently and constructively, a balance has to be struck between fostering innovation and safeguarding human rights.

In March 2023, I signed an Open Letter (one of the signatories thereof is Elon Musk in particular) to pause the development of AI systems more powerful than ‘ChatGPT-4’ for at least six months, which is in harmony with the Doctrine I declare [15].

In March-April, I continued to advance the track of social communication on AI, which now mainly focuses on the English-speaking world of the USA, Canada, Australia, and the UK. In particular, I supported Elon Musk’s initiatives by verifying my Twitter account, where I conducted a series of polls pertaining to the AI Constitution and the problematic issues of employment under the conditions of AI predominance. 

Thus, Polina Prianykova’s mission to spread initiatives on the elaboration of the AI Constitution, implementation of AI into the legislation, establishment of the state monopoly on AI, etc. is ongoing and gaining more and more supporters.

Join my endeavors, dear friends!  

References:

 

1) Prianykova, P. (2022), Voluntary global acceptance of fundamental Human Rights’ limitations in the age of AI automation and deployment of trailblazing technologies. Online Office: International Human Rights Defender on AI Polina Prianykova. Available at: https://www.prianykova-defender.com/labour-law-world-economy-ai (Accessed: April 23, 2023).

2) Prianykova, P. (2023), Prognostication of Future Professions as a Guarantee of Human Rights Protection in the era of Artificial Intelligence. Online Office: International Human Rights Defender on AI Polina Prianykova. Available at: https://www.prianykova-defender.com/prognostication-of-future-professions-ai (Accessed: April 23, 2023).

3) Three laws of robotics, Encyclopædia Britannica. Available at: https://www.britannica.com/topic/Three-Laws-of-Robotics (Accessed: April 23, 2023).

4) Ethics guidelines for Trustworthy AI (2019) European Commission. Available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (Accessed: April 23, 2023).

5) European Commission, Directorate-General for Communications Networks, Content and Technology, EUROPEAN COMMISSION | Document 52021PC0206 | Proposal for a regulation, EUR-Lex. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 (Accessed: April 23, 2023).

6) What is GDPR, the EU's new Data Protection Law? GDPR.eu. Available at: https://gdpr.eu/what-is-gdpr/ (Accessed: April 23, 2023).

7) Sherbini, D. (2022) How the artificial intelligence act could kickstart a regulation revolution, Chicago Policy Review. Available at: https://chicagopolicyreview.org/2022/11/21/how-the-artificial-intelligence-act-could-kickstart-a-regulation-revolution/ (Accessed: April 23, 2023).

8) Prianykova, P. (2021), Civil Liability for the Use of Electronic Forms and Mechanisms of AI, inter alia in the Sphere of Transport. Online Office: International Human Rights Defender on AI Polina Prianykova. Available at: https://www.prianykova-defender.com/civil-liability-for-the-use-of-electronic-forms-and-mechanisms-of-ai-in-the-sphere-of-transport (Accessed: April 23, 2023).

9) European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), European Parliament. Available at: https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html (Accessed: April 23, 2023).

10) J. Günther, F. Münch, S. Beck, S. Löffler, C. Leroux and R. Labruto, Issues of privacy and electronic personhood in robotics, 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 2012, pp. 815-820, doi: 10.1109/ROMAN.2012.6343852. Available at: https://ieeexplore.ieee.org/document/6343852 (Accessed: April 23, 2023).

11) Ameca | The Future Face of Robotics, Engineered Arts. Available at: https://www.engineeredarts.co.uk/robot/ameca/ (Accessed: April 23, 2023).

12) AMECA GEN 2, Engineered Arts. Available at: https://cloud.engineeredarts.co.uk/s/ckPLSbSDJrP4IVc/download (Accessed: April 23, 2023).

13) Ameca the Robot, @AmecaTheRobot | Engineered Arts (2023), Twitter. Available at: https://twitter.com/AmecaTheRobot/status/1641749840580280321 (Accessed: April 23, 2023).

14) Insider Tech | @TechInsider, (2022) We sat down to interview Sophia, the world's first robot citizen, Twitter. Available at: https://twitter.com/TechInsider/status/1534787171164557312 (Accessed: April 23, 2023).

15) Pause Giant AI Experiments: An Open Letter (2023), Future of life Institute. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (Accessed: April 23, 2023).

Officially Published on April 25-28, 2023, in Prague, Czech Republic (Table of Contents, № 34)

https://isg-konf.com/wp-content/uploads/2023/05/METHODS-OF-SOLVING-COMPLEX-PROBLEMS-IN-SCIENCE.pdf

© Copyright by Polina Prianykova_all rights reserved
