
AI CONSTITUTION, AI AND NEW JOB HORIZONS:

SHAPING STRATEGIC INTERNATIONAL WORKFORCE POLICIES FOR HUMAN AND ROBOTIC INTEGRATION,

INTER ALIA INTO THE REALM OF CYBERSECURITY,

IN TIMES OF CRISIS AND CHANGE


Polina Prianykova

International Human Rights Defender on AI,
Author of the First AI Constitution in World History,
Student of the Law Faculty & the Faculty of Economics


 

In an era where digital dominion is not merely a strategic advantage but a necessity, groundbreaking clusters of Artificial Intelligence (hereinafter referred to as ‘AI’) and cyber diplomacy are intersecting, forging ever more inextricable links and gaining sophistication, and hence posing challenges in identifying lacunas in their governance.

As global entities grapple with the critical dilemma of advancing technology while safeguarding state security, AI stands at the core of such policies as both a guarantee of adamantine safety and a perilous weapon once fallen into the wrong hands.

Concomitantly, the labour market is undergoing tectonic shifts, driven by AI's potential both to displace conventional jobs and to create novel professional vistas (an issue Polina Prianykova has brought to attention for the third consecutive year) [17]. Thus, the deployment of AI into business practices, paralleled by its consideration in global public policies, is a proactive step in international cooperation with a view to securing equilibrium between human and robotic workforces in the face of economic quagmires.

To navigate these developments, the AI Constitution serves as an instrumental scaffold, ensuring that AI is integrated in adherence to human rights while encouraging economic robustness [1].

 

Prolegomenon. Study Rationale. The accelerated incorporation of AI into multiple sectors necessitates a robust framework for cybersecurity to safeguard both data integrity and infrastructure at scale. This is paramount as cyber threats advance in complexity and pervasiveness, posing grave risks to national security and economic balance.

Polina Prianykova addressed these issues, among a series of others, before the scientific community during the Multistakeholder Consultative Sessions on the Development of a Continental Strategy on Artificial Intelligence in the African Union, which comprises 55 African countries with a population of approximately 1.3 billion (16% of the global population), on April 19, 2024 [2]. The doctrines, protocols, and legal frameworks she proposed for addressing these pressing issues were attentively and positively received by the academic community of the African Union.

To effectively delineate the challenges and complexities of AI in cybersecurity within the framework of our academic paper, we may additionally reference specific high-profile cyber incidents and associated statistics that underscore the imperative of making irrefutable decisions, reflecting a strategic response to the escalating demands of digital risk management. 

 

Example 1: The WannaCry Ransomware Attack [3, 4]

Statistics and Impact:

  • In May 2017, the WannaCry ransomware attack compromised over 200,000 computers across 150 nations, with total estimated damages spanning from hundreds of millions to billions of dollars.

  • Major institutions like the UK's National Health Service were disrupted, resulting in the cancellation of nearly 19,000 appointments and operations.

  • Estimated financial losses for the National Health Service were around £92 million, as reported by the UK government.

Discussion:

  • This incident vividly illustrates the latent peril that cyber tools, purportedly developed within national security frameworks, may elude control and cause widespread damage. It underscores the urgent need for steadfast international collaboration and unwavering commitment to established norms and guidelines, casting a stark light on our collective vulnerability in the digital age absent ad-hoc regulation.

 

Example 2: The SolarWinds Attack [3]

Statistics and Impact:

  • Discovered at the end of 2020, this sophisticated supply chain attack affected approximately 18,000 customers of the SolarWinds software, including government entities and Fortune 500 companies.

  • The breach was notable not only for its scale but also for the stealth and methodology used, which involved inserting malicious code into software updates.

Discussion:

  • This breach highlights the critical need for international standards in software development and the monitoring of supply chains, a key area that could be governed under the proposed AI Constitution and its supplementary protocols to ensure that AI and cybersecurity tools do not become vectors for international instability.
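To make the monitoring concrete, consider integrity verification of software updates, the very distribution channel the SolarWinds attackers abused. Below is a minimal sketch in Python, assuming a vendor publishes a manifest of SHA-256 digests for each update artifact; the file names, manifest format, and the `verify_update` helper are hypothetical illustrations, and a production pipeline would additionally verify a cryptographic signature over the manifest itself.

```python
# A minimal sketch of supply-chain integrity checking, assuming a vendor
# publishes a manifest mapping artifact names to SHA-256 digests.
# File names and the manifest format are hypothetical illustrations.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(artifact: Path, manifest_path: Path) -> bool:
    """Return True only if the artifact's digest matches the pinned manifest entry."""
    manifest = json.loads(manifest_path.read_text())
    expected = manifest.get(artifact.name)
    return expected is not None and expected == sha256_of(artifact)

if __name__ == "__main__":
    # Hypothetical usage: refuse to install an update whose digest
    # does not match the vendor's published manifest.
    if verify_update(Path("update-2024.04.pkg"), Path("manifest.json")):
        print("Digest matches manifest: proceed with installation.")
    else:
        print("Digest mismatch or unknown artifact: abort installation.")
```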

 

Example 3: Carnegie Mellon University Cyberattack [5]

Statistics and Impact:

  • In August 2023, Carnegie Mellon University experienced a cyberattack, impacting approximately 7,300 individuals whose personal information may have been compromised.

  • The breach was part of a broader issue as higher education faces alarming digital threats.

  • The university responded by securing the system quickly and engaging law enforcement. As stated in the sources, there is no reported evidence of fraud or misuse of the information.

  • Following the investigation, the university began notifying affected individuals on January 12, 2024, and offered credit monitoring services.

Discussion:

  • This incident at Carnegie Mellon is indicative of a larger trend where educational institutions are targeted by cybercriminals. In the last two decades, approximately 32 million records have been compromised in 2,700 education data breaches across the United States.

  • Pennsylvania, where Carnegie Mellon is located, ranks fifth in the U.S. for the number of records breached, with 283,000 records impacted in 57 breaches.

  • This growing threat underscores the vital importance of robust cybersecurity measures and the necessity for institutions to reinforce their defenses against unprecedented cyber-attacks.

 

Furthermore, the incursion of AI into cybersecurity heralds a critical juncture, compelling a reevaluation of traditional roles and paradigms. As reported in ISC2's survey, "AI in Cyber 2024: Is the Cybersecurity Profession Ready?", an overwhelming 88% of practitioners have acknowledged the indelible imprint of AI on their professional duties, predominantly enhancing efficiency yet casting a looming shadow of redundancy over certain tasks [6]. A combined 54% of respondents have observed a substantial uptick in cyber threats within the last six months, with 13% attributing this directly to AI-generated threats.

This evidences a tectonic shift in the cybersecurity profession, necessitating preemptive legal strategies to recalibrate the workforce in terms of AI's integration.

The pertinence of AI in fortifying cyber defenses is irrefutable; yet, it also presents a Gordian knot for legal professionals in particular, who must now delineate the boundaries of liability, ethics, and governance within this new digital frontier – and the AI Constitution may stand as an answer to this clarion call.

 

Economic Implications of AI Integration. The transformative impact of AI on the labor market requires effective crisis management strategies to address potential job displacement and the evolution of new roles. AI's ability to automate tasks presents both challenges and opportunities for the workforce; the repercussions thereof are elucidated below.

Statistics from a PwC report estimate that AI could contribute up to $15.7 trillion to the global economy by 2030, with labor productivity improvements being one of the significant drivers. However, the same advances could disrupt 30% of jobs due to automation [7].
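To put these headline figures in perspective, the back-of-envelope sketch below combines PwC's 30% exposure estimate with a purely illustrative workforce size and job-creation rate; every number other than the 30% share is an assumption for demonstration, not a figure from the report.

```python
# Back-of-envelope arithmetic on the PwC figures cited above.
# Only AT_RISK_SHARE comes from the report; the workforce size and the
# rate of newly created AI-adjacent roles are illustrative assumptions.
GLOBAL_WORKFORCE = 3.5e9   # assumed global workforce (illustrative)
AT_RISK_SHARE = 0.30       # PwC: up to 30% of jobs exposed to automation
NEW_ROLE_SHARE = 0.20      # assumed share of workforce in newly created roles

jobs_at_risk = GLOBAL_WORKFORCE * AT_RISK_SHARE
jobs_created = GLOBAL_WORKFORCE * NEW_ROLE_SHARE

print(f"Jobs exposed to automation: {jobs_at_risk / 1e9:.2f} bn")
print(f"Jobs in new AI-adjacent roles: {jobs_created / 1e9:.2f} bn")
print(f"Net change under these assumptions: {(jobs_created - jobs_at_risk) / 1e9:+.2f} bn")
```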

 

AI Constitution: Safeguarding Human Labor. The Constitution on Artificial Intelligence confronts one of the most pressing concerns in the realm of AI integration: its impact on human labor [1].

"The state determines areas of activity in which: human labor is inviolable; human labor can be partially replaced by AI systems, within the limits defined by law; human labor can be fully replaced by AI systems, particularly in cases where such labor is factually or potentially extremely dangerous to human life and health."

 

Article 1.9.1 of the Constitution explicitly addresses the potential displacement of human labor by AI, proposing a legal framework to ensure that AI complements rather than supplants human workers. It delineates areas where human labor is inviolable, sectors where AI can partially substitute human labor within legal limits, and scenarios where AI can wholly replace human roles, particularly where human safety is at risk. This nuanced approach reflects a foundational shift towards creating labor policies that are not only responsive but also anticipatory of technological advancements.
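For illustration, the tripartite taxonomy of Article 1.9.1 lends itself naturally to machine-readable encoding in a compliance system. The sketch below is purely schematic: the `LaborReplaceability` enum mirrors the three categories quoted above, while the sector assignments and the `review_deployment` helper are hypothetical assumptions, not provisions of the Constitution.

```python
# A schematic sketch of how the tripartite taxonomy of Article 1.9.1 might
# be encoded in a hypothetical labor-policy compliance tool. The sector
# assignments below are illustrative assumptions only.
from enum import Enum

class LaborReplaceability(Enum):
    INVIOLABLE = "human labor is inviolable"
    PARTIAL = "AI may partially replace human labor, within limits defined by law"
    FULL = "AI may fully replace human labor, e.g. where it endangers life or health"

# Hypothetical sector mapping for illustration only.
SECTOR_POLICY = {
    "judiciary": LaborReplaceability.INVIOLABLE,
    "logistics": LaborReplaceability.PARTIAL,
    "deep-sea welding": LaborReplaceability.FULL,
}

def review_deployment(sector: str) -> str:
    """Report the replaceability category on file for a sector, if any."""
    policy = SECTOR_POLICY.get(sector)
    if policy is None:
        return f"{sector}: no classification on file; escalate to regulator."
    return f"{sector}: {policy.value}."

for sector in ("judiciary", "logistics", "deep-sea welding", "education"):
    print(review_deployment(sector))
```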

By integrating AI strategically, the aim is to bolster economic resilience, making industries more adaptive and less vulnerable to disruptions. This includes leveraging AI for high-risk environments or repetitive tasks, thereby freeing up human capital to engage in more creative and strategic roles, which could lead to job creation in emerging sectors that manage, regulate, and integrate AI technologies.

 

Workforce Challenges in AI-Driven Cybersecurity. Continuing from the foundational principles set forth in the AI Constitution to protect human labor, the landscape of workforce development, especially in technology sectors such as cybersecurity, presents significant challenges that parallel broader concerns in AI integration.

The global shortage of nearly three million cybersecurity professionals highlights a critical gap in the technical workforce that is mirrored in the AI sector [3]. This shortage is exacerbated by high attrition rates: 65% of cybersecurity operations center workers have considered leaving their roles due to stressful conditions and excessive workloads, as noted in a 2020 Ponemon Institute study. The difficulty in filling these roles is not just a matter of quantity but also of quality; 70% of hiring managers believe that less than half of all candidates possess the necessary qualifications, according to ISACA's 2020 research.

Furthermore, there is a significant underrepresentation of minorities and women in cybersecurity, reflecting a broader issue of diversity in tech. The International Consortium of Minority Cybersecurity Professionals reports that only 14% of the cybersecurity workforce identifies as female, and minorities are underrepresented in senior leadership roles compared to other professions [7]. This lack of diversity can lead to a homogeneity of thought, which may stifle innovation and adaptability—qualities essential for both cybersecurity and AI fields.

 

Strategic Responses to Workforce Challenges require a multifaceted approach.

 

1. Legal Frameworks and Corporate Governance. As the deployment of AI in cybersecurity accelerates, the temptation for organizations to replace human professionals with automated systems increases, potentially leading to a reduction in the cybersecurity workforce [8]. This shift underscores the urgent need to strike a balance between leveraging AI for enhanced security and preserving critical human roles within this sector. Our academic paper emphasizes the protection of cybersecurity job positions while advocating for the development of new roles that combine AI proficiency with traditional security skills. The creation of these hybrid positions not only addresses the threat of job displacement but also enriches the field with nuanced insights that human expertise can provide.

In particular, the establishment of dedicated cyber risk committees within corporate governance structures underscores the recognition of cybersecurity as a critical pillar of strategic risk management. This evolution is not merely a defensive maneuver but a proactive engagement that aims to harmonize human skills with AI capabilities, ensuring that the workforce can navigate through crises and changes with resilience and adaptability.

The connection between the establishment of dedicated cyber risk committees within corporate governance structures and the governance provisions outlined in the AI Constitution is clear and critical. According to the AI Constitution, particularly under Articles 20 and 23, the governance of AI, including cybersecurity aspects, is to be conducted by specialized regulatory bodies such as the AI Regulatory Council, the AI Synergetic Center, and the AI Regulatory Arbitrators [16].

The AI Regulatory Council, as envisioned, would play a pivotal role in setting the strategic direction and policies regarding AI and its safe integration into various sectors, including cybersecurity. This body, comprised of state officials, IT experts, and representatives from security agencies, as well as scholars and public opinion leaders, ensures that decisions regarding AI are made transparently and inclusively.

Dedicated cyber risk committees within corporations would operate under the broader framework established by these AI governance bodies. They would ensure that corporate strategies not only comply with international and constitutional AI standards but also align with the specific legal mandates and ethical guidelines set forth in the AI Constitution. These committees would be responsible for implementing and adapting AI technologies in a manner that safeguards digital and human assets while promoting the AI-friendly environment stipulated in the Constitution.

In line with this, the AI-Friendly Environment Principle, as established by the AI Constitution, is defined as 'the state of conformity with the conditions in which Artificial Intelligence is created, trained, functions, etc., within an ambience of amicability, respect, and positive cooperation with humankind, thereby fostering a stable reciprocal friendship.' This principle not only guides the ethical integration of AI into business operations but also emphasizes AI’s potential to contribute to economic growth without undermining human economic interests. It advocates for AI systems that are designed and operated to support human workers, fostering an environment where technological advancement propels economic development while ensuring job security and workforce satisfaction.

By integrating this principle, cyber risk committees can further ensure that the deployment of AI technologies within their organizations promotes a positive and collaborative interaction between humans and AI, enhancing productivity and innovation across various sectors (e.g., AI can optimize supply chain logistics, improve precision in manufacturing, and enhance decision-making processes through data analytics, all of which contribute to economic stability and growth).

This structure allows for a cohesive governance approach where local corporate practices in cybersecurity are guided by the overarching principles and regulations developed by the AI Regulatory Council. Such alignment ensures that the use of AI in cybersecurity respects the constitutional balance between enhancing technological capabilities and protecting human rights and jobs. 

Thus, the cyber risk committees may act as the crucial link between global AI governance frameworks and day-to-day corporate cybersecurity practices, ensuring both are aligned with the constitutional goals of safety, transparency, and public welfare.

2. Educational and training programs need to be more aligned with the real-time demands of industries affected by AI and cybersecurity needs. This alignment includes updating curricula, increasing access to hands-on training, and providing more flexible learning pathways that can attract a broader demographic, including those traditionally underrepresented in tech.

Moreover, companies and governments must foster an organizational culture that values diversity, supports continual learning, and offers a work environment conducive to long-term career development. 

The EU’s cybersecurity initiatives like the GDPR and the NIS Directive aim to protect data and ensure system integrity, which has indirect implications for labor markets, particularly in sectors like IT and cybersecurity. For instance, as cyber threats evolve, there's a marked increase in demand for cybersecurity professionals per se. 

Nevertheless, according to the 2023 ISC2 Cybersecurity Workforce Study, the global cybersecurity workforce gap continues to grow. The study estimates that the profession needs to nearly double in capacity to effectively defend organizations' critical assets [9]. This demand underscores the need for educational systems to adapt, aligning with the AI Constitution's call for upskilling and workforce readiness.

The AI Constitution and the World Economic Forum both emphasize the critical role of education in preparing for a technology-driven future. Upskilling and reskilling are vital for economic competitiveness, with digital literacy becoming as fundamental as traditional literacy. Correlating this with AI integration, it's clear that as AI technologies evolve, so too must the educational strategies that support workforce development. For instance, the European Commission has invested in digital education and training as part of its Digital Education Action Plan [10], which aims to enhance learning for all sectors of the economy. 

The Digital Education Action Plan 2021-2027 set forth by the European Commission commendably recognizes the need to integrate digital technologies, including AI, into education and training systems. However, it could further enhance its approach by establishing a more explicit and detailed framework for AI education that addresses both the technical and ethical dimensions of AI across all educational levels. As AI's influence continues to expand at an exponential rate, with a high probability of surpassing initial expectations within the 2021-2027 timeframe, there is a global imperative to scale up this approach. By ensuring that educational systems worldwide are equipped to foster a deep understanding of AI, societies can become more resilient and better prepared to engage with and shape the trajectory of AI development. This comprehensive educational strategy will not only align with the evolving demands of the labor market but also uphold and strengthen trust in AI technologies, ensuring that their deployment enhances societal well-being.

In addition, it is critical to shed light on the subject of XR, or Extended Reality, which has begun to reshape education, inviting students to walk through the storming of the Bastille or to dismantle a geometric shape with their fingertips [11]. In particular, the revelations by Kumarapeli, Jung, and Lindeman into the VR world aren’t just a warning signal; they're an alarm bell [12]. Their work meticulously maps out the reality that when we don the VR headset, we may inadvertently open a Pandora’s box of privacy invasion.

Currently, we’re teaching our students, our future, that with a tap they can soar through the stars, but what we’re failing to teach—what we must urgently integrate into the very fabric of our education—is the knowledge of how that same technology can turn against them.

The aforementioned study delves into the privacy risks of behavior-based identity detection in virtual reality (VR), utilizing machine learning algorithms to assess their accuracy in identifying users. The researchers gathered data on participants' movements, eye gaze, and head movements during various VR tasks to evaluate the effectiveness of these algorithms in recognizing users across different sessions and tasks. 

Their findings reveal a high level of accuracy in identification, ranging from 78% to 83%, highlighting significant privacy concerns.
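To make the mechanics of such re-identification tangible, here is a minimal sketch assuming scikit-learn and entirely synthetic behavioural features (stand-ins for gaze and head-movement statistics); it is not the authors' pipeline, whose features, models, and protocol differ, but it shows why stable behavioural signatures act as fingerprints.

```python
# A minimal sketch of behaviour-based re-identification of the kind the
# study describes. Features are purely synthetic stand-ins for gaze and
# head-movement statistics; this is illustrative, not the authors' method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_USERS, SAMPLES_PER_USER, N_FEATURES = 20, 50, 8

# Each simulated user has a stable behavioural "signature" plus session noise.
signatures = rng.normal(size=(N_USERS, N_FEATURES))
X = np.repeat(signatures, SAMPLES_PER_USER, axis=0) + rng.normal(
    scale=0.7, size=(N_USERS * SAMPLES_PER_USER, N_FEATURES)
)
y = np.repeat(np.arange(N_USERS), SAMPLES_PER_USER)

# Hold out samples to mimic identifying users in an unseen session.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# High accuracy here mirrors the privacy risk: behaviour acts as a fingerprint.
print(f"Re-identification accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```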

The implications of this study are profound, as it suggests that once behavioral data is captured in VR, individuals can be identified with considerable accuracy, even when they attempt to disguise their behaviors. 

The risk is not hypothetical — it’s real and present. When behavior becomes a barcode, an identity token, this 83% accuracy isn’t just a statistic; it's a breach waiting to happen.

This poses risks such as targeted advertising, identity theft, and other privacy invasions.

Thus, before embracing the boundless possibilities of VR and AR in educational settings and beyond, it is imperative to establish stringent cybersecurity protocols. The integration of these immersive technologies must be preceded by comprehensive assessments and fortifications of their digital infrastructures to prevent unauthorized access and data breaches. Ensuring the highest level of cybersecurity is not merely a precaution—it is a fundamental requirement to protect users from the potential misuse of their behavioral data.

Building on the extensive implications of XR technology as highlighted afore, another study further explores the role of VR in workplace dynamics, inter alia in mitigating workplace envy by simulating situations where employees experience different statuses — privileged and non-privileged [13]. It found that co-worker acceptance of privileges significantly reduces envy among employees, and this relationship is mediated by the anticipated ostracism. Specifically, privileged employees worried more about ostracism, impacting their feelings of envy when their status was not accepted by peers. This effect was quantified in the study, showing that co-worker acceptance directly influenced participants’ anticipation of ostracism (β = 0.99, SE = 0.45, p = .030) and was linked to reduced envy in the workplace (β = −0.51, SE = 0.15, p < .001).
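For readers unfamiliar with such path coefficients, the sketch below illustrates, on synthetic data, the two-regression logic from which values like β, SE, and p are estimated in a simple mediation model; the variable names, effect sizes, and the use of statsmodels are assumptions for demonstration, not the study's actual data or analysis.

```python
# A minimal sketch of simple mediation estimation: regress the mediator
# (anticipated ostracism) on the treatment (co-worker acceptance), then the
# outcome (envy) on both. All data below are synthetic and illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
acceptance = rng.integers(0, 2, size=n).astype(float)            # 0/1 manipulation
ostracism = 1.0 * acceptance + rng.normal(scale=2.0, size=n)     # path a
envy = -0.5 * ostracism + rng.normal(scale=1.0, size=n)          # path b

# Path a: does acceptance predict anticipated ostracism?
m_a = sm.OLS(ostracism, sm.add_constant(acceptance)).fit()
# Path b: does anticipated ostracism predict envy, controlling for acceptance?
m_b = sm.OLS(envy, sm.add_constant(np.column_stack([ostracism, acceptance]))).fit()

print(m_a.params, m_a.bse, m_a.pvalues)  # beta, SE, p for path a
print(m_b.params, m_b.bse, m_b.pvalues)  # beta, SE, p for the mediator and treatment
```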

These findings suggest that VR can be an effective tool in anticrisis personnel management and training within educational settings by addressing the psychological aspects of workplace dynamics. By using VR simulations, organizations can prepare employees for real-world scenarios that involve differential treatment, potentially reducing the negative impacts of such practices and enhancing overall workplace harmony. This aligns with the broader educational use of XR technologies to foster realistic, immersive learning environments while emphasizing the need for stringent cybersecurity measures to protect users' data.

 

Synthesizing the facts illuminated afore, it may be deduced that the cutting-edge realm of cybersecurity encounters significant challenges with the full potential to magnify lacunas in virtual codes as well as in legal ones. As the global landscape shifts undeniably towards a more AI-centric environment, the role of AI in national and international security contexts cannot be overstated; it must be prognosticated, and AI's capacity placed in the hands of the rule of law, where cybersecurity standards are elevated to unprecedented levels.

The AI Constitution, with its robust guidelines and governance provisions, supports the creation of dedicated ad-hoc authorities whose aim is quintessential for ensuring that AI deployment in cybersecurity adheres to international legal standards and operates within a framework that respects and enhances human labor among other fundamental rights and freedoms. What is more, among the array of cybersecurity problems addressed by the regulatory and management mechanisms of Artificial Intelligence proposed by Polina Prianykova, it is entirely feasible to resolve the egregious situations involving the criminal cases of the child pornography industry, which have recently become more widespread and are increasingly reported in the media [14, 15].

Furthermore, the AI Constitution not only delineates the boundaries for AI integration but also actively promotes the development of new job roles. These roles are designed to bridge the gap between traditional cybersecurity tasks and AI-driven processes, thereby fostering a hybrid workforce adept at managing both emerging threats and routine security protocols. This approach not only mitigates the risk of job displacement due to automation but also enhances the skill sets within the industry, preparing employees for a future where AI partnership is the norm.

In this way, the AI Constitution acts as both a shield and a beacon, guiding the digital realm through its evolution while steadfastly guarding the human and AI elements at its core.

References:

 

1) Prianykova, P., 2024, AI Constitution (full version), available at: https://www.prianykova-defender.com/ai-constitution-full-version-polina-prianykova (Accessed 21 April 2024).

2) Prianykova, P., 2024, AU consultative session, April 19, 2024, available at: https://www.prianykova-defender.com/au-consultative-session-april-19-2024 (Accessed 21 April 2024).

3) World Economic Forum, 2024, Intelligence Hub, available at: https://intelligence.weforum.org (Accessed 21 April 2024).

4) National Health Executive, 2018, WannaCry cyber attack cost NHS £92m after 19,000 appointments were cancelled, available at: https://www.nationalhealthexecutive.com/articles/wannacry-cyber-attack-cost-nhs-ps92m-after-19000-appointments-were-cancelled (Accessed 21 April 2024).

5) Schackner, B., 2024, Carnegie Mellon University hit by cyberattack, informs 7,300 people possibly affected, available at: https://triblive.com/business/technology/carnegie-mellon-university-hit-by-cyberattack-informs-7300-people-possible-affected/ (Accessed 21 April 2024).

6) ISC2, 2024, The Real-World Impact of AI on Cybersecurity Professionals, available at: https://www.isc2.org/Insights/2024/02/The-Real-World-Impact-of-AI-on-Cybersecurity-Professionals (Accessed 21 April 2024).

7) PwC, 2017, Report: PwC AI analysis - Sizing the prize, available at: https://www.pwc.com/gx/en/news-room/docs/report-pwc-ai-analysis-sizing-the-prize.pdf (Accessed 21 April 2024).

8) Alspach, K., 2024, Cybersecurity layoffs in 2024: Companies that cut jobs in Q1, available at: https://www.crn.com/news/security/2024/cybersecurity-layoffs-in-2024-companies-that-cut-jobs-in-q1 (Accessed 21 April 2024).

9) ISC2, 2023, ISC2 cybersecurity workforce study: How the economy, skills gap, and artificial intelligence are challenging the global cybersecurity workforce, available at: https://media.isc2.org/-/media/Project/ISC2/Main/Media/documents/research/ISC2_Cybersecurity_Workforce_Study_2023.pdf?rev=28b46de71ce24e6ab7705f6e3da8637e (Accessed 21 April 2024).

10) European Commission, 2020, Digital Education Action Plan 2021-2027: Resetting education and training for the digital age, COM(2020) 624 final, Document 52020DC0624, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020DC0624 (Accessed 21 April 2024).

11) Sánchez Soriano, M.J., 2024, Seis usos de la realidad aumentada en clase con potencial para transformar la enseñanza, The Conversation, available at: https://theconversation.com/seis-usos-de-la-realidad-aumentada-en-clase-con-potencial-para-transformar-la-ensenanza-217107 (Accessed 21 April 2024).

12) Kumarapeli, D., Jung, S. & Lindeman, R.W., 2024, Privacy threats of behaviour identity detection in VR, Front. Virtual Real., vol. 5, sec. Virtual Reality in Industry, DOI: https://doi.org/10.3389/frvir.2024.1197547 (Accessed 21 April 2024).

13) Van Zelderen, A.P.A., Dries, N. & Menges, J., 2024, The curse of employee privilege: harnessing virtual reality technology to inhibit workplace envy, Front. Virtual Real., vol. 5, sec. Virtual Reality and Human Behaviour, DOI: https://doi.org/10.3389/frvir.2024.1260910 (Accessed 21 April 2024).

14) Plasencia, A., 2 April 2024, Criminals using artificial intelligence to create child pornography: FBI, FOX 13 News, available at: https://www.fox13news.com/news/criminals-using-artificial-intelligence-to-create-child-pornography-fbi (Accessed 21 April 2024).

15) Sosa, A., 15 April 2024, AI-generated child pornography is circulating. This California prosecutor wants to make it illegal, Los Angeles Times, available at: https://www.latimes.com/california/story/2024-04-15/ai-generated-child-pornography-is-circulating-this-california-prosecutor-wants-to-make-it-illegal (Accessed 21 April 2024).

16) Prianykova, P., 2024, AI Constitution, FrancoPak, Kyiv, pp. 48-52, 392 pp.

17) Prianykova, P., 2022, Voluntary global acceptance of fundamental Human Rights’ limitations in the age of AI automation and deployment of trailblazing technologies, available at: https://www.prianykova-defender.com/labour-law-world-economy-ai (Accessed 21 April 2024).

Officially Published: April 23 - 26, 2024, Zagreb, Croatia (Table of Contents, №15)

https://isg-konf.com/wp-content/uploads/2024/04/INNOVATIONS-IN-EDUCATION-PROBLEMS-PROSPECTS-AND-ANSWERS-TO-TODAYS-CHALLENGES.pdf

© Polina Prianykova. All rights reserved.
