Polina Prianykova
President of the Global AI Center,
International Human Rights Defender on AI,
Author of the First AI Constitution in World History

This academic paper has been officially submitted to the United Nations Office on Drugs and Crime (UNODC) as part of the 2025 Joint Constructive Dialogue on Technical Assistance and International Cooperation, held under the auspices of the UN Convention against Transnational Organized Crime (UNTOC).
The Joint Constructive Dialogue with relevant stakeholders, including non-governmental organizations and academic institutions, was convened on Monday, 2 June 2025, following the 16th meetings of the Working Group of Government Experts on Technical Assistance and the Working Group on International Cooperation, in accordance with paragraph 53 of the Procedures and Rules for the Functioning of the UNTOC Review Mechanism.
This written submission [1] represents the official contribution of the Global Scientific Center for Strategic Research on Artificial Intelligence POLLYPRIANY to Panel 1 of the Dialogue, thematically dedicated to:
“Prevention of organized crime through public-private partnerships and socioeconomic, cultural and behavioural pathways, with a focus on organized fraud.”
The data, analysis, and legislative recommendations presented herein were developed exclusively by the Global AI Center POLLYPRIANY for this forum and form part of an ongoing juridical and scientific initiative to align predictive Artificial Intelligence with international anti-fraud and human rights norms.
Keywords:
Global Scientific Center for Strategic Research on Artificial Intelligence POLLYPRIANY; AI Constitution by POLINA PRIANYKOVA; Supranational Protocol on Responsible AI Usage and Labor Rights; Algorithmic Governance in Public–Private Partnerships; Fraud Prevention.
Introductory Statement.
On behalf of the Global AI Center POLLYPRIANY, we are honored to participate in the 2025 Joint Constructive Dialogue on Technical Assistance and International Cooperation convened under the auspices of the United Nations Convention against Transnational Organized Crime (UNTOC).
As an independent international scientific institution, the Global AI Center POLLYPRIANY is committed to pioneering legal frameworks, scientific foresight, and AI-friendly governance models for the digital age. Drawing from our interdisciplinary expertise in law, ethics, economics, and AI systems engineering, we advance legally sound, human-centric proposals for the global regulation of Artificial Intelligence.
Among our constitutional initiatives is the Artificial Intelligence Constitution (2023), a landmark document created by POLINA PRIANYKOVA, and officially submitted to the United Nations during the Global Digital Compact process, addressed to the Co-Facilitators and the UN Secretary-General [2]. The AI Constitution serves as a juridical beacon and philosophical reference point for our proposals — including those presented herein — offering a structured vision of lawful AI development, state oversight, and digital sovereignty.
Building on that high-level constitutional vision, we have also authored and introduced the Supranational Protocol on Responsible AI Usage and Labor Rights (2024), which complements the Constitution by offering immediately actionable regulatory mechanisms for States and international actors [3]. The Protocol focuses on fraud-sensitive domains where AI intersects with public-private governance — particularly labor rights, welfare allocation, digital procurement, and financial oversight.
Our contribution to this Dialogue is both principled and pragmatic: it draws from the normative foundations of the AI Constitution while integrating the operational instruments, audit models, and anti-fraud clauses articulated in the Protocol. Together, these two documents form a soft-law constitutional scaffold for preventing algorithmically enabled fraud and reinforcing democratic oversight in AI-powered PPPs.
In light of this year’s focus — the prevention of organized crime through public-private partnerships and sociocultural pathways, with an emphasis on organized fraud — the Global AI Center POLLYPRIANY respectfully submits a set of tangible, constitutionally grounded recommendations aimed at:
- Legislatively embedding AI-enabled predictive analytics into government–industry anti-fraud cooperation;
- Institutionalizing AI fraud foresight units under national and international regulatory mandates;
- Codifying behavioural and sociocultural risk assessment frameworks to prevent algorithmic bias;
- Supporting the drafting of a Model Law on the AI–Fraud Nexus;
- Initiating the creation of a UN-certified international registry of financial crime AI systems to ensure algorithmic transparency and compliance with human rights safeguards.
These proposals are based on the legal doctrine and scholarly work of the Center and its three constitutional research arms [4]:
- AI Institute on Proactive Space Strategies and Innovations;
- AI Institute on Digital Economy and Eco-Cybersecurity;
- AI Institute on Advanced Intellectual Property Law and Ethical Governance.
Our team includes jurists, economists, philosophers, engineers, and ethics scholars. We believe that the inclusion of constitutionally compliant Artificial Intelligence within PPPs can offer a historic leap forward in the global prevention of organized fraud and the institutionalization of peaceful, human-centric digital order.
I. Codification of AI-Led Public–Private Partnership (PPP) Governance Models.
Legal Context and Normative Justification.
In accordance with the normative corpus advanced within the Artificial Intelligence Constitution — specifically Articles 1.9, 2.3, 13.1, and 17.1 — algorithmic systems and their underlying data architectures are to be construed as digital public goods, falling within the sovereign regulatory domain of the State [5]. Their deployment within public-private partnerships (PPPs), particularly in domains susceptible to systemic abuse, mandates not only monopolistic oversight by the State, but also constitutionally structured legal accountability and socio-ethical safeguards against algorithmic exploitation and harm.
Complementing this constitutional foundation, the Supranational Protocol on Responsible AI Usage and Labor Rights introduces binding normative guidelines for the application of AI in fraud-sensitive infrastructures such as public procurement, digital finance, and welfare distribution. These frameworks collectively ground a supranational logic in which AI-enabled PPPs become not merely technological collaborations but instruments of prophylactic statecraft against organized fraud.
Legislative and Regulatory Proposals.
1. Statutory Mandate for Predictive AI in PPPs.
UNTOC States Parties shall promulgate national legislation — herein proposed as an AI–PPP Prevention Statute — mandating the integration of AI-powered predictive analytics within PPP frameworks operating in domains with elevated historical exposure to organized fraud schemes. Such domains include but are not limited to:
- Public procurement and digital tendering ecosystems [6];
- Digital identity verification and national civil registries;
- Social benefit distribution networks (cash transfers, pensions, subsidies);
- Digital banking, e-wallets, and fintech infrastructure.
This statutory obligation shall form the cornerstone of fraud-interruption architecture, enabling anticipatory identification of anomalous patterns, identity clustering, phantom accounts, procurement overinflation, and other emblematic behaviors of organized criminal operations.
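By way of illustration only, the sketch below shows the kind of anticipatory pattern analysis such a statute could mandate: an unsupervised isolation forest flags statistically anomalous procurement awards for human review. All field names, figures, and thresholds are hypothetical assumptions, not prescriptions of any particular vendor or model.

```python
# Illustrative sketch of anticipatory anomaly flagging in procurement data.
# All features, records, and thresholds are hypothetical; a deployed system
# would require audited, jurisdiction-specific models and data governance.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per contract award:
# [award_amount / budget_estimate, number_of_bidders, days_from_tender_to_award]
awards = np.array([
    [1.02, 6, 41],
    [0.98, 5, 38],
    [1.05, 7, 44],
    [2.90, 1, 3],   # overinflated single-bid award on a rushed timeline
    [1.01, 6, 40],
])

# contamination = assumed share of anomalous awards (an illustrative guess)
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(awards)  # -1 = anomalous, 1 = normal

for i, label in enumerate(labels):
    if label == -1:
        print(f"Award {i} flagged for human review (a signal, not proof of fraud).")
```

The design point is legal as much as technical: the model's output is a trigger for human review, never an adjudication, consistent with the due-process safeguards elaborated throughout this paper.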
2. Establishment of National Digital Fraud Prevention Registries.
To prevent covert deployment and algorithmic laundering, each Member State shall establish a National Digital Fraud Prevention Registry under the supervision of an AI Regulatory Executor (cf. AI Constitution, Art. 26). This Registry shall act as the centralized audit and transparency node for all AI predictive models deployed within PPPs. Its legal mandate shall include:
- Statutory registration of all deployed AI systems operating in fraud-vulnerable PPPs;
- Biannual independent audits evaluating (a minimal metric computation is sketched after this list):
  - Algorithmic fairness and demographic parity compliance;
  - Precision rates and statistical reliability thresholds;
  - False-positive and false-negative indices disaggregated across protected social categories;
  - Alignment with national anti-discrimination and international human rights obligations;
- Audits shall be legally admissible in procurement disputes, corruption inquiries, and fraud-related prosecutions under the national criminal code.
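As one concrete illustration of the disaggregated indices named above, the following sketch computes false-positive and false-negative rates per protected group from logged system decisions. The records and group labels are hypothetical; a statutory audit would operate on data obtained under the Registry's access powers.

```python
# Sketch of one audit metric: false-positive (FPR) and false-negative (FNR)
# rates disaggregated across protected social categories. All records are
# hypothetical illustrations only.
from collections import defaultdict

# Each record: (protected_group, actually_fraudulent, flagged_by_system)
decisions = [
    ("group_a", False, True),  ("group_a", False, False),
    ("group_a", True,  True),  ("group_b", False, True),
    ("group_b", False, True),  ("group_b", True,  False),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, actual, flagged in decisions:
    c = counts[group]
    if actual:
        c["pos"] += 1
        c["fn"] += (not flagged)
    else:
        c["neg"] += 1
        c["fp"] += flagged

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    print(f"{group}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# A persistent FPR gap between groups (here 0.50 vs 1.00) is exactly the
# kind of disparity a biannual audit would have to surface and remediate.
```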
3. Enforceable Transparency Obligations within PPP Contractual Frameworks.
Every contractual arrangement governing an AI-enabled PPP shall include a mandatory Transparency Annex, enforceable through civil and administrative remedies. This annex shall obligate private contractors and technical vendors to disclose:
- Full provenance documentation of training datasets, including geographic, demographic, and temporal coverage;
- Documentation of model evolution, including major updates, threshold recalibrations, and adaptive logic modifications;
- An intelligible, non-technical explanation of decision-making processes, classification parameters, and adjudication logic;
- A Social Impact Statement, as defined in Articles 6.6 and 9.6 of the AI Constitution, demonstrating anticipated impact on vulnerable, marginalized, or legally protected populations, including mitigation measures and redress procedures.
Failure to comply with these obligations shall constitute grounds for procurement disqualification, financial penalties, and/or suspension from further engagement in state-affiliated digital infrastructure projects.
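A Transparency Annex of this kind lends itself to automated completeness screening at the procurement gateway. The sketch below checks a vendor submission against the four disclosure obligations; the document structure and field names are assumptions made for illustration.

```python
# Sketch of an automated completeness check for the Transparency Annex.
# The required items mirror the four disclosure obligations above; the
# submission format itself is a hypothetical assumption.
REQUIRED_ANNEX_ITEMS = {
    "dataset_provenance",        # geographic, demographic, temporal coverage
    "model_evolution_log",       # updates, recalibrations, logic changes
    "plain_language_explanation",
    "social_impact_statement",   # per Arts. 6.6 and 9.6 of the AI Constitution
}

def annex_deficiencies(submitted: dict) -> list[str]:
    """Return the disclosure items that are missing or empty in a submission."""
    return sorted(item for item in REQUIRED_ANNEX_ITEMS if not submitted.get(item))

vendor_submission = {"dataset_provenance": "...", "model_evolution_log": ""}
missing = annex_deficiencies(vendor_submission)
if missing:
    print("Grounds for procurement review:", ", ".join(missing))
```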
4. Oversight Architecture: Tri-Sectoral Governance with Constitutional Quorum.
To safeguard pluralism and prevent collusion or capture within AI-governed PPPs, an institutionalized Tri-Sectoral Oversight Board shall be established per partnership, composed of:
- State representatives (from data protection authorities, AI ethics councils, and anti-corruption units);
- Private-sector stakeholders (contracting firms, AI system providers, auditors);
- Civil society delegates (legal scholars, AI ethicists, community watchdogs, labor rights advocates).
All material decisions, including system deployment approval, audit acceptance, and emergency system suspension, shall be subject to the ¾ + 1 quorum rule. This ensures constitutional legitimacy, safeguards minority opinion, and institutionalizes friction as a barrier to unchecked automation.
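Because the paper does not fix a rounding convention for the ¾ + 1 rule, the sketch below assumes it means at least three quarters of all board members, rounded down, plus one additional vote.

```python
# Sketch of the tri-sectoral quorum rule, under the stated assumption:
# approval requires floor(0.75 * total_members) + 1 votes in favor.
import math

def material_decision_approved(votes_in_favor: int, total_members: int) -> bool:
    threshold = math.floor(0.75 * total_members) + 1
    return votes_in_favor >= threshold

# A 12-member board: 9 votes (three quarters) + 1 = 10 votes required.
print(material_decision_approved(votes_in_favor=10, total_members=12))  # True
print(material_decision_approved(votes_in_favor=9, total_members=12))   # False
```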
II. Institutionalization of National AI–Fraud Foresight and Prevention Divisions (FFPDs).
Legal Rationale and Doctrinal Foundation.
Pursuant to Articles 6.2, 9.5, 20.1.5, and 24.1.9 of the Artificial Intelligence Constitution, UNTOC States Parties are urged to codify anticipatory AI governance mechanisms as instruments of lawful foresight, particularly in fraud-vulnerable sectors where pre-incident intelligence can mitigate systemic abuse.
Complementarily, the Supranational Protocol on Responsible AI Usage and Labor Rights mandates the adoption of “pre-incident regulatory modeling” as a legally enforceable standard of care in AI deployment across both public welfare systems and digital economic infrastructure. Within this legal-theoretical framework, the institutionalization of Fraud Foresight and Prevention Divisions (FFPDs) emerges not as an administrative innovation, but as a constitutional imperative to prevent the crystallization of organized fraud patterns before they achieve operational viability.
Implementation Framework and Normative Configuration.
1. Statutory Establishment of FFPDs.
Each State Party shall enact enabling legislation establishing a Fraud Foresight and Prevention Division (FFPD) as an independent body embedded within the national AI Synergetic Center, as defined in Article 26.1.4 of the AI Constitution. The FFPD shall be entrusted with:
- Coordinating with AI system providers, government procurement entities, and regulatory enforcement agencies;
- Executing real-time computational surveillance over:
  - Behavioral micro-pattern shifts in welfare and financial platforms;
  - Emergent anomalies in cross-jurisdictional transaction flows;
  - Volatility indicators within crypto-assets and token-based ecosystems;
  - Socio-cultural trust breaches manifesting as fraud-enabling behavior (e.g., impersonation economies, document laundering, identity multiplexing).
The legal character of the FFPD shall be hybrid–preventive: part regulatory agency, part technical observatory, and part cross-border fraud signal router.
2. Functional Mandate and Analytical Toolsets.
The core operative functions of each FFPD shall be constitutionally circumscribed and technologically robust. They shall include:
- Generation of simulation-based forecasting models to predict the adaptive evolution of organized fraud [7] in:
  - Social benefit fraud ecosystems;
  - Synthetic identity construction networks;
  - Cryptocurrency anonymization and mixer abuse;
  - Shell procurement and subcontractor laundering;
- Activation of an Early-Warning Architecture, issuing tiered alerts to government and industry stakeholders upon detection of statistically significant fraud signal patterns (a minimal tiering sketch follows this list);
- Participation in international data exchange and coordination through bilateral or multilateral frameworks, ensuring lawful interoperability and reciprocal access to relevant fraud intelligence.
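The tier structure of such an Early-Warning Architecture is left open by this paper; the sketch below assumes a three-tier scheme keyed to a normalized fraud-signal score, with all tier names, cut-offs, and recipient lists invented for illustration.

```python
# Sketch of a tiered early-warning dispatcher for an FFPD. Tier labels,
# score scale, cut-offs, and recipients are hypothetical assumptions; each
# FFPD would calibrate thresholds against audited historical baselines.
TIERS = [  # (minimum fraud-signal score, tier label, recipients)
    (0.9, "TIER-1 URGENT",   ["regulator", "law_enforcement", "ppp_operator"]),
    (0.7, "TIER-2 ELEVATED", ["regulator", "ppp_operator"]),
    (0.5, "TIER-3 ADVISORY", ["ppp_operator"]),
]

def issue_alert(signal_score: float, pattern_id: str) -> str | None:
    """Map a statistically significant fraud-signal score to a tiered alert."""
    for minimum, tier, recipients in TIERS:
        if signal_score >= minimum:
            return f"{tier}: pattern {pattern_id} -> notify {', '.join(recipients)}"
    return None  # below alerting thresholds; retained for trend analysis

print(issue_alert(0.82, "identity-multiplexing-cluster-17"))
```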
3. Ethical Compliance Protocol and Constitutional Review.
To forestall algorithmic overreach and institutional abuse, all outputs and decision-support data generated by the FFPD shall be subject to constitutional compliance auditing under the following provisions of the AI Constitution:
- Articles 1.8–1.9: Prohibition against adversarial or predatory AI use vis-à-vis humanity;
- Article 3: Rule of law, procedural fairness, and algorithmic due process;
- Articles 4.3, 10.6, 11.6: Protection of cultural heritage, personal identity, and bodily dignity.
All models and outputs must undergo ex-ante validation and ex-post oversight by the AI Regulatory Arbitrators, pursuant to Article 27, ensuring consistency with both domestic civil liberties and international human rights instruments.
4. Budgetary Entrenchment and Whistleblower Safeguards.
To insulate the FFPD from politicization, budgetary starvation, or regulatory capture, the following financial and legal provisions shall apply:
- The Division shall receive a ring-fenced allocation within each State Party’s national AI infrastructure budget (Art. 21.1.1), renewable every fiscal year based on a multi-year performance audit;
- Legislative codification of whistleblower protections under Article 17.5 shall be mandatory, with provisions for:
  - Anonymous internal reporting of ethical or legal violations;
  - Legal standing for conscientious objectors refusing to comply with exploitative or unlawful AI operations;
  - Cross-border immunity clauses for whistleblowers cooperating in joint investigative operations under UNTOC’s mutual assistance protocols.
The formal establishment of Fraud Foresight and Prevention Divisions (FFPDs) within AI-governed PPP frameworks represents a juridically actionable response to the algorithmic acceleration of organized financial crime. These units provide an institutionalized bridge between technical vigilance and legal foresight, anchoring AI deployments within a framework of constitutional integrity, international legality, and anticipatory governance. Their success depends not only on computational sophistication, but on the juridical legitimacy they derive from proactive statutory entrenchment.
III. Legislative Translation Pathways and Harmonization Mechanisms.
Normative Objective.
To operationalize the preceding recommendations into enforceable legislative instruments across diverse jurisdictions, this section proposes a dual-track legislative strategy. It combines a soft-law reference architecture grounded in the Artificial Intelligence Constitution with binding domestic and transnational codification measures aligned with UNTOC mandates and broader principles of international law.
1. Drafting and Adoption of a Model Law on Predictive AI in Anti-Fraud Public–Private Partnerships (PPP-AI-Fraud Law).
In consultation with UNODC, OHCHR, and relevant treaty bodies, UNTOC States Parties are encouraged to initiate the development of a UN-endorsed Model Law, serving as a legislative prototype for national uptake. This Model Law shall:
- Reference the Artificial Intelligence Constitution as a guiding soft-law document outlining constitutional AI principles for lawful, ethical, and sovereignty-preserving AI deployment in public infrastructure;
- Mandate the establishment of Digital Fraud Prevention Registries, ensuring that all AI-driven fraud prevention systems are lawfully registered, subject to regular auditing, and compliant with fundamental rights protections;
- Legally require the institutionalization of Fraud Foresight and Prevention Divisions (FFPDs) within national regulatory architecture, in accordance with Recommendation II;
- Impose proportionate liability standards on private actors participating in PPPs using predictive AI tools, including obligations for transparent data practices, explainability, and public accountability;
- Ensure legal alignment with international public law, digital rights instruments, and regional data protection frameworks.
2. Vertical Integration into Domestic and Multilateral Legal Instruments.
To operationalize constitutional safeguards at scale, States Parties should:
- Amend national public procurement legislation to incorporate enforceable requirements for “Constitutionally Compliant Predictive Intelligence,” ensuring algorithmic accountability, explainability, and social impact disclosure;
- Embed these legal standards into:
  - Development aid contracts where AI is deployed in service delivery;
  - Bilateral cooperation frameworks governing digital governance and anti-corruption initiatives;
  - Public–private projects financed by multilateral development banks.
IV. Development of Socio-Cultural and Behavioural Risk Maps for AI Deployment in PPPs.
Legal & Epistemological Rationale.
The deployment of Artificial Intelligence within public-private partnerships — especially those operating in fraud-sensitive domains such as digital identification, welfare distribution, and financial eligibility — must not occur in a sociocultural vacuum. Algorithmic systems, when deployed without contextual safeguards, may inadvertently reinforce systemic bias, misinterpret behavioral norms, or result in disproportionate exclusion.
To mitigate such harms and to pre-empt organized fraud networks from exploiting algorithmic misclassification, the Supranational Protocol on Responsible AI Usage and Labor Rights mandates the integration of a Human Vulnerability Impact Layer into all AI systems used for resource allocation, public trust certification, or state-sanctioned identification.
Algorithmic misrecognition is not only an ethical risk — it is a criminogenic pathway. When culturally embedded behaviors are misclassified as deviant, or when demographic groups are routinely denied services due to AI profiling, such disenfranchisement creates a vacuum in which organized fraud actors may recruit, deceive, or exploit the digitally excluded [8]. Therefore, cultural integrity and behavioral calibration are not ancillary; they are central to lawful and peaceful AI deployment.
Empirical and Policy Measures.
1. Mandate for Cultural and Behavioral Risk Mapping in AI-Driven PPPs.
- Every AI-enabled PPP initiative operating in fraud-vulnerable sectors shall be preceded by a Cultural and Behavioral Risk Mapping Study, conducted by an interdisciplinary working group comprising data scientists, legal scholars, sociologists, economists, and cultural anthropologists.
- These studies shall identify and document:
  - Culturally sensitive trigger terms or behavior patterns that may be misread by AI systems;
  - Variances in risk perception across demographic, linguistic, and religious communities;
  - Historical patterns of algorithmic mistrust, community disengagement, or passive disobedience;
  - Intersectional vulnerabilities (e.g., stateless persons, single mothers, displaced youth, informal workers) that correlate with susceptibility to fraud victimization or recruitment.
- The Risk Mapping Study must also include a Predictive Taxonomy of how such misclassifications may be:
  - Exploited by organized fraud networks (e.g., synthetic identity creation, phantom enrollment in benefits systems);
  - Used as camouflage by transnational fraud actors operating across cultural fault lines;
  - Connected to historical patterns of digital exclusion or systemic bias.
2. Integration into Algorithmic Design, System Logic, and Community Interaction.
- Cultural risk maps shall be encoded directly into the operational logic of AI systems and reviewed by the AI Regulatory Executor (per Art. 26.1.5), with the following mandates (a minimal calibration sketch follows this list):
  - Prevent the misclassification of culturally normal behavior as deviant or fraudulent;
  - Flag latent algorithmic discrimination or enforcement disproportionality;
  - Calibrate outputs using localized ethical reasoning while maintaining legal consistency and human rights standards.
- AI systems deployed in regions with high organized fraud prevalence must include a Community Harm Reduction Interface, enabling:
  - Locally adapted prompts or visual alerts when behavior is misread by the algorithm;
  - Educational modules informing users of common fraud manipulation tactics and preventive steps;
  - Real-time connectivity to local civil society actors working on digital literacy, social protection, or anti-fraud awareness.
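One plausible way to encode a cultural risk map into system logic is as a calibration layer that raises the flagging threshold where a documented, locally normal behavior is involved. The sketch below assumes such a map; every region, behavior, and number is a hypothetical illustration, not a finding.

```python
# Sketch of culturally calibrated flagging. The risk map stands in for the
# output of a (hypothetical) Cultural and Behavioral Risk Mapping Study;
# all entries and figures are illustrative assumptions.
BASE_THRESHOLD = 0.80

CULTURAL_RISK_MAP = {
    "region_x": {"cash_gift_transfers": 0.10, "shared_household_ids": 0.05},
    "region_y": {"seasonal_bulk_withdrawals": 0.08},
}

def calibrated_flag(score: float, region: str, behaviors: list[str]) -> bool:
    """Flag only when the score exceeds the culturally calibrated threshold."""
    adjustments = CULTURAL_RISK_MAP.get(region, {})
    threshold = BASE_THRESHOLD + sum(adjustments.get(b, 0.0) for b in behaviors)
    return score > threshold

# A transaction involving customary cash gifts in region_x now requires a
# score above 0.90 (rather than 0.80) before it is flagged for review.
print(calibrated_flag(0.85, "region_x", ["cash_gift_transfers"]))  # False
```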
3. Constitutional Review and Supranational Intelligence Loop.
- Outputs from the Risk Mapping Study shall be evaluated under the Digital Dignity Standard.
- Prior to deployment, each system must undergo a Cultural Integrity Review by a tri-sectoral oversight board (composed of state regulators, private-sector developers, and civil society experts), following quorum rules and the ¾ + 1 principle (see Recommendation I).
- Where identified behavioral patterns match known transnational fraud typologies (e.g., crypto laundering behavior clusters, biometric spoofing indicators), the flagged intelligence shall be automatically routed to the national Fraud Foresight and Prevention Division (FFPD) (see Recommendation II), and cross-checked under international cooperation protocols compliant with Art. 24.1.4.
V. Establishment of a Transnational AI–Fraud Intelligence Grid (TAFIG).
Legal Justification.
Given the inherently cross-border structure of organized fraud networks, the necessity for supranational coordination of AI-led intelligence mechanisms is both urgent and legally defensible. As articulated in Articles 24.1.3, 24.1.4, and 26.1.7 of the Artificial Intelligence Constitution, States are urged to establish digital cooperation regimes, ensuring lawful interoperability across national boundaries. This aligns with the Supranational Protocol on Responsible AI Usage and Labor Rights, which mandates harmonized digital due diligence procedures for transnational actors in both the public and private spheres.
Operational Framework.
1. Legal Formation of the TAFIG.
- Establish a Transnational AI–Fraud Intelligence Grid (TAFIG) under the coordination of the UNTOC Secretariat or an affiliated treaty-based organ (e.g., an Intergovernmental Task Force on Digital Crime Prevention).
- States Parties shall accede to TAFIG through Memoranda of Algorithmic Cooperation (MACs) — binding intergovernmental instruments that enable (see the exchange sketch following this list):
  - Secure and encrypted metadata exchange on anomalous or flagged transactions;
  - Joint modelling of emergent fraud topologies and typologies;
  - Access to anonymized, jurisdiction-specific behavioral pattern libraries for comparative analysis and predictive calibration.
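To make the exchange obligation concrete, the sketch below encrypts an anomaly-metadata record with symmetric encryption from the widely used Python `cryptography` package. Key negotiation, transport, and the record schema are simplified assumptions; only non-personal anomaly metadata is modeled as crossing the border.

```python
# Sketch of encrypted metadata exchange under a Memorandum of Algorithmic
# Cooperation. In practice the key would be negotiated under the MAC and
# managed in hardware security modules, not generated ad hoc as here.
import json
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()
channel = Fernet(shared_key)

flagged_transaction_metadata = {
    "pattern_type": "procurement-overinflation",
    "jurisdiction": "state_a",
    "confidence": 0.91,
    # Deliberately no personal data: only anomaly metadata is shared.
}

token = channel.encrypt(json.dumps(flagged_transaction_metadata).encode())
received = json.loads(channel.decrypt(token).decode())
print(received["pattern_type"])
```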
2. Treaty-Level Data Sovereignty Safeguards.
All participation must be governed by a Digital Safeguard Clause embedded in the foundational TAFIG treaty, ensuring that:
- Each State retains sovereign control over its data pipelines;
- No AI-generated conclusions are exported or acted upon across borders without dual-key authorization (recipient and source jurisdictions), as sketched below;
- All shared datasets comply with the AI Constitution, protecting informational dignity, cultural privacy, and identity non-commodification.
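The dual-key rule reduces to a simple invariant that can be enforced in software: no release without independent approvals from both jurisdictions. The sketch below illustrates that invariant; the approval-ledger format is a hypothetical assumption.

```python
# Sketch of dual-key authorization: an AI-generated conclusion crosses a
# border only when both the source and the recipient jurisdiction have
# independently approved the release.
def release_authorized(conclusion_id: str, approvals: dict[str, set[str]],
                       source: str, recipient: str) -> bool:
    """Both jurisdictions must hold an approval for this conclusion."""
    granted = approvals.get(conclusion_id, set())
    return source in granted and recipient in granted

approvals = {"conclusion-042": {"state_a"}}  # source approval only
print(release_authorized("conclusion-042", approvals, "state_a", "state_b"))  # False

approvals["conclusion-042"].add("state_b")   # recipient jurisdiction co-signs
print(release_authorized("conclusion-042", approvals, "state_a", "state_b"))  # True
```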
3. Non-Weaponization & Peaceful Use Limitation.
- All algorithmic models exchanged through TAFIG must be certified non-weaponizable under Articles 1.8 and 13.1 of the AI Constitution and must be contractually barred from use in:
  - Military, paramilitary, or offensive cyber operations;
  - Surveillance programs targeting protected groups;
  - Political profiling, dissent prediction, or electoral manipulation.
- A Third-Party Neutrality Mechanism may be appointed to oversee peaceful use compliance, with binding arbitral capacity to adjudicate suspected misuse.
VI. Establishment of an International Registry of AI Systems Used in Fraud-Sensitive PPPs (AI–FSR).
Legal Mandate and Normative Foundation.
To uphold principles of transparency, accountability, and human rights in the deployment of algorithmic systems within high-risk public–private partnerships, a supranational registry of such systems is both legally warranted and technically feasible [9].
This recommendation is rooted in the Artificial Intelligence Constitution, which mandates transparent and rights-oriented oversight mechanisms for AI systems operating in sensitive domains, particularly those with measurable impact on public rights, labor structures, or institutional trust. It is further reinforced by the Supranational Protocol on Responsible AI Usage and Labor Rights, which requires the international registration of labor-impacting algorithmic infrastructures under auditable, multilingual, and rights-compliant conditions.
Proposal & Institutional Framework.
1. Creation of the AI–PPP Fraud System Registry (AI–FSR).
Establish the AI–FSR under the auspices of a UN-mandated digital authority — ideally in partnership with the International Telecommunication Union (ITU) or World Intellectual Property Organization (WIPO) — as a multilingual, interoperable global repository cataloguing:
- All approved AI tools used in PPPs operating in fraud-sensitive sectors (e.g., welfare, procurement, identity systems);
- Technical documentation, including model architecture, training parameters, and update histories;
- Audit trail records, certification outcomes, and legal status under national and international law.
2. Mandatory Reporting Obligations for Public and Private Entities.
All AI systems deployed in state-authorized PPPs targeting fraud mitigation must be reported to the AI–FSR within 90 days of operational initiation.
Private contractors and technology providers must submit the following (a minimal machine-readable encoding is sketched below):
- Lineage documentation detailing the origin and evolution of training datasets;
- Plain-language summaries of system logic, decision pathways, and fallback mechanisms, published in both English and the official language(s) of the jurisdiction;
- Legal attestations of compliance with audit rights, data sovereignty clauses, and the Peaceful Use Limitation Doctrine (Art. 1.8 of the AI Constitution).
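One possible machine-readable encoding of such a submission, including a check of the 90-day reporting window, is sketched below. The schema and field names are assumptions made for illustration, not a prescribed AI–FSR format.

```python
# Sketch of an AI-FSR submission record carrying the three disclosure
# obligations above, with a 90-day reporting-window check. The schema is
# a hypothetical illustration of one possible encoding.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIFSRSubmission:
    system_name: str
    operational_start: date
    submitted_on: date
    dataset_lineage: str                      # origin and evolution of training data
    plain_language_summaries: dict[str, str]  # language code -> summary text
    legal_attestations: list[str] = field(default_factory=list)

    def within_reporting_window(self) -> bool:
        """Report within 90 days of operational initiation."""
        return self.submitted_on <= self.operational_start + timedelta(days=90)

record = AIFSRSubmission(
    system_name="welfare-eligibility-screener",
    operational_start=date(2025, 1, 10),
    submitted_on=date(2025, 3, 1),
    dataset_lineage="national census 2020; benefits ledger 2018-2024",
    plain_language_summaries={"en": "...", "fr": "..."},
    legal_attestations=["audit rights", "data sovereignty", "peaceful use"],
)
print(record.within_reporting_window())  # True: submitted 50 days after initiation
```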
3. Interoperability and Institutional Linkages.
To ensure system-wide coherence and avoid data silos, the AI–FSR must be interoperable with existing national and regional AI monitoring frameworks. In particular:
- National Fraud Registries and FFPDs (Fraud Foresight and Prevention Divisions) shall have automatic linkage privileges to the AI–FSR, enabling real-time updates and cross-validation of flagged systems;
- Regional data protection authorities and digital infrastructure courts may be granted observer access under a regulated interface;
- A Global Ethics Sync Interface shall be introduced to facilitate standardization of risk indicators, audit categories, and cultural integrity thresholds across jurisdictions.
This ensures that the registry functions not merely as a passive archive, but as an active node in the global anti-fraud governance architecture.
4. Verification Mechanism, Public Transparency, and Rights of Challenge.
Each registered system shall be subject to independent third-party verification every 12 months, conducted by accredited AI Regulatory Executors or Ethics Boards.
The non-classified segments of the AI–FSR shall be made publicly accessible to:
- Academic researchers, investigative journalists, and civil society watchdogs;
- International organizations, regional human rights courts, and procurement oversight bodies.
A Rights of Challenge procedure shall be codified, empowering individuals and communities to submit a Constitutional Brief of Challenge where AI systems are alleged to:
- Produce discriminatory or harmful outcomes;
- Breach ethical, cultural, or labor rights;
- Violate transparency, audit, or peaceful use obligations.
Conclusion.
Taken together, these recommendations do not merely suggest technical upgrades to existing public–private fraud prevention systems — they propose a foundational realignment of how predictive analytics, algorithmic governance, and constitutional legality intersect in the digital age.
By embedding AI systems within a framework of juridical accountability, socio-cultural calibration, and supranational oversight, the proposals advanced herein aim to fortify democratic institutions against the adaptive strategies of organized fraud networks. The creation of algorithmically compliant PPPs, Fraud Foresight Divisions, transnational registries, and an international intelligence grid offers a rights-respecting, forward-looking, and operationally scalable path toward global fraud prevention.
We respectfully urge the UNTOC Secretariat, participating Member States, and associated treaty bodies to consider these recommendations as integral elements in the evolving architecture of digital crime prevention — and to treat constitutionally compliant Artificial Intelligence not merely as a technological asset, but as a legal imperative in the fight against transnational organized crime.
Final Scholarly Reflection.
As a comprehensive juridico-philosophical work, this paper contributes to the global discourse not only by offering actionable policy mechanisms, but by situating predictive Artificial Intelligence within a broader constitutional cosmology. By bridging the theoretical scaffolding of the Artificial Intelligence Constitution with the enforceable provisions of the Supranational Protocol on Responsible AI Usage and Labor Rights, we have presented a model of transnational AI governance that is anticipatory, rights-anchored, and jurisprudentially coherent. This document stands as both a legal manuscript and an institutional blueprint — advancing a new global logic wherein Artificial Intelligence is governed not by technological determinism, but by democratic consent, foresight-based legality, and supranational accountability. In doing so, it reaffirms the foundational thesis of the Global AI Center POLLYPRIANY: that lawful AI is not an accessory to governance, but its evolving constitutional substance [10]. In other words, lawful AI is not merely a “smart program” designed to assist officials or governments. It represents a new stage in the evolution of governance — one that transforms its very essence constitutionally, legally, and conceptually.
References:
1) Prianykova, P. (2025). Statements and Contributions: Joint Constructive Dialogue on Technical Assistance and International Cooperation. Global AI Center POLLYPRIANY. UNODC. Available at: https://www.unodc.org/documents/organized-crime/constructive-dialogues/IC_TA_2025/Statements/Statements_and_Contributions_POLINA_PRIANYKOVA_Global_AI_Center_POLLYPRIANY.pdf (Accessed: 15 June 2025);
2) Prianykova, P. (2023). FIRST IN THE WORLD HISTORY CONSTITUTION OF ARTIFICIAL INTELLIGENCE, UNITED NATIONS, NEW YORK, 2023-2025 (Series of publications). Online Office: International Human Rights Defender on AI POLINA PRIANYKOVA. Available at: https://www.prianykova-defender.com/ai-constitution-polina-prianykova (Accessed: 15 June 2025);
3) Prianykova, P. (2024). AI PROTOCOL I. Supranational Protocol on Responsible AI Use and Labor Rights. Online Office: International Human Rights Defender on AI POLINA PRIANYKOVA. Available at: https://www.prianykova-defender.com/ai-protocol-i (Accessed: 15 June 2025);
4) Prianykova, P. (2024). Global Scientific Center for Strategic Research on Artificial Intelligence POLLYPRIANY as a Platform for Optimizing Logical Trajectories and Presenting the Rational Core of Scientific Initiatives. Online Office: International Human Rights Defender on AI POLINA PRIANYKOVA. Available at: https://www.prianykova-defender.com/global-ai-center-pollypriany-institutes (Accessed: 15 June 2025);
5) Prianykova, P. (2024). AI Constitution. Kyiv: «FrancoPak», 392 pages;
6) LaCascia, H., & Kramer, M. (2021). Vulnerabilities of ICT Procurement to Fraud and Corruption. Governance and the Digital Economy in Africa, Technical Background Paper Series. Washington, DC: The World Bank. Available at: http://documents.worldbank.org/curated/en/099082423220534472 (Accessed: 15 June 2025);
7) Ahern, D. (2024). The New Anticipatory Governance Culture for Innovation: Regulatory Foresight, Regulatory Experimentation and Regulatory Learning. doi: 10.48550/arXiv.2501.05921 (Accessed: 15 June 2025);
8) O’Brien, T. (2021). Compounding Injustice: The Cascading Effect of Algorithmic Bias in Risk Assessments. Georgetown Journal of Law & Modern Critical Race Perspectives, Volume 13, Issue 1. Available at SSRN: https://ssrn.com/abstract=3694818 or http://dx.doi.org/10.2139/ssrn.3694818 (Accessed: 15 June 2025);
9) Lamanauskas, T. (2025). Standards help unlock trustworthy AI opportunities for all. International Telecommunication Union (ITU). Available at: https://www.itu.int/hub/2025/04/standards-help-unlock-trustworthy-ai-opportunities-for-all/ (Accessed: 15 June 2025);
10) Online Office: International Human Rights Defender on AI POLINA PRIANYKOVA (2020-2025). Available at: https://www.prianykova-defender.com/ (Accessed: 14 June 2025).
Officially Published:
June 17–20, 2025,
Paris, France
(Table of Contents, №16)
© POLINA PRIANYKOVA.
All rights reserved.