Polina Prianykova
President of the Global AI Center,
International Human Rights Defender on AI,
Author of the First AI Constitution in World History

In the contemporary era of hyper-digitalisation, the phenomenon of human trafficking assumes multi-faceted and polymorphous configurations, permeating both physical and virtual loci of human existence. Among its most insidious manifestations is the exploitation of children through Artificial Intelligence (AI) — ranging from generative architectures capable of producing synthetic visages, to predictive algorithms that extrapolate and exacerbate human vulnerability. This emergent paradigm constitutes a juridical challenge of immediate exigency, demanding a response grounded in ratio legis rather than mere ad hoc reaction. AI may function simultaneously as a means of exploitation and as an instrument of protection. This ontological dualism of technology necessitates the establishment of a normative apparatus for prognostication, prevention, and liability in respect of algorithmic harm. Within the contemporary matrix of governance, wherein States and private entities deploy automated systems across education, social welfare, and communicative infrastructures, a novel legal duty arises — the duty of algorithmic due diligence.
The concept advanced in this study articulates a law-anchored, evidentiary architecture of counter-trafficking, synthesising the principles of the Palermo Protocol [1], the United Nations Convention on the Rights of the Child [2], the corpus of international labour law (augmented, inter alia, by recent policy instruments elaborated by the Global AI Center POLLYPRIANY) [3], and the Constitution of Artificial Intelligence [4, 5] — created in June 2023 by POLINA PRIANYKOVA and subsequently adopted by the Global AI Center POLLYPRIANY as a normative-ethical compass.
This article examines the juridical logic of regulating Artificial Intelligence within the domain of child-trafficking prevention, advances a definition of prognostics as a legitimate preventive instrument, delineates the obligations of States with regard to transparency, oversight, and accountability for algorithmic decision-making, and formulates an interpretative approach to algorithmic facilitation of exploitation as a new form of criminal conduct.
Keywords: Artificial Intelligence; Child Trafficking; Prognostics; Algorithmic Accountability; Rights of the Child; Person with Disability; Preventive Law; Digital Exploitation; Palermo Protocol; AI Constitution; Due Diligence; Legal Predictability; Digital Security.
Relevance of the study.
Given the accelerated integration of AI into virtually all strata of societal existence, the issue of its juridical regulation within the context of child protection acquires exceptional and urgent relevance. Algorithmic systems today possess the capacity to forecast disappearance risks, detect digital grooming networks, or conversely — to facilitate human trafficking through the absence of adequate oversight and algorithmic accountability. Hence, the regulatory lacunae in the governance of AI constitute a direct threat to the fundamental rights of the child — notably, the rights to security, dignity, and protection from exploitation.
The topicality of this research is further reinforced by the emergence of the phenomenon of “virtual victims” — children whose image, likeness, or voice is synthetically generated, thereby dissolving traditional legal boundaries between the natural person and the digital construct. Such transformation calls for the reconsideration of criminal-law definitions and the broadening of international jurisdictional scope, particularly with regard to crimes committed through or by means of algorithmic mediation.
The study’s novelty is rooted in defining a legal model that converts technological advancement into a preventive duty of the State, uniting international legal doctrine with principles of transparency, humanism, and foreseeability. Consequently, the governance of Artificial Intelligence is conceived not as an impediment to progress, but as a juridical guarantee of innovation’s lawfulness.
Main body of the research paper.
We propose a rights-based, evidence-led architecture to counter child trafficking that responsibly leverages Artificial Intelligence (AI) across prevention, victim identification and assistance, financial disruption, and accountability — while hard‑wiring child safeguards, privacy, and due‑process guarantees. At its core, this architecture integrates prognostication — the lawful and ethical forecasting of vulnerability patterns and emerging risk environments — enabling early, rights-compliant intervention before exploitation occurs.
This Guide places particular emphasis on susceptible and highly sensitive categories of children, including children with disabilities and those born into families where parents or legal guardians themselves live with disabilities. Such children may be more exposed to manipulation, deception, and coercion, and may not fully comprehend that they are being trafficked or exploited. The Guide also recognises intersectional vulnerabilities among children belonging to minority, displaced, or conflict-affected groups, extending protection across humanitarian and digital contexts. The proposed AI measures are thus designed to embed heightened sensitivity to these intersecting vulnerabilities, ensuring that predictive and protective systems neither stigmatize nor overlook such cases but respond with precision, dignity, and accessibility.
Our contribution translates the Palermo Protocol and child‑rights obligations into practical, auditable AI measures: safety‑by‑design standards for platforms; privacy‑preserving analytics for missing‑children and institutional‑care risks; cross‑border, federated models to detect grooming, recruitment, and synthetic identity abuse; and interoperable data taxonomies to accelerate child‑friendly services. We prioritize ethical participation of child survivors, explicit limits against mass surveillance, encryption‑compatible methods, and measurable outcomes (time‑to‑assist, referral completion, and harm‑reduction).
The Global AI Center stands ready to engage with States, UN entities, and survivor-led organizations in co-piloting a dedicated roadmap aimed at progressively operationalizing these measures within existing international and national frameworks.
I. Legal & Policy Anchoring
The Palermo Protocol (Trafficking in Persons Protocol) imposes binding obligations upon States with respect to prevention, protection, and international cooperation, expressly recognizing that the means element (coercion, fraud, deceit) is not a requisite condition for the establishment of child trafficking. Accordingly, the duty to prevent encompasses anticipatory, systemic, and data‑driven measures capable of identifying vulnerabilities and emerging exploitative modalities before harm occurs.
The Convention on the Rights of the Child (CRC) and its Optional Protocols, alongside the ILO Worst Forms of Child Labour Convention (No. 182), the UN General Assembly and CCPCJ resolutions, and regional instruments, collectively form the corpus juris for child protection in both physical and digital environments.
Contemporary interpretative guidance emphasizes: (i) evidence generation and lawful data exchange; (ii) addressing demand drivers, including those in global supply chains; (iii) safeguarding children deprived of parental care and those in alternative care settings; (iv) prohibiting the immigration detention of children; and (v) countering technology‑facilitated exploitation. The present Guide operationalizes these duties through Artificial Intelligence that is lawful, necessary, proportionate, transparent, and auditable, ensuring continuous alignment with human‑rights norms and international due‑process standards. These measures must be interpreted in light of the emerging jurisprudence of due diligence in technology governance, whereby inaction toward algorithmic harm constitutes a breach of preventive obligations.
The Artificial Intelligence Constitution (2023), authored by Polina Prianykova and adopted by the Global AI Center POLLYPRIANY as a foundational instrument for AI governance, supplies the normative anchor for all measures herein. Core constitutional tenets applied include: the primacy of human dignity and child protection; the principle of an AI‑friendly environment fostering cooperative human–AI relations; the rule of law and transparency; the prohibition of mass surveillance and all forms of dignity‑degrading algorithmic practices; the neutrality and non‑competition of AI vis‑à‑vis humankind; and the State obligation to secure educational and labour safeguards, including the prognostication of professions and vulnerabilities. The Constitution’s provisions on emergency control over “dark AI” phenomena further inform the preventive architecture advanced herein.
Together, these provisions form a proto-constitutional corpus applicable to predictive and preventive technologies in the child-protection domain.
Policy Note. All AI‑enabled interventions proposed in this Guide are conditioned upon explicit legal bases, prior human oversight, and algorithmic impact assessments (AIAs) conducted under publicly reviewable standards. Transparency to competent regulators, independent red‑teaming, and continuous ethics supervision are mandatory. Encryption shall remain inviolable: only client‑side safety‑by‑design, on‑device safety nudges, and privacy‑preserving analytics (federated or differential‑privacy models) are permissible within the prescribed safeguards.
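By way of illustration, the following minimal Python sketch shows one elementary form that such privacy-preserving analytics could take: calibrated statistical (Laplace) noise is added to aggregate, region-level counts before publication, so that no individual child's record can be inferred from the released figure. The epsilon value, sensitivity, regional figures, and function names are illustrative assumptions, not prescribed parameters or a definitive implementation.

```python
# Illustrative sketch only: releasing region-level aggregate counts with
# differential-privacy (Laplace) noise, so that no individual child record can
# be inferred from a published statistic. Epsilon, sensitivity, and the report
# structure below are hypothetical assumptions, not prescribed values.
import numpy as np

def private_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> int:
    """Return a noisy count satisfying epsilon-differential privacy for a
    counting query (sensitivity 1: one child changes the count by at most 1)."""
    noisy = true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return max(0, round(noisy))  # published counts cannot be negative

# A data holder publishes only noisy, aggregate indicators per region,
# never individual-level records.
regional_reports = {"region_A": 42, "region_B": 7, "region_C": 0}
published = {region: private_count(count) for region, count in regional_reports.items()}
print(published)
```

Under such an approach, only noise-perturbed aggregates ever leave the data holder, which is consistent with the prohibition of mass surveillance and the safeguards set out in this Policy Note.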
II. Legislative Rationale and Preventive Architecture
1. Legal Rationale for Regulating Artificial Intelligence.
The decision to regulate Artificial Intelligence within the framework of child trafficking arises from a clear legal necessity: AI has become both an instrument of exploitation and a potential vector of protection. Its dual-use nature imposes upon States a positive obligation of foresight — to prevent foreseeable harm arising from technologies that they authorise, deploy, or permit within their jurisdiction.
Unregulated AI ecosystems allow traffickers to exploit algorithmic opacity. Trafficking networks may employ:
- Generative models to fabricate synthetic child sexual abuse material (CSAM) and deepfakes;
- Recommendation algorithms that amplify exploitative content and enable grooming;
- Chatbots and voice models used to lure, manipulate, or extort minors;
- Synthetic identities that bypass age-verification and create fictitious guardianship or adoption records;
- Predictive advertising tools that identify and target children based on emotional or socio-economic vulnerability; and
- Automated payment systems that obscure illicit flows through micro-transactions and digital tokens.
Each of these practices converts technological progress into a means of coercion, deception, or commodification, falling squarely within the material scope of the Palermo Protocol. The absence of regulation allows AI to become an untraceable accomplice, diffusing accountability across code, platforms, and jurisdictional boundaries.
Conversely, when embedded within the rule of law, Artificial Intelligence can operationalise the duty to prevent. Predictive analytics can identify patterns of disappearance, detect child-grooming clusters, forecast orphanage trafficking, and expose financial typologies of exploitation. AI thereby becomes part of the State’s preventive machinery, its operation inseparable from the principle of foreseeability that governs risk management under international law.
Properly constrained, AI becomes a juridical instrument of due diligence — capable of alerting authorities to emergent harm while preserving the privacy, dignity, and procedural rights of the child.
Thus, regulation is not an obstacle to innovation but the legal condition of its legitimacy. By defining what AI must not do (facilitate exploitation) and what it may lawfully do (anticipate and prevent harm), States fulfil their obligations to ensure that technology operates within the boundaries of human rights, not outside them.
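To illustrate how such pattern-level prognostication could operate without profiling any individual, the following minimal Python sketch flags statistically anomalous aggregate indicators, such as an abrupt rise in monthly institutional-intake counts, for subsequent human review. The data series, threshold, and function names are hypothetical assumptions, and the output is a review flag rather than a determination.

```python
# Illustrative sketch only: flagging anomalous aggregate indicators (e.g. a
# sudden spike in monthly institutional-intake counts for one region) for
# human review. The data, threshold, and variable names are hypothetical.
from statistics import mean, stdev

def flag_anomalies(series: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return the indices whose values deviate from the series mean by more
    than z_threshold standard deviations. The function operates on aggregates,
    never on individual-level records, and yields review flags, not decisions."""
    if len(series) < 3:
        return []
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, value in enumerate(series) if abs(value - mu) / sigma > z_threshold]

# Monthly intake counts for one institution; the final month is an outlier.
monthly_intake = [14, 12, 15, 13, 14, 16, 12, 15, 13, 14, 12, 41]
for month in flag_anomalies(monthly_intake):
    print(f"Month {month}: aggregate anomaly detected; escalate to a human case officer")
```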
2. Legal Preconditions for Predictive and Preventive Systems.
Preventive AI systems derive legitimacy only when anchored in law. To ensure that predictive systems operate within the scope of legality rather than discretion, the following cumulative conditions must be codified in domestic legislation:
- Legality and Mandate. Each AI tool addressing child trafficking must operate under an explicit statutory or delegated legal basis specifying purpose, authority, and scope. Absence of a clear mandate renders the processing arbitrary and ultra vires.
- Necessity and Proportionality. Predictive analytics must be necessary for achieving a legitimate protective objective and proportionate to the interference they cause with privacy or data-protection rights.
- Transparency and Auditability. Algorithms affecting children's safety must be transparent to regulators, subject to red-teaming, and auditable by independent authorities.
- Human Oversight and Contestability. No algorithmic output may independently determine an individual's status or trigger coercive action. Every predictive flag must be reviewed by a qualified officer with a recorded justification (a minimal record-keeping sketch follows at the end of this subsection).
- Safeguards for Disability and Vulnerability. Predictive systems must be calibrated to recognise and accommodate the heightened manipulation risk faced by children with disabilities, or by children whose parents or guardians live with disabilities, who may not perceive coercion or deceit.
- Accountability and Traceability. Every algorithmic decision-chain must remain traceable to a human authority and legally reviewable under administrative law.
Within this framework, prognostication operates as a lawful anticipatory measure. It transforms statistical insight into actionable protection without converting probability into accusation. This distinction preserves legality, avoids stigmatization, and ensures compatibility with due-process guarantees.
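One way to make the human-oversight and traceability conditions enumerated above concrete is record-keeping that binds every predictive flag to a legal mandate, a named reviewer, and a documented justification before any protective measure may follow. The minimal Python sketch below shows one possible data model; all field and function names are illustrative assumptions rather than a prescribed standard.

```python
# Illustrative sketch only: a review record binding every predictive flag to a
# legal mandate, a named human reviewer, and a recorded justification before
# any protective measure may follow. All field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PredictiveFlag:
    flag_id: str
    system_name: str      # the AI system that produced the flag
    legal_mandate: str    # statutory or delegated basis for the processing
    risk_summary: str     # aggregate, non-accusatory description of the risk

@dataclass
class HumanReview:
    flag: PredictiveFlag
    reviewer: str                    # qualified officer accountable for the decision
    justification: str               # recorded reasoning, reviewable under administrative law
    action_authorised: bool = False  # no coercive action without explicit human authorisation
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def authorise(review: HumanReview) -> bool:
    """A flag may lead to protective action only after a documented human review."""
    return review.action_authorised and bool(review.justification.strip())
```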
3. State Obligations, Liability, and International Responsibility.
Because Artificial Intelligence has become structurally integrated into the domains of communication, education, social welfare, and finance, States can no longer rely on private self-regulation to ensure protection from digital exploitation. The duty of algorithmic diligence thus emerges as a derivative of the State’s primary duty to protect, forming part of its positive obligations under international human-rights and criminal law.
Accordingly, States must:
- Establish licensing or notification regimes for all AI systems whose operation may affect the rights and safety of children, including systems used by public authorities and private entities in risk-sensitive sectors;
- Impose civil, administrative, and — where the gravity of consequences so warrants — criminal liability on developers, operators, or executives whose negligent or reckless algorithmic design, deployment, or supervision foreseeably facilitates trafficking or exploitation;
- Guarantee supervisory powers enabling competent regulators to suspend, inspect, or revoke the operation of AI systems that demonstrate non-compliance with safety or transparency obligations;
- Ensure access to judicial remedies for children, guardians, or their legal representatives harmed by algorithmic misuse, including the right to compensation, correction, and injunctive relief; and
- Provide for cross-border cooperation in forensic AI analysis, mutual legal assistance, and evidentiary exchange, ensuring that algorithmic evidence is authenticated and admissible under harmonized procedural standards.
In cases of systemic or transnational harm, criminal responsibility may extend to corporate officers and supervisory officials under doctrines of command responsibility and reckless disregard for human security.
Failure to regulate foreseeable technological risks constitutes a breach of the duty of due diligence and may engage international responsibility under the doctrine of State accountability. The principle of omission applies equally to the digital sphere: non-regulation of AI, where harm is reasonably predictable, amounts to acquiescence in trafficking activity and may, in aggravated circumstances, be construed as complicity by omission.
4. Threat Taxonomy: Algorithmic Exploitation and Systemic Risks.
Artificial Intelligence magnifies threats through its speed, scalability, predictive precision, and capacity for anonymity. The same qualities that render AI transformative in legitimate sectors make it perilous when deployed without legal constraints. The law must therefore evolve from a reactive instrument into a predictive shield — capable of regulating not merely the consequences of exploitation, but the conditions of its technological possibility.
The threat architecture of modern trafficking has migrated into algorithmic infrastructures. Traffickers may utilise automated and semi-autonomous systems to perform functions once dependent on human intermediaries: recruitment, manipulation, concealment, and monetisation.
The following trends are empirically observed and legally significant:
- Synthetic Personhood and Fabricated Identities. Generative models produce synthetic children and falsified guardianship constructs — a phenomenon that dissolves the evidentiary nexus between personhood and corporeality, thereby compelling legislators to reconceptualize the notion of “victim” within digital jurisprudence. Such systems also generate fictitious adoption or parental records, enabling the laundering of identities across jurisdictions. These practices exploit normative lacunae within civil-registration, immigration, and family-law frameworks, effectively rendering the principle of child traceability unenforceable.
- Algorithmic Grooming and Behavioural Manipulation. Recommendation engines and behavioural-targeting systems, optimised for engagement, inadvertently replicate the logic of grooming. They connect predators with susceptible minors on the basis of inferred vulnerabilities, thereby converting algorithmic neutrality into algorithmic complicity.
- Deepfake Exploitation and Digital Commodification. Image- and voice-synthesis technologies permit a child's likeness to be recreated, or fabricated outright, without the child's existence or consent, producing an entirely new category of “non-corporeal victims.” The absence of a bodily referent challenges traditional definitions of victimhood and necessitates statutory recognition of virtual child exploitation.
- Micro-Financial Laundering and Tokenised Abuse Economies. Decentralised payment systems and AI-assisted micro-transfer algorithms fragment the proceeds of exploitation into imperceptible units. The velocity and anonymity of such transactions undermine existing AML/CFT frameworks, calling for algorithmic-auditability clauses within financial legislation.
- Adaptive Manipulation of Children with Disabilities. Natural-language and emotion-recognition models enable traffickers to tailor coercive communication to children with cognitive or developmental vulnerabilities, exploiting empathy algorithms and speech-mimicry tools. This constitutes a novel form of psychological entrapment requiring classification as digital coercion.
- Autonomous Data Brokerage and Predictive Profiling. Commercial algorithms trading in behavioural data can predict emotional instability, loneliness, or dependency — traits that traffickers exploit to target victims. The absence of legal ceilings on data inferences transforms prediction itself into a vector of risk.
Collectively, these mechanisms establish a new class of algorithmic offences — acts of facilitation, concealment, or omission committed through code — requiring corresponding adaptation of substantive and procedural criminal law.
5. Juridical Conversion of Technology into Protective Duty.
The juridical objective is to transpose these technological capabilities into enforceable duties of care, aligning innovation with the principle of non-maleficence in digital governance.
When placed within a regulated, rights-based architecture, the same technological attributes can be reversed to serve law and protection. Properly legislated AI systems can:
- Reinstate Traceability and Legal Identity. Federated identity-verification systems with cryptographic provenance can ensure continuity of a child's legal existence across borders, preventing “paper disappearance” in institutional care or migration.
- Transform Predictive Profiling into Preventive Foresight. Prognostic algorithms, lawfully constrained and audited, may forecast zones of heightened vulnerability — identifying service gaps, migration corridors, or orphanage-intake anomalies — without assigning suspicion to individuals.
- Authenticate Reality and Restore Evidentiary Integrity. Forensic AI, equipped with watermarking, content provenance, and metadata attestation, can validate the authenticity of digital evidence, ensuring admissibility and protecting the integrity of investigations.
- Disrupt Financial Exploitation in Real Time. Algorithmic tracing of anomalous microflows linked to exploitation typologies can enable near-instant freezing of suspect assets, coupling financial compliance with victim-restitution mechanisms.
- Augment the Capacity of Frontline Institutions. AI copilots and multilingual translation systems can empower social workers, prosecutors, and helpline operators to process complex data, identify urgency patterns, and respond inclusively to children with disabilities.
- Codify Predictive Accountability. Legislators should institutionalise “AI responsibility registers” in which each preventive system is mapped to a supervising authority, a legal mandate, and a rights-impact report, ensuring ex-ante compliance rather than ex-post correction (a minimal data-model sketch follows at the end of this subsection).
Such recognition positions the regulation of Artificial Intelligence within the doctrine of emerging international technological law, heralding a shift from passive compliance to active legal foresight.
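As an illustration of the “AI responsibility register” proposed above, the following minimal Python sketch outlines one possible register entry, mapping a preventive system to its supervising authority, legal mandate, and rights-impact documentation, together with an elementary ex-ante compliance check. The field names, statuses, and audit interval are hypothetical assumptions, not a definitive schema.

```python
# Illustrative sketch only: one possible data model for an "AI responsibility
# register" entry, mapping a preventive system to its supervising authority,
# legal mandate, and rights-impact documentation. All fields are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RegisterEntry:
    system_name: str
    operator: str                # public or private entity deploying the system
    supervising_authority: str   # regulator empowered to inspect, suspend, or revoke
    legal_mandate: str           # statutory or delegated basis for deployment
    rights_impact_report: str    # reference to the algorithmic impact assessment (AIA)
    last_independent_audit: date
    status: str = "authorised"   # e.g. "authorised", "suspended", "revoked"

def is_compliant(entry: RegisterEntry, today: date, audit_interval_days: int = 365) -> bool:
    """Ex-ante check: a system remains operable only while it is authorised and has
    been independently audited within the interval (chosen here purely for illustration)."""
    return (entry.status == "authorised"
            and (today - entry.last_independent_audit).days <= audit_interval_days)
```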
III. Juridical Implications and Interpretative Evolution
These developments demand a redefinition of the concept of means in trafficking offences to include algorithmic acts that enable or conceal exploitation. The law must begin to treat algorithmic design, deployment, and failure to intervene not merely as technical conduct but as legally imputable behaviour.
Accordingly, States are urged to:
- Extend the notion of act or omission in trafficking statutes to cover negligent algorithmic facilitation;
- Recognise virtual child exploitation and digital coercion as aggravating circumstances;
- Require all AI operators in risk sectors to file Algorithmic Transparency Declarations detailing preventive safeguards; and
- Mandate interoperability of forensic AI systems across jurisdictions to ensure continuous evidentiary chains.
In this future-oriented legal framework, Artificial Intelligence becomes both subject and object of regulation — a potential accomplice if ungoverned, yet a guardian when domesticated by law.
IV. Conclusions & Empirical Theses
The findings advanced in this Guide converge upon an urgent legal and moral premise: the protection of children in the digital age has become the litmus test of lawful Artificial Intelligence. The governance of AI is now inseparable from the governance of human vulnerability — and within that spectrum, child vulnerability remains the gravest and most immediate concern of international law.
In the context of child trafficking, technological neutrality is a legal fiction. Each unregulated algorithm that amplifies exploitation constitutes an act of omission; each regulated and rights-compliant algorithm that prevents it becomes an act of juridical guardianship. The law must therefore evolve from reactive condemnation to anticipatory protection — transforming foresight into a binding duty.
Empirical Thesis I.
The rise of algorithmic exploitation elevates the Palermo Protocol’s preventive obligation from moral aspiration to technological duty of care. States must not only prohibit human traffickers; they must also foresee and forestall digital systems that enable them. Predictive diligence thus becomes a measurable legal standard. Inaction toward foreseeable algorithmic harm, particularly where it endangers children, constitutes a breach of international due diligence.
Empirical Thesis II.
Artificial Intelligence must evolve from an evidentiary aid into a juridical instrument of protection. When lawfully mandated, transparent, and ethically supervised, AI can perform the Protocol’s threefold obligations — prevention, protection, and prosecution — in ways previously unattainable by human capacity alone. Properly constrained, predictive systems can identify patterns of disappearance, intercept synthetic grooming networks, and restore traceability to children erased by digital manipulation.
Accordingly, States, international organisations, and private entities share a tripartite duty of technological due diligence, requiring:
- the legal domestication of AI governance through statutory mandates, licensing, and continuous oversight;
- the institutionalisation of predictive accountability, where anticipatory knowledge of risk imposes an obligation to act; and
- the creation of interoperable, child-centred AI infrastructures ensuring cross-border victim tracing, forensic authentication, and survivor-oriented justice.
Failure to meet these obligations amounts to a violation of the emerging doctrine of algorithmic omission, engaging both State and corporate responsibility under the principles of international technological law. Compliance, conversely, transforms AI into a lawful custodian of human dignity — an active guardian of the child’s inalienable right to safety, development, and recognition.
Ultimately, the architecture proposed herein affirms that the legitimacy of AI is measured by its capacity to protect children, not to predict their vulnerability for profit. Law and science must therefore operate in concert — precision balanced by compassion, foresight bound by legality. The Global AI Center POLLYPRIANY, through its Institutes and collaborative partners, stands prepared to co-pilot these measures with States, UN entities, and survivor-led organisations — advancing an era in which digital progress and human protection are no longer opposing ends but integrated obligations.
Findings and Juridical–Empirical Significance of the Study
The research has demonstrated that the regulation of AI in the field of combating child trafficking must be regarded not as a technical task, but as a normative-legal and civilizational duty of the State. In the contemporary world, AI is becoming not merely a technological instrument, but a juridical indicator of the maturity of a legal system — one capable of foreseeing risks rather than merely responding to them.
The legislative introduction of predictive algorithms in the sphere of child protection does not contradict the principles of international law, provided that their legality, proportionality, transparency, and human oversight are ensured. Under these conditions, AI functions not as a “threat”, but as an instrument of legal foresight — one that transforms knowledge into preventive action.
The State is obliged to establish legal mechanisms of algorithmic due diligence — that is, to foresee, assess, and prevent risks of digital exploitation. Such an approach shifts the emphasis from post-factum control to ex-ante responsibility, whereby every algorithm is evaluated not only by its consequences, but also by its potential capacity to cause harm.
Institutionalisation of Preventive Architectures.
The concept of a lawful prognostic architecture, developed within the framework of this research, may serve as a normative model for United Nations Member States in shaping national strategies for combating human trafficking.
Its implementation presupposes the establishment of: independent ethical-legal commissions for algorithmic audit; State registries of responsibility for the deployment of AI systems; and mechanisms of interstate cooperation in the field of forensic AI, ensuring evidentiary continuity in the investigation of crimes.
Normative Humanisation of the Digital Space.
The article substantiates that the legitimacy of technological progress is measured not by the efficiency of algorithms, but by the capacity to preserve human dignity. The principle of “humanised artificiality” presupposes that the development of AI must remain subordinate to the rule of law, the rights of the child, and the standards of justice.
Contribution to International Legal Doctrine.
The results of the research provide the foundation for the emergence of a new branch — international technology law — within which algorithmic inaction is recognised as a form of legal wrongdoing and foresight becomes a juridical obligation.
This opens prospects for: the development of universal standards of algorithmic accountability; the consolidation of the principle of “predictive legality” within international treaties; the preparation of recommendations for States and international organisations on the lawful use of AI in child-protection contexts.
Practical Significance.
The practical value of the study lies in the creation of a model for the legal regulation of Artificial Intelligence, applicable by governments, academic institutions, international organisations, and judicial bodies in policy design, expert evaluation, and inter-agency programming. The proposed provisions may also serve as a basis for: draft laws and subordinate acts defining the boundaries of AI use in the social and educational sectors; the development of child-friendly AI standards, under which every technological innovation is accompanied by independent legal audit; the creation of international educational programmes aimed at teaching the principles of ethical and legally responsible use of AI.
Accordingly, the findings of the study confirm that the protection of the child in the digital era constitutes the measure of the legality of AI. Every algorithm that foresees and prevents harm represents an act of juridical care; every algorithm that ignores risks constitutes an act of legal omission. It is precisely at this juncture that law and technology converge in a common mission — to ensure that progress serves dignity rather than domination.
References:
1) United Nations. (2000). Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, supplementing the United Nations Convention against Transnational Organized Crime (adopted by General Assembly resolution 55/25 of 15 November 2000; entered into force 25 December 2003; United Nations Treaty Series, vol. 2237, p. 319).
2) United Nations. (1989). Convention on the Rights of the Child (adopted by General Assembly resolution 44/25 of 20 November 1989; entered into force 2 September 1990; United Nations Treaty Series, vol. 1577, p. 3).
3) Prianykova, P. (2024). AI Protocol I: Supranational Protocol on Responsible AI Use and Labor Rights. Online Office: International Human Rights Defender on AI POLINA PRIANYKOVA. Available at: https://www.prianykova-defender.com/ai-protocol-i (Accessed: 26 October 2025).
4) Prianykova, P. (2023). First in the World History Constitution of Artificial Intelligence, United Nations, New York, 2023–2025 (series of publications). Online Office: International Human Rights Defender on AI POLINA PRIANYKOVA. Available at: https://www.prianykova-defender.com/ai-constitution-polina-prianykova (Accessed: 26 October 2025).
5) Prianykova, P. (2024). AI Constitution. Kyiv: FrancoPak. 392 pp.
Officially published:
IX International Scientific and Practical Conference "Development of Science: Theories, Methodology, Practice and Technologies", October 28–31, 2025, Paris, France (Table of Contents, No. 34).
https://isg-konf.com/wp-content/uploads/2025/10/DEVELOPMENT-OF-SCIENCE-THEORIES-METHODOLOGY-PRACTICE-AND-TECHNOLOGIES.pdf
© POLINA PRIANYKOVA.
VALENTYN PRIANYKOV.
All rights reserved.


