On 8 October 2025, the Global AI Center POLLYPRIANY participated in the UNODC Constructive Dialogue on Trafficking in Persons (UNTOC Review Mechanism), convened in Vienna. This year's substantive focus was on evidentiary issues in trafficking cases, especially those linked to online scams.
At the Dialogue, we placed particular emphasis on one urgent legal premise:
At the very point where technology meets child safety, law must speak with absolute clarity.
The protection of children has become the litmus test of lawful Artificial Intelligence.
AI governance is now inseparable from the governance of human vulnerability — and child vulnerability remains one of the gravest, most immediate concerns of international law.
In our doctrinal work, we address a hard truth that too often remains implicit:
💠 In the context of child trafficking, technological neutrality is a legal fiction.
💠 Every unregulated algorithm that amplifies exploitation is an act of omission.
💠 Every lawfully constrained, rights-compliant AI measure that prevents harm becomes an act of juridical guardianship.
We therefore advanced a rights-based, evidence-led architecture to counter child trafficking — responsibly leveraging AI across:
💠 Prevention
💠 Victim identification & assistance
💠 Financial disruption
💠 Accountability and evidentiary integrity
— while hard-wiring child safeguards, privacy, and due-process guarantees.


What makes our contribution distinct
At its core is prognostication — the lawful and ethical forecasting of vulnerability patterns and emerging risk environments, enabling early, rights-compliant intervention before exploitation occurs.
And we place particular emphasis on highly sensitive categories of children, including:
💠 children with disabilities;
💠 children whose parents/guardians live with disabilities;
💠 children affected by displacement, minority status, and conflict.
These children may be more exposed to manipulation, deception, and coercion — and may not fully comprehend that they are being trafficked or exploited. Our approach is designed to ensure predictive and protective systems neither stigmatise nor overlook such cases, but respond with precision, dignity, and accessibility.
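As a minimal, purely illustrative sketch of how such prognostication can remain rights-compliant, the Python fragment below aggregates anonymised risk signals and releases a pattern only once it clears a k-anonymity threshold, so no individual child can be singled out. Every field name, threshold, and category here is our own assumption for illustration, not a description of any deployed system.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical, anonymised risk signals; all field names are illustrative.
@dataclass(frozen=True)
class RiskSignal:
    region: str   # coarse geography only, never an address
    channel: str  # e.g. "gaming-chat", "social-dm"
    pattern: str  # e.g. "synthetic-identity-contact"

K_ANONYMITY_THRESHOLD = 25  # a pattern is released only if seen this often

def aggregate_patterns(signals: list[RiskSignal]) -> dict[tuple, int]:
    """Return (region, channel, pattern) counts at or above the threshold.

    Counts below the threshold are suppressed entirely, so rare
    combinations that could identify a specific child are never exposed.
    """
    counts = Counter((s.region, s.channel, s.pattern) for s in signals)
    return {key: n for key, n in counts.items() if n >= K_ANONYMITY_THRESHOLD}
```

The design intent is that outputs of this kind feed early-warning dashboards for lawful, supervised intervention; they are never granular enough to target an individual case.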
Practical, auditable AI measures we advance
💠 Safety-by-design standards for platforms (prevention built into systems, not appended after harm)
💠 Privacy-preserving analytics for missing-children and institutional-care risks
💠 Cross-border, federated detection models to identify grooming, recruitment, and synthetic-identity abuse
💠 Provenance-by-default / forensic authenticity mechanisms to protect evidentiary chains in the age of deepfakes (a minimal sketch follows this list)
💠 Explicit limits against mass surveillance, alongside encryption-compatible methods
💠 Measurable outcomes: time-to-assist, referral completion, harm reduction
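To make the provenance-by-default item above concrete, here is a minimal hash-chain sketch: each evidence entry is bound to its predecessor by a SHA-256 digest, so any later alteration of the record trail becomes detectable. The function names and record layout are our illustrative assumptions, not a prescribed standard; a production system would add digital signatures and trusted timestamping, for example under C2PA-style provenance schemes.

```python
import hashlib
import json
from datetime import datetime, timezone

def _digest(record: dict) -> str:
    """Canonical SHA-256 digest of a record (stable key order)."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()

def append_evidence(chain: list[dict], payload_hash: str, custodian: str) -> list[dict]:
    """Append an evidence entry linked to the previous entry by hash.

    `payload_hash` is the digest of the raw exhibit (image, chat log, ...);
    only hashes travel through the chain, never the exhibit itself.
    """
    chain.append({
        "payload_hash": payload_hash,
        "custodian": custodian,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": _digest(chain[-1]) if chain else None,  # genesis entry has no predecessor
    })
    return chain

def verify_chain(chain: list[dict]) -> bool:
    """True iff every entry still matches the digest of its predecessor."""
    return all(
        chain[i]["prev"] == _digest(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Because verification needs only the hashes, custodians across borders can audit one another's chains without ever exchanging the underlying child-related material.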
Our empirical theses
- The Palermo Protocol's preventive obligation must evolve into a technological duty of care: States must foresee and forestall digital systems that enable exploitation.
- AI must evolve from an evidentiary aid into a juridical instrument of protection, capable, when lawfully mandated and supervised, of performing prevention, protection, and prosecution in ways previously unattainable.
Accordingly, States, international organisations, and private entities share a tripartite duty of technological due diligence:
- legal domestication of AI through mandates, licensing, and oversight;
- institutionalised predictive accountability (anticipatory knowledge → duty to act);
- interoperable, child-centred AI infrastructures for cross-border tracing, forensic authentication, and survivor-oriented justice.

📌 Following the Dialogue, we consolidated our position in a Guide Statement and Contribution prepared exclusively for the United Nations Office on Drugs and Crime:
The Global AI Center stands ready to engage with States, UN entities, and survivor-led organisations in co-piloting a dedicated roadmap aimed at progressively operationalising these measures within existing international and national frameworks.
© POLINA PRIANYKOVA. VALENTYN PRIANYKOV. All rights reserved.


