


Particularities of the Regulation
of the AI Algorithms,
inter alia in online platforms and services,
based on the example of 'TikTok'

Polina Prianykova

International Human Rights Defender on AI,

Author of the First AI Constitution in World History,

Student of the Law Faculty & the Faculty of Economics


 Particularities of the Regulation of the AI Algorithms,

inter alia in online platforms and services, based on the example of 'TikTok'

on the basis of the EU and PRC legal framework

Today, sophisticated technological systems are being deployed in modern appliances, where they serve as aids in data processing. These are novelties that many people have not yet had the opportunity to reflect on. Hence, perplexity and certain issues arise pertaining to the general understanding of how these algorithms work, as well as to the norms, compliant with the rule of law, that competent IT specialists have to take into account while elaborating high-tech and sophisticated AI mechanisms.

Keywords: AI algorithms, AI regulation, content moderation, civil liability, legislation on AI, EU and Chinese legal systems, novel relations. 

Formulation of the relevance of the scientific article. In a world replete with innovations, Artificial Intelligence has taken on various forms that may be either visible or hidden, pursuing its main aim – simplifying the analysis of data, solving particular problems, improving the overall performance of inventions, and, unequivocally, helping people. The latter presumes serving a person and giving them more opportunities to focus on the tasks a human being may want to give a greater priority.

Hence, we become witnesses of the expansion of AI-powered novelties that present a wide range, varying from smart houses and vehicles with different levels of automation to applications that can be downloaded to virtually any device. However, the relevant attention has not yet been given to the establishment of basic rules for the formation of specific AI algorithms. Thus, as a part of my Scientific doctrine on AI implementation into the worldwide legislation (Scientific doctrine on AI implementation into the worldwide legislation by Polina Prianykova), it is significant to put an emphasis not only on elucidating the nature of AI algorithms and the purpose of their deployment, and on substantiating the tendency of their increasing popularity among miscellaneous modern companies, but also to stipulate the urgent necessity of the ad-hoc regulation of AI algorithms, inter alia in applications, which represents the central objective of this scientific work.

Recent research and publication analysis. As the subject we shed light on is quite comprehensive, legislation on AI is analyzed in parallel with the works of respected scholars and journalists. Referring to our previous scientific research [1], we enhance the understanding of how fundamental AI algorithms function, basing their differentiation on the ‘Final Report of a CEPS Task Force’ [2]. The legal frameworks of the European Union and the People’s Republic of China are considered. We have primarily focused on the explicit example of an online platform that has implemented certain AI mechanisms and on the ways similar online services shall be supervised and regulated in compliance with conventional law principles.

Presentation of the main body of the article. To begin with, it is essential to elaborate on the basis of AI’s functioning and on what lies within this complex structure, especially one of its most crucial and inseparable parts – the set of AI algorithms. It is self-evident that an algorithm is a specific aggregation of instructions that have to be followed; however, when AI is involved, the subject gains complexity, as those basic rules have to be applied to solving more intricate issues.

As prescribed in the Final Report of the Centre for European Policy Studies Task Force ‘Artificial Intelligence and Cybersecurity’, AI algorithms can be classified into two main categories – symbolic and non-symbolic. This taxonomy implies that symbolic AI systems work on certain preprogrammed principles that are not altered, as the systems are not trained per se. Non-symbolic AI algorithms, in turn, are divided into two types, depending on how the mechanisms of Machine Learning are applied. So-called ‘static ML systems’ are trained only before their deployment; they change and evolve while being taught, but after implementation these systems are crystallized and presumed to act according to the principles programmed afore. In contrast, non-symbolic ‘evolving ML systems’ are more flexible, attaining new data and adjusting to relevant and upcoming situations [2].
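The contrast between the two non-symbolic categories can be illustrated with a minimal sketch. The class names and the toy ‘training’ rule below are hypothetical, chosen solely to show the behavioural difference the CEPS taxonomy describes; they do not model any real production system.

```python
class StaticMLSystem:
    """Trained before deployment; 'crystallized' afterwards."""
    def __init__(self):
        self.weights = {}
        self.deployed = False

    def train(self, examples):
        if self.deployed:
            # a static system must not change after implementation
            raise RuntimeError("static system: no training after deployment")
        # toy 'training': remember the label seen for each input
        self.weights = {x: y for x, y in examples}

    def deploy(self):
        self.deployed = True

    def predict(self, x):
        return self.weights.get(x, 0)


class EvolvingMLSystem(StaticMLSystem):
    """Keeps adjusting to new data even after deployment."""
    def observe(self, x, y):
        # evolving systems continue attaining new data in operation
        self.weights[x] = y


static = StaticMLSystem()
static.train([("cat", 1), ("dog", 0)])
static.deploy()                      # from now on, its behaviour is fixed

evolving = EvolvingMLSystem()
evolving.train([("cat", 1), ("dog", 0)])
evolving.deploy()
evolving.observe("bird", 1)          # allowed: it keeps learning

print(static.predict("cat"))         # 1
print(evolving.predict("bird"))      # 1
```

In this sketch, any attempt to retrain the static system after `deploy()` raises an error, while the evolving system accepts new observations at any time – exactly the distinction the report draws between the two ML types.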

Lending support to the notion enlightened in our previous research ‘Civil liability for the use of electronic forms and mechanisms of AI, inter alia in the sphere of transport’ [1], explicitly stated standards of AI’s development and functioning have to be formulated and realized, bearing in mind the rules implemented into the mechanisms which constitute its core. Moreover, different forms of liability, including the civil one, for any violations of law caused by algorithmic malfunctions must be transparently entrenched.

Focusing on some precedents of AI’s specific behavior, we may see why a granular approach towards the regulation of AI has to be applied, taking due notice of the algorithms’ standards.

Recommendations are vivid examples of how the aforementioned mechanisms attract more and more consumers to certain online services. These platforms vary from websites that offer products for purchase to those that demonstrate videos or publications to the user’s taste. At the heart of the apps’ rising popularity lies an efficient AI algorithm that matches the content with the user’s preferences.

For instance, ‘TikTok’, renowned for both admiring and negative reviews, was developed by the Chinese Internet entrepreneur Zhang Yiming, one of the founders of ‘ByteDance Ltd.’, which, incidentally, aims ‘to combine the power of Artificial Intelligence with the growth of mobile Internet to revolutionize the way people consume and receive information’. The software developers seem to have done their best to make the app viral, as the AI mechanisms are triggered to select personalized content from the cold start. Thus, it is not surprising that ‘TikTok’s’ algorithm is also called the ‘secret sauce’ behind its global success [3]. The app also has its own updated Community Guidelines, claiming to give priority to ‘safety, diversity, inclusion, and authenticity’ [4].

As a matter of course, if a natural person wants to become a ‘TikTok’ user, they have to consent to the terms of its Privacy Policy and further comply with the Community Guidelines. In addition, the service can be used by businesses or entities [5]. Content moderation is asserted to be a mix of human and technological interference. Moreover, it is highlighted that the platform ‘does not enable activities that violate laws or regulations’, although it is important to note that no references to concrete legislative acts and the principles established therein, for instance, the Universal Declaration of Human Rights, are cited in that regard [4]. Nonetheless, the ‘TikTok’ Transparency Centre claims its philosophy to be ‘informed by the International Bill of Human Rights (which includes the Universal Declaration of Human Rights and the International Labour Organization’s Declaration on Fundamental Principles and Rights at Work) and the United Nations Guiding Principles on Business and Human Rights’ [6].

Nevertheless, the policy seems quite clear and intelligible until we take a closer look at the details of the personal data collected by the service.

According to the service’s Privacy Policy, the platform collects information about its users for a variety of reasons, including ‘personalizing the content you receive and providing you with tailored content that will be of interest to you’, ‘providing you with personalized advertising’ as well as ‘location-based services (where those services are available in your jurisdiction)’ [7]. Another objective is stipulated as ‘informing our algorithms’, which may be beneficial for the users, who are shown preferable content, but, on the other hand, raises great concern when touching upon the subject of cybersecurity and establishing the liability for any violations of human rights that may arise due to a hacked data flow or the inadequate reliability of data protection systems.

The popular service has been distrusted from time to time due to some aspects of its policy, including a high-profile case in one of the European Union’s countries. In the Netherlands, ‘TikTok’ was claimed to violate the privacy of young children. As stated by the European Data Protection Board in terms of the Dutch Data Protection Authority’s (hereinafter referred to as the ‘DPA’) case, ‘by not offering their privacy statement in Dutch, TikTok failed to provide an adequate explanation of how the app collects, processes and uses personal data. This is an infringement of privacy legislation, which is based on the principle that people must always be given a clear idea of what is being done with their personal data’. As a result, a fine of €750,000 (to which ‘TikTok’ objected) was imposed on the service [8].

Nevertheless, when the DPA provided a report on its investigation to the online platform, ‘TikTok’, in its turn, adopted novel options to make the application more trustworthy for children under the age of 16. However, the DPA’s Deputy Chair Monique Verdier remarked that juveniles still have the opportunity to cheat the system by pretending to be older when creating their account. At the same time, the platform introduced parental control over the child’s privacy settings. Overall, the Deputy Chair highlighted that ‘the DPA welcomes the changes TikTok has made’ [8].

Another precedent also quintessentially correlates with the juvenile privacy issue: the claim that the online platform collects the private information of children illegally and that this matter has to be tried in the High Court. The latter was alleged by the former Children’s Commissioner for England, Anne Longfield [9]. The debates started in December 2020, when a 12-year-old girl from England filed a lawsuit against the company, raising the question of children’s data protection. A decision was made not to reveal the identity of the girl, keeping her anonymous in order to ensure she was not going to be cyber-bullied [10]. The suit itself contains certain statements related to the infringement of provisions of the General Data Protection Regulation (GDPR) and the U.K.’s Data Protection Act 1998. Since the filing of the suit, the UK has introduced a ‘Draft Online Safety Bill’, aimed, in particular, at exercising control over private data protection [11].

It is noteworthy to mention recent research in which the app activity on ‘Apple’ devices (with software versions updated to iOS 15.2 and above) was recorded and certain conclusions were deduced [12]. The latter are interconnected with the notion that users are currently not given the possibility to learn what data exactly is shared and how it may be used by third parties. Among social media applications, ‘TikTok’ became one of the leaders in tracking consumer data, especially taking into account the result of 13 out of 14 network contacts coming from third parties. The service refuted the concern of ‘CNBC Make It’, arguing in a further statement that ‘any network contacts went to only four third-party domains, all of which the company says are regularly used by other apps for functions such as network security and user certification, among others’ [13].

 

Nevertheless, in one of the ‘WIRED’ magazine articles, the algorithms of ‘TikTok’ are claimed to be ‘fuelled by data’. And this collected data seems to be the price for the opportunity to be given personalized content. The user’s views, the content of messages, and search history do not exhaust the service’s analysis – the device on which the app is installed, the IP address, and the location are also added to this list [14]. In addition, the ‘Wall Street Journal’ has conducted its own investigation of the ‘TikTok’ algorithm. One of the interviewees, Guillaume Chaslot, underlined that this algorithm ‘can get much more powerful and it can be able to learn your vulnerabilities faster’. What is more, being led by the goal of making the user stay longer on the platform, it may form narrow informational tunnels, sometimes recommending content that is not verified by moderators. These so-called ‘rabbit holes’ may continue posing threats to the users who often rely on the service [15].

As reported by ‘TikTok’ itself, it is striving to be ‘consistent and equitable’ in its enforcement [16]. What is more, since February 8, 2022, the Community Guidelines have been substantially altered, especially in terms of enhancing the level of safety and credibility of the platform. The service explicitly mentions the opening of ‘state-of-the-art cyber incident monitoring and investigative response centers in Washington DC, Dublin, and Singapore this year’. It also underlines that its Fusion Centre monitors possible threats and that the company is cooperating with industry-leading experts in order to strengthen cybersecurity.

Minor safety, which was often called into question, judging by the online community’s previous experience, has undergone certain changes. For instance, the content of account holders who are under 16 has become ineligible for promotion in the so-called ‘For You’ feed. These users also cannot avail themselves of options such as direct messaging or hosting a livestream [4].

Moreover, as the AI algorithm is considered to be sensitive to the user’s preferences, if it pinpoints that content is inappropriate for a specific audience, e.g. elderly people or adolescents, the content is promptly hidden from recommendations. More clarity has also been given to what kinds of posts may violate the Community Guidelines and, consequently, be eliminated from the platform [16].
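The audience-based gating described above can be sketched as a simple filter that runs before ranking. The label names, age thresholds, and scoring field below are hypothetical placeholders; the internals of the platform’s real moderation pipeline are not public and are certainly far more complex.

```python
def eligible_for_feed(content_labels, viewer_age):
    """Return False when content should be hidden from this viewer's
    recommendations entirely, rather than merely demoted."""
    if "adult_only" in content_labels and viewer_age < 18:
        return False
    if "mature_theme" in content_labels and viewer_age < 16:
        return False
    return True


def recommend(candidates, viewer_age):
    # filter for audience appropriateness first, then rank by the
    # engagement score assumed to have been computed upstream
    visible = [c for c in candidates
               if eligible_for_feed(c["labels"], viewer_age)]
    return sorted(visible, key=lambda c: c["score"], reverse=True)


feed = recommend(
    [{"id": 1, "labels": {"mature_theme"}, "score": 0.9},
     {"id": 2, "labels": set(), "score": 0.5}],
    viewer_age=15,
)
print([c["id"] for c in feed])  # [2]
```

The design point the sketch captures is that hiding is a hard gate applied before any personalization: an inappropriate item never enters the ranking at all, regardless of how high its engagement score is.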

At the same time, the notion that recommendations are actually the result of a well-balanced amalgam of human and AI monitoring starts cracking and disintegrating in the eyes of the world under specific cases, inter alia those indicated by the statistics provided by ‘TikTok’ itself. On March 25, 2022, ‘CNN Business’ published an article showing the problems ‘TikTok’s’ content moderators encounter on a daily basis [17].

Reece Young and Ashley Velez, former content moderators of the short-form video platform, claimed that they had to analyze great volumes of data that comprised ‘unfiltered, disgusting and offensive content’. As stated in their complaint, ‘As a result of constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace, [Young and Velez] have suffered immense stress and psychological harm. Plaintiffs have sought counseling on their own time and effort due to the content they were exposed to’. The suit also alleges that the moderators had to sign non-disclosure agreements – supposedly in order ‘to keep inside the horrific things they see while reviewing content’ [17].

The ‘CNN Business’ article stipulates that the moderators are often given only 25 seconds to review a video, notwithstanding the fact that they may even sort a few of them simultaneously. The specific safeguards for moderators are given a negative assessment due to the absence of certain technological mitigations, for instance, blurring the content in order to soften the anxiety and constant stress caused by disturbing images. A spokesperson of the online platform once remarked that ‘TikTok’ offers ‘a range of wellness services so that moderators feel supported mentally and emotionally’. However, the aforementioned lawsuit sets forth, as one of its objectives, that ‘TikTok’ and ‘ByteDance’ fund programs aimed specifically at supporting the mental state of the service’s content moderators [17].

 

According to the recently published enforcement report pertaining to the platform’s Community Guidelines, the videos removed by automation constitute about 33.9%. Hence, it is possible to presume that all the others were moderated by human beings, on whom the greater pressure is still put [18].

What stands as a no less complex issue is the presence of ‘child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder’ content uploaded onto the online service – these atrocities are stated in the complaint filed in a California district court against ‘TikTok’ and ‘ByteDance’. Thus, the violative content moderated pertains not only to civil wrongs but also to criminal offenses. It is vital to note that if the company decided to inform the responsible and competent governmental institutions about such cases (taking into account the geographic location where certain violations appear), the service would become a real safeguard of the provisions it claims to value [6].

What is more, it has not been stipulated in the report whether the videos not properly identified by the technology then underwent human moderation. Hence, experts are not fully able to estimate the exact percentage of videos that were not reviewed by the AI adequately. Such unclarity may become a further burden, undermining the credibility of the innovation and preventing the mitigation of the workers’ stress in the long term.

It should be noted that, in view of possible malfunctioning of the moderation system, the taxonomy of liability for the infliction of moral damage on online services’ users, or even for more detrimental repercussions to them, has to be envisaged and specified, establishing liability not only from the perspective of civil law but also within the criminal system of laws. Hence, we can conceptually identify the parties that may be liable in such circumstances.

If the AI algorithm and the control over its functioning are centrally organized – a special manual for its training or software implementation is adopted, and these processes depend on a certain department or on the organization itself that represents the content provider – the legal entity has to be liable for the not-promptly-liquidated law infringements that arise due to the system’s unacceptable performance. The AI-powered software then has to be eliminated or retrained.

However, if a natural person, e.g. an engineer or a software developer, has not complied with the promulgated instructions on how the AI systems are prescribed to be trained, and another natural person in charge of verifying compliance with the rules has not fulfilled the relevant commitments, the liability may be joint and several. It is also significant to establish liability for the content makers and for those who become distributors of the violative content – thus, the liability presumed might be even more labyrinthine and compound.

As the application was developed by a Chinese company, it is interesting to focus on the specificity of the two versions of ‘TikTok’, which are often claimed to be much dissimilar. The Chinese counterpart of the application is called ‘Douyin’ and, although it seems identical to its global analogue, it is not only the preferences of the users that shape the content rising in popularity. Joe Rogan, one of the famous American podcasters and sportsmen, is convinced that the notions Chinese ‘Douyin’ fosters are based on education rather than mere entertainment, putting emphasis on the ‘idea of engineering a society of more accomplished, more successful people’ [19]. It is important to shed light on the fact that this app has adopted a ‘youth mode’, a special feature that offers new content, ranging from art exhibitions to science experiments, with its main aim – ‘to inspire’. Moreover, the mode also prescribes that children under 14 not use the app from 10 p.m. to 6 a.m.; they also cannot spend more than 40 minutes per day on the app. However, the aforementioned policy applies only to those users who have registered under their real names and entered their actual age [20]. Hence, obtaining a high level of accuracy in the restrictions imposed still has certain points of growth.
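The reported ‘youth mode’ limits lend themselves to a compact rule sketch. The function name and structure below are hypothetical; only the thresholds (curfew from 10 p.m. to 6 a.m. and the 40-minute daily quota for users under 14) restate what the cited reporting describes, and, as noted above, the real system can only enforce them against the age the user declares.

```python
from datetime import time

CURFEW_START = time(22, 0)   # 10 p.m.
CURFEW_END = time(6, 0)      # 6 a.m.
DAILY_LIMIT_MIN = 40         # minutes per day

def youth_mode_allows(age, now, minutes_used_today):
    """Return True if a session may continue under the reported rules."""
    if age >= 14:
        return True                      # youth mode does not apply
    # the curfew window wraps past midnight, hence the 'or'
    in_curfew = now >= CURFEW_START or now < CURFEW_END
    if in_curfew:
        return False
    return minutes_used_today < DAILY_LIMIT_MIN

print(youth_mode_allows(13, time(23, 30), 10))  # False: curfew
print(youth_mode_allows(13, time(15, 0), 39))   # True
print(youth_mode_allows(13, time(15, 0), 40))   # False: daily limit reached
```

The sketch makes the article’s caveat concrete: the `age` parameter is whatever the account declares, so a child who registers as older bypasses every branch of the check.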

 

Such strategies concerning the establishment of explicit standards for children using various online platforms are grounded in the revised ‘Minor Protection Law’ where, in accordance with article 2, minors are considered to be natural persons under the age of 18. ‘Chapter V’ of this normative legal act is directly interlinked with the provisions on children’s security on the Internet, where article 64 specifies Internet literacy (which comprises the minors’ awareness and ability to use the world wide web in a scientific, civilized, safe and rational manner). These provisions are also supported by the encouragement of Internet literacy from the parental side in article 71 [21].

It is also stipulated in article 66 that competent governmental institutions are responsible for strengthening the supervision and audit of safety measures in online services, ensuring the security of the Internet environment for minors. In compliance with article 68, press and publication, educational, health, culture and tourism, and cybersecurity departments, etc. have to carry out awareness-raising campaigns to contribute to Internet addiction prevention. What is more, the measures that may be taken to solve the latter problem have to be scientific and reasonable, not infringing on the physical and mental health of the minors addicted. Educational institutions such as schools are obliged to notify the minor’s parents promptly in case they discover that the underage individual is addicted to the Internet. Jointly, they have to educate and guide the minor student to resume the normal pace of living and studying [21].

Service providers and products aimed to be available on the Internet, becoming real-working marketing ‘bonanzas’ in the long term, have to meet the requirements outlined in the corresponding provisions. For instance, online games can only be operated after being approved in conformity with the law; minors shall not be provided with game services from 10 p.m. to 8 a.m. the next day (article 75) [21].

Article 77 sets forth that no organization or individual may engage in cyberbullying behaviors against minors. The aggrieved and their parents or other guardians have the right to notify network service providers about such incidents, while the latter have to take relevant measures to solve the problem [21].

 

The assertion that emphasis should be put on correlates with the notion of informing governmental institutions, and more specifically the public security organs, if the online service provider discovers that users of their network commit illegal and criminal acts against natural persons or legal entities. Not only do the content and the accounts of perpetrators have to be eliminated, but the relevant repercussions their acts entail have to be promptly and legally assessed, and proportional responsibility has to be borne. Moreover, the latter shall be determined in accordance with the real illegal act that may have been photographed or recorded, as well as with the content that may have caused mental harm, or mental harm that further provoked physical harm, to natural persons. Thus, the liability may be considered from the civil and criminal perspectives. Similar preventative measures are proclaimed in the ‘Minor Protection Law’ of the People’s Republic of China. Unequivocally, it encompasses and protects the rights of minors, highlighting in the corresponding article 80 that if network service providers discover that users use their network services to commit illegal and criminal acts against minors, they shall immediately stop providing network services to the users, keep relevant records, and report to the public security organs [21]. It is also vital to note that the provision mentioned afore shall cover the cases of commitment of illicit acts against all natural persons in order to expand the stratum of people protected. If information and details concerning law violations have been concealed from the government by online service providers or their employees, they shall be held accountable in conformity with the current legislation.
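The three duties article 80 imposes on a provider – stop the service, keep relevant records, report to the public security organs – can be sketched as one handler. Everything below is a hypothetical illustration: the data structures and the `report_to_authorities` hook are placeholders for whatever notification channel the law actually requires.

```python
from datetime import datetime, timezone

audit_log = []         # 'keep relevant records'
blocked_users = set()  # 'stop providing network services'

def report_to_authorities(record):
    # placeholder for the legally required notification channel
    pass

def handle_detected_offence(user_id, content_id, description):
    """Apply the three statutory duties in order, atomically per incident."""
    blocked_users.add(user_id)                 # 1. stop the service
    record = {
        "user": user_id,
        "content": content_id,
        "offence": description,
        "detected_at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(record)                   # 2. keep the record
    report_to_authorities(record)              # 3. report to the organs
    return record

rec = handle_detected_offence("u42", "v987", "illegal content against a minor")
print("u42" in blocked_users)  # True
```

The point of ordering the steps this way is that blocking and record-keeping do not depend on the external report succeeding, so an unreachable reporting channel cannot leave the offending account active or the evidence unlogged.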

Hence, it would become an efficient option to develop AI algorithms trained under the supervision of cybersecurity organs so as to comply with fundamental law principles such as legality and respect for human rights. Additionally, the algorithms shall be trained and, even if they evolve on their own, have to be systematically audited in accordance with certain stipulated and globally accepted standards.

Synthesizing the information set out above in our scientific article, it is significant to note that AI algorithms will be constantly upgrading, but only people are able to set particular boundaries and, surely, channel them towards the objective they were once elaborated for – ‘increasing human well-being’, as stated in the ‘Artificial Intelligence Act’, a proposal for the regulatory framework of the innovation divulged by the European Commission in April 2021 [22].

 

By analyzing the sophisticated mechanism of the ways the relevant technology is being implemented into ‘TikTok’, an application with more than 1 billion active users as of January 2022 [23], we have observed how great volumes of data are monitored and what points of growth, especially in the law aspects, miscellaneous novel online platforms and services have.

 

It would also contribute to the future to establish global standards which would prescribe what exact notions shall be promoted online, pertaining to a healthy lifestyle, the aspiration to develop pioneering projects, and the encouragement of knowledge and humanness. The evolution of Artificial Intelligence somehow resembles the formative years of every human being – failures are made as well as certain discoveries achieved. However, the price of such mistakes is too high, bearing in mind the fact that people’s lives are at stake. As the words of Stuart Russell, a British computer scientist known for his comprehensive research on AI, were once paraphrased by ‘Quanta Magazine’ – ‘in teaching robots to be good, we might find a way to teach ourselves’ [24] – the gist can be deduced that we have to be ready to educate, to become teachers of novelties, and at the same time to nurture better versions of ourselves, bearing in mind morality and adamantine law principles.

According to the ‘Draft Opinion’ of the Committee on Legal Affairs of March 2, 2022 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence and amending certain Union Legislative Acts, our views on the sophisticated mechanisms’ functioning are supported by the following statement: ‘clear rules supporting the development of AI systems should be laid down, thus enabling a European ecosystem of public and private actors creating AI Systems in line with the European values’ [25]. Hence, the granular approach toward regulating AI has to start being applied to AI’s kernel – the algorithms – by establishing common rules of their functioning; alterations to the global legislation also have to be implemented, considering the regulation of the novel relations that arise not only in the use of modern devices such as automated vehicles, but also in online services, where civil and, in particular cases, even criminal liability shall be prescribed.

References:

 

1. ‘Civil Liability for the use of electronic forms and mechanisms of AI, inter alia in the sphere of transport’, Polina Prianykova – URL: https://prianykovabusiness.wixsite.com/defender/civil-liability-for-the-use-of-electronic-forms-and-mechanisms-of-ai-in-the-sphere-of-transport – (Accessed on 14 April 2022).

2. ‘Artificial Intelligence and Cybersecurity. Technology, Governance and Policy Challenges. Final Report of a CEPS Task Force’, Lorenzo Pupillo, Stefano Fantin, Afonso Ferreira, Carolina Polito – URL: https://www.ceps.eu/wp-content/uploads/2021/05/CEPS-TFR-Artificial-Intelligence-and-Cybersecurity.pdf – (Accessed on 14 April 2022).

3. ‘Inside TikTok’s Compelling Recommendation Engine’, Avi Gopani – URL: https://analyticsindiamag.com/inside-tiktoks-compelling-recommendation-engine/ – (Accessed on 14 April 2022).

4. ‘TikTok’ Community Guidelines – URL: https://www.tiktok.com/community-guidelines?lang=en – (Accessed on 14 April 2022).

5. ‘TikTok’ Terms of Service – URL: https://www.tiktok.com/legal/terms-of-service?lang=en – (Accessed on 14 April 2022).

6. ‘TikTok’ Transparency Centre – URL: https://www.tiktok.com/transparency/en/

7. ‘TikTok’ Privacy Policy – URL: https://www.tiktok.com/legal/privacy-policy-row?lang=en – (Accessed on 14 April 2022).

8. ‘Dutch DPA: TikTok fined for violating children’s privacy’. European Data Protection Board – URL: https://edpb.europa.eu/news/national-news/2021/dutch-dpa-tiktok-fined-violating-childrens-privacy_en – (Accessed on 14 April 2022).

9. ‘TikTok hit with consumer, child safety and privacy complaints in Europe’, Natasha Lomas – URL: https://techcrunch.com/2021/02/16/tiktok-hit-with-consumer-child-safety-and-privacy-complaints-in-europe/ – (Accessed on 14 April 2022).

10. ‘TikTok faces legal action from 12-year-old girl in England’, ‘BBC News’ – URL: https://www.bbc.com/news/technology-55497350 – (Accessed on 14 April 2022).

11. ‘Draft Online Safety Bill’, the Minister of State for Digital and Culture – URL: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985033/Draft_Online_Safety_Bill_Bookmarked.pdf – (Accessed on 14 April 2022).

12. ‘New Research Across 200 iOS Apps Hints that Surveillance Marketing is Still Going Strong’, Brian Klais – URL: https://app.urlgeni.us/blog/new-research-across-200-ios-apps-hints-surveillance-marketing-may-still-be-going-strong – (Accessed on 14 April 2022).

13. ‘TikTok shares your data more than any other social media app — and it’s unclear where it goes, study says’, Tom Huddleston Jr. – URL: https://www.cnbc.com/2022/02/08/tiktok-shares-your-data-more-than-any-other-social-media-app-study.html – (Accessed on 14 April 2022).

14. ‘All the ways TikTok tracks you and how to stop it’, Kate O’Flaherty – URL: https://www.wired.co.uk/article/tiktok-data-privacy – (Accessed on 14 April 2022).

15. ‘Inside TikTok’s Algorithm: A WSJ Video Investigation’, ‘The Wall Street Journal’ – URL: https://www.wsj.com/articles/tiktok-algorithm-video-investigation-11626877477 – (Accessed on 14 April 2022).

16. ‘Strengthening our policies to promote safety, security, and well-being on TikTok’, Cormac Keenan – URL: https://newsroom.tiktok.com/en-us/strengthening-our-policies-to-promote-safety-security-and-wellbeing-on-tiktok – (Accessed on 14 April 2022).

17. ‘TikTok hit by another lawsuit over working conditions for its content moderators’, Claire Duffy – URL: https://edition.cnn.com/2022/03/25/tech/tiktok-moderators-second-lawsuit/index.html – (Accessed on 14 April 2022).

18. ‘TikTok’ Community Guidelines Enforcement Report – URL: https://www.tiktok.com/transparency/en-us/community-guidelines-enforcement-2021-3/ – (Accessed on 14 April 2022).

19. ‘Joe Rogan explains why TikTok in China is ‘better’ than the US’, Alex Tsiaoussidis – URL: https://www.dexerto.com/entertainment/joe-rogan-explains-why-tiktok-in-china-is-better-than-the-us-1741211/ – (Accessed on 14 April 2022).

20. ‘The Chinese version of TikTok is limiting kids to 40 minutes a day’, Diksha Madhok – URL: https://edition.cnn.com/2021/09/20/tech/china-tiktok-douyin-usage-limit-intl-hnk/index.html

21. ‘中华人民共和国未成年人保护法’ or ‘Minor Protection Law’, Standing Committee of the National People's Congress of the People's Republic of China – URL: https://gkml.samr.gov.cn/nsjg/bgt/202106/t20210610_330495.html

22. ‘Artificial Intelligence Act’, European Commission – URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 – (Accessed on 14 April 2022).

23. ‘TikTok Statistics – 63 TikTok Stats You Need to Know [2022 Update]’, Werner Geyser – URL: https://influencermarketinghub.com/tiktok-stats/#toc-1 – (Accessed on 14 April 2022).

24. ‘Artificial Intelligence Will Do What We Ask. That’s a Problem’, Natalie Wolchover – URL: https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/ – (Accessed on 14 April 2022).

25. Draft Opinion on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, Committee on Legal Affairs – URL: https://www.europarl.europa.eu/doceo/document/JURI-PA-719827_EN.pdf – (Accessed on 14 April 2022).

Officially Published in April 19-22, 2022, Madrid, Spain (Table of Contents, №49)

https://isg-konf.com/wp-content/uploads/2022/04/Multidisciplinary-academic-notes.-Science-research-and-practice.pdf

© Polina Prianykova. All rights reserved.
