RUDN Journal of Law (Oct 2024)
The use of Artificial Intelligence technologies by internet platforms for the purposes of censorship
Abstract
The issue of abuse in regulating content moderation on internet platforms using artificial intelligence technologies is relatively new in legal science and practice. Regulatory frameworks in this area are still evolving, and enforcement practices have yet to be fully established. The author employs formal-legal, comparative-legal, and historical methods, as well as legal modeling, to analyze the negative consequences of using software systems with artificial intelligence elements for user content moderation. By examining various technological solutions utilized by internet platforms for data collection and processing, the article highlights a potential threat to citizens’ rights to access and share information if the legal relations governing content moderation with artificial intelligence are not significantly improved. It examines evidence suggesting that, in the absence of regulatory constraints and transparency requirements, internet platforms may engage in censorship by removing content based on their own criteria, even when it violates neither the law nor platform guidelines. The author argues that unchecked actions by internet platforms could restrict individuals and political entities from expressing their views, posing a significant threat to democratic principles. By examining Russian, EU, and US laws alongside current trends in internet platform operations, the article concludes that the existing legal frameworks are inadequate and calls for legislative oversight and control over the technologies used for content moderation, including algorithms and artificial intelligence applications.
Keywords