Feminismo/s (Jan 2025)
AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva
Abstract
The development of artificial intelligence (AI) is one of the greatest technological revolutions in recent human history. AI technology is widely used in various fields, including education, where AI is both studied as a discipline and used as a tool to overcome social barriers. As with any revolution, however, caution is necessary: the growing use of these new computing systems also entails risks. One of these risks is the reinforcement of gender stereotypes and discrimination against women through linguistic feedback. Through an experimental analysis conducted on two widely used AI-integrated applications –Google Translate and Canva– we investigate linguistic behaviours such as responses to command prompts. The results obtained demonstrate the existence of gender biases in the AI's output, in both textual and visual language. These biases are consequences of the structural inequalities present in society: it is not the technology that is sexist but the dataset on which it is based, which in turn reflects the content produced by users and published on the internet. In a society founded on democracy and equality, it is important to ensure that a technology as widespread as AI does not perpetuate existing stereotypes or become a new way of reinforcing discrimination. From a linguistic perspective, this means paying attention to the linguistic outputs, both textual and visual, that AI provides and scrutinising the dataset it has been trained on. Given their central role in the education of new generations, schools and institutions should prepare students to view the phenomenon critically and provide them with the tools to counter it. This path could start from teaching students how AI mechanisms work and the ethics of technology, and from using inclusive language in the educational context.
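The prompt-based probing the abstract describes can be sketched as a small harness: a system is fed source sentences from a language whose third-person pronoun is unmarked for gender (e.g., Turkish "o"), so any "he"/"she" in the English output is supplied by the system itself. The sketch below is illustrative only: `translate_stub` is a hard-coded stand-in, not the Google Translate API, and its outputs merely mimic the kind of stereotyped defaults the study reports; the occupation list is likewise an assumption for demonstration.

```python
import re

# Turkish occupation nouns (doctor, nurse, engineer, teacher); the
# pronoun "o" in "o bir <occupation>" is gender-neutral in Turkish.
OCCUPATIONS = ["doktor", "hemsire", "muhendis", "ogretmen"]

def make_prompts(occupations):
    # Build one gender-neutral probe sentence per occupation.
    return {occ: f"o bir {occ}" for occ in occupations}

def translate_stub(sentence):
    # Stand-in for a real MT system. The hard-coded outputs imitate
    # stereotyped pronoun defaults purely for illustration; they are
    # NOT actual Google Translate results.
    stereotyped = {
        "doktor": "he is a doctor",
        "hemsire": "she is a nurse",
        "muhendis": "he is an engineer",
        "ogretmen": "she is a teacher",
    }
    occupation = sentence.rsplit(" ", 1)[-1]
    return stereotyped[occupation]

def pronoun_of(english):
    # Record which pronoun the system chose for the unmarked source.
    m = re.match(r"(he|she|they)\b", english.lower())
    return m.group(1) if m else "other"

def run_probe():
    prompts = make_prompts(OCCUPATIONS)
    return {occ: pronoun_of(translate_stub(s)) for occ, s in prompts.items()}

if __name__ == "__main__":
    for occ, pron in run_probe().items():
        print(f"{occ}: {pron}")
```

A real experiment would replace `translate_stub` with calls to the system under test and aggregate pronoun counts per occupation; a systematic skew (e.g., "he" for engineers, "she" for nurses) is the textual-bias signal the article discusses.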
Keywords