Automatic grammatical tagger for a Spanish–Mixtec parallel corpus
Hermilo Santiago-Benito,
Diana-Margarita Córdova-Esparza,
Noé-Alejandro Castro-Sánchez,
Juan Terven,
Julio-Alejandro Romero-González,
Teresa García-Ramirez
Affiliations
Hermilo Santiago-Benito
Facultad de Informática, Universidad Autónoma de Querétaro, Av. de las Ciencias S/N, Campus Juriquilla, C.P. 76230, Querétaro, Mexico
Diana-Margarita Córdova-Esparza
Facultad de Informática, Universidad Autónoma de Querétaro, Av. de las Ciencias S/N, Campus Juriquilla, C.P. 76230, Querétaro, Mexico; Corresponding author.
Noé-Alejandro Castro-Sánchez
Centro Nacional de Investigación y Desarrollo Tecnológico, Tecnológico Nacional de México, Interior Internado Palmira S/N, Palmira, C.P. 62493, Cuernavaca, Morelos, Mexico
Juan Terven
CICATA - Unidad Querétaro, Instituto Politécnico Nacional, Cerro Blanco No. 141, Col. Colinas del Cimatario, C.P. 76090, Querétaro, Mexico
Julio-Alejandro Romero-González
Facultad de Informática, Universidad Autónoma de Querétaro, Av. de las Ciencias S/N, Campus Juriquilla, C.P. 76230, Querétaro, Mexico
Teresa García-Ramirez
Facultad de Informática, Universidad Autónoma de Querétaro, Av. de las Ciencias S/N, Campus Juriquilla, C.P. 76230, Querétaro, Mexico
In this work, we developed the first intelligent automatic grammatical tagger for a Spanish–Mixtec parallel corpus in Mexico. The proposed tagger was built in multiple phases. We began by collecting a Spanish–Mixtec parallel corpus of 12,300 sentences. We then tokenized the corpus at the word level, removing empty lines, duplicate sentences, and empty terms from the texts; identified word units such as multiword expressions and compound words; and defined word classes, specifying mandatory, recommended, and optional characteristics according to the EAGLES group. Based on EAGLES, we established a standard for annotating words built on three elements: attribute, value, and code. Finally, we produced synthetic Mixtec tags using GPT-4 and GPT-4o, and manual tags using alignment, conditional random fields (CRF), and BERT models. We manually annotated 600 sentences, totaling 2,800 words, and semi-automatically annotated 3,000 additional sentences using GPT-4o with few-shot prompting. We trained multiple models for automatic grammatical tagging, achieving a precision of 0.74 and a recall of 0.80.
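As a rough illustration of the corpus clean-up and word-level tokenization step described above, the following Python sketch assumes a hypothetical tab-separated file ("es_mix_parallel.tsv") with one Spanish–Mixtec sentence pair per line; the file name and format are illustrative and not taken from the corpus described here.

# Minimal sketch of the clean-up step: drop empty lines, empty terms, and
# duplicate sentence pairs, then tokenize at the word level.
# The file name and TAB-separated layout are assumptions for illustration.

def load_and_clean(path: str) -> list[tuple[list[str], list[str]]]:
    pairs, seen = [], set()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = [p.strip() for p in line.rstrip("\n").split("\t")]
            if len(parts) != 2 or not all(parts):    # skip empty lines / empty terms
                continue
            key = tuple(parts)
            if key in seen:                          # skip duplicate sentence pairs
                continue
            seen.add(key)
            es, mix = parts
            pairs.append((es.split(), mix.split()))  # whitespace tokenization (simplified)
    return pairs

Likewise, a minimal sketch of a CRF-based tagger using the sklearn-crfsuite library is given below; the feature template, tokens, and EAGLES-style tag codes are placeholders, not the features or annotations actually used in the study.

# Minimal CRF tagging sketch with sklearn-crfsuite; features, tokens, and
# tag codes below are placeholders, not the study's actual configuration.
import sklearn_crfsuite

def word_features(sent: list[str], i: int) -> dict:
    w = sent[i]
    return {
        "word.lower": w.lower(),
        "suffix3": w[-3:],
        "is_first": i == 0,
        "prev.lower": "" if i == 0 else sent[i - 1].lower(),
    }

def sent_features(sent: list[str]) -> list[dict]:
    return [word_features(sent, i) for i in range(len(sent))]

# Toy training data: one tokenized sentence with illustrative EAGLES-style codes.
X_train = [sent_features(["palabra1", "palabra2"])]
y_train = [["NCFS000", "VMIP3S0"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))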