Applied Mathematics and Nonlinear Sciences (Jan 2024)
GPT Modeling for English Learning Assistance Functions in Non-Native Language Environments
Abstract
In this paper, we design a multi-round English dialogue model based on the pre-trained GPT-2 model and optimize the decoding strategy by replacing greedy search and beam search with Top-k and Top-p sampling. A dialogue-history keyword copy mechanism is then introduced to improve contextual consistency. Building on this GPT-2 multi-round dialogue model, we construct a conversational teaching model for English instruction in non-native language environments and examine its impact on students' theoretical learning and teaching skills. The results show that, after pre-training on the ChatterBot corpus, the GPT-2DH model achieves a BLEU-1 score of 0.356, nearly twice that of the seq2seq baseline, demonstrating the superiority of the proposed model. Compared with traditional teaching methods, the GPT dialogic learning model significantly improved students' theoretical learning performance, English listening and speaking skills, learning attitude (P=0.029<0.05), and extrinsic learning motivation (P=0.046<0.05), but had no significant effect on their instructional design skills or intrinsic learning motivation.
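As an illustration of the decoding strategy mentioned in the abstract, the sketch below shows a standard combined Top-k and Top-p (nucleus) filtering step over next-token logits, assuming a PyTorch implementation; the cutoff values (top_k=40, top_p=0.9) and the placeholder logits tensor are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn.functional as F

def top_k_top_p_filtering(logits, top_k=40, top_p=0.9, filter_value=-float("inf")):
    """Mask next-token logits with Top-k and Top-p (nucleus) filtering.

    Cutoff values are illustrative assumptions, not the paper's settings.
    """
    if top_k > 0:
        # Remove every token whose logit falls below the k-th largest logit
        kth_value = torch.topk(logits, top_k)[0][..., -1, None]
        logits[logits < kth_value] = filter_value
    if top_p < 1.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        # Mark tokens once the cumulative probability exceeds the nucleus threshold
        sorted_mask = cumulative_probs > top_p
        # Shift right so the first token that crosses the threshold is still kept
        sorted_mask[..., 1:] = sorted_mask[..., :-1].clone()
        sorted_mask[..., 0] = False
        logits[sorted_indices[sorted_mask]] = filter_value
    return logits

# Sampling step: filter the logits, then draw from the remaining probability mass
next_token_logits = torch.randn(50257)  # placeholder for GPT-2 vocabulary logits
filtered = top_k_top_p_filtering(next_token_logits.clone())
next_token = torch.multinomial(F.softmax(filtered, dim=-1), num_samples=1)
```

Unlike greedy or beam search, which deterministically follow the highest-scoring continuations, sampling from this truncated distribution keeps responses varied while excluding the low-probability tail that tends to produce incoherent replies.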
Keywords