Proceedings of the XXth Conference of Open Innovations Association FRUCT (Nov 2024)
Applying Retrieval-Augmented Generation for Academic Discipline Development: Insights from Zero-Shot to Tree-of-Thought Prompting
Abstract
This study evaluates the effectiveness of large language models (LLMs) in generating university course structures, comparing conventional prompting with Retrieval-Augmented Generation (RAG). It presents a comparative analysis across twelve courses using four LLMs: starling-lm-7b-alpha, openchat_3.5, saiga2_13b, and gpt-3.5-turbo, each tested with four distinct prompting approaches. The findings indicate that the choice of prompting technique significantly affects model performance and response variability. The study underscores the importance of selecting appropriate LLMs and prompting strategies to optimize educational outcomes, highlighting RAG's role in improving retrieval accuracy for educational technology applications.
Keywords