Dianxin kexue (Jun 2024)

Survey on large language models alignment research

  • LIU Kunlin
  • QU Xinji
  • TAN Fang
  • KANG Honghui
  • ZHAO Shaowei
  • SHI Rong

Journal volume & issue
Vol. 40
pp. 173–194

Abstract


With the rapid development of artificial intelligence technology, large language models have been widely applied in numerous fields. However, the potential of large language models to generate inaccurate, misleading, or even harmful content has raised concerns about their reliability. Ensuring that the behavior of large language models is consistent with human values through alignment techniques has therefore become an urgent issue. Recent research progress on alignment techniques for large language models was surveyed: common methods for collecting instruction data and human preference datasets were introduced, research on supervised fine-tuning and alignment adjustment was summarized, commonly used datasets and methods for model evaluation were discussed, and future research directions were outlined.
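To make the "alignment adjustment" step mentioned in the abstract more concrete, the sketch below illustrates one widely used preference-optimization objective, the DPO (Direct Preference Optimization) loss, computed for a single preference pair. This is only an illustrative example under assumed toy inputs (the function name, the beta value, and the example log-probabilities are not from the paper), and it is not claimed to be the specific method studied in the survey.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    Inputs are summed token log-probabilities of the chosen and rejected
    responses under the policy being tuned and under a frozen reference
    model. A lower loss means the policy favors the chosen response more
    strongly, relative to the reference, than the rejected one.
    """
    chosen_margin = policy_logp_chosen - ref_logp_chosen
    rejected_margin = policy_logp_rejected - ref_logp_rejected
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written in a numerically stable form
    return math.log1p(math.exp(-logits)) if logits > -30 else -logits

# Toy example: the policy already slightly favors the chosen response.
loss = dpo_loss(policy_logp_chosen=-12.0, policy_logp_rejected=-15.0,
                ref_logp_chosen=-13.0, ref_logp_rejected=-14.0)
print(f"DPO loss: {loss:.4f}")
```

In practice this per-pair loss would be averaged over a batch of human preference pairs and minimized with gradient descent over the policy model's parameters; the reference model stays frozen.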

Keywords