Mathematics (Dec 2024)

Large Language Model-Assisted Reinforcement Learning for Hybrid Disassembly Line Problem

  • Xiwang Guo,
  • Chi Jiao,
  • Peng Ji,
  • Jiacun Wang,
  • Shujin Qin,
  • Bin Hu,
  • Liang Qi,
  • Xianming Lang

DOI
https://doi.org/10.3390/math12244000
Journal volume & issue
Vol. 12, no. 24
p. 4000

Abstract

Recycling end-of-life products is essential for reducing environmental impact and promoting resource reuse. In the realm of remanufacturing, researchers are increasingly concentrating on the disassembly line balancing problem (DLBP), particularly on how to allocate work tasks effectively to enhance productivity. However, many current studies overlook two key issues: (1) how to reasonably arrange worker postures during disassembly, and (2) how to allocate disassembly tasks when the environment is not a single type of disassembly line but a hybrid one. To address these issues, we propose a mixed-integer programming model suitable for hybrid linear and U-shaped disassembly lines that also allocates worker postures to alleviate worker fatigue. Additionally, we introduce large language model-assisted reinforcement learning to solve this model, which employs a Dueling Deep Q-Network (Duel-DQN) to tackle the problem and integrates a large language model (LLM) into the algorithm. The experimental results show that, compared to solutions that use reinforcement learning alone, large language model-assisted reinforcement learning reduces the number of iterations required for convergence by approximately 50% while preserving solution quality. This provides new insights into the application of LLMs to reinforcement learning and the DLBP.
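For context on the Duel-DQN mentioned above: the dueling architecture estimates a state-value stream V(s) and an advantage stream A(s, a) separately, then combines them as Q(s, a) = V(s) + (A(s, a) − mean_a A(s, a)), where subtracting the mean advantage makes the decomposition identifiable. The following is a minimal sketch of that aggregation step in plain Python; it is illustrative only and not the authors' implementation.

```python
def dueling_q_values(value, advantages):
    """Combine a state value V(s) and per-action advantages A(s, a)
    into Q-values using the standard dueling aggregation:
        Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + adv - mean_adv for adv in advantages]

# Example: one state with value 2.0 and three candidate actions.
q = dueling_q_values(2.0, [1.0, -1.0, 0.0])
# The mean advantage is 0.0, so Q = [3.0, 1.0, 2.0].
```

In a full Duel-DQN, `value` and `advantages` would come from two heads of a shared neural network, and the resulting Q-values drive epsilon-greedy action selection over disassembly tasks.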

Keywords