Scientific Reports (Oct 2024)
Foundation models assist in human–robot collaboration assembly
Abstract
Human–robot collaboration (HRC) is a novel manufacturing paradigm designed to fully leverage the respective advantages of humans and robots, accomplishing customized manufacturing tasks efficiently and flexibly. However, existing HRC systems lack transfer and generalization capabilities in environment perception and task reasoning. These limitations manifest in two ways: (1) current methods rely on specialized models for scene perception and require retraining when faced with unseen objects; (2) current methods address only predefined tasks and cannot support reasoning over undefined tasks. To address these limitations, this paper proposes a novel HRC approach based on Foundation Models (FMs), including Large Language Models (LLMs) and Vision Foundation Models (VFMs). Specifically, an LLMs-based task reasoning method is introduced, which uses prompt learning to transfer LLMs to the domain of HRC tasks and thereby supports undefined task reasoning. A VFMs-based scene semantic perception method is proposed, integrating multiple VFMs to achieve scene perception without training. Finally, an FMs-based HRC system is developed, comprising perception, reasoning, and execution modules for more flexible and generalized HRC. The superior performance of FMs in perception and reasoning is demonstrated through extensive experiments. Furthermore, the feasibility and effectiveness of the FMs-based HRC system are validated through a part assembly case involving a satellite component model.
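The abstract describes a three-module pipeline (perception, reasoning, execution). The paper does not provide code here; the following is a minimal, self-contained Python sketch of how such a pipeline might be wired together, with hypothetical names (`perceive_scene`, `reason_task`, `execute_plan`) and stubbed outputs standing in for the VFMs-based perception and LLMs-based reasoning components described above.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical data structures standing in for the outputs of the
# perception and reasoning modules (not taken from the paper).
@dataclass
class SceneObject:
    label: str          # open-vocabulary label a VFM might produce (assumed)
    bbox: List[float]   # [x1, y1, x2, y2] in normalized image coordinates

@dataclass
class RobotAction:
    skill: str          # e.g. "pick", "hand_over"
    target: str         # object label the skill acts on

def perceive_scene(image_path: str) -> List[SceneObject]:
    """Placeholder for the VFMs-based perception step; in the paper this
    integrates vision foundation models to label objects without retraining.
    Here it returns a fixed result so the sketch runs without dependencies."""
    return [SceneObject("satellite_panel", [0.10, 0.20, 0.40, 0.60]),
            SceneObject("bolt", [0.50, 0.50, 0.55, 0.58])]

def reason_task(instruction: str, objects: List[SceneObject]) -> List[RobotAction]:
    """Placeholder for the LLMs-based reasoning step: a prompt built from the
    instruction and perceived objects would be sent to an LLM; a fixed plan
    is returned here to keep the example self-contained."""
    prompt = (f"Task: {instruction}\n"
              f"Objects: {[o.label for o in objects]}\n"
              f"Decompose the task into robot actions.")
    _ = prompt  # in a real system, this prompt would be sent to an LLM
    return [RobotAction("pick", "bolt"), RobotAction("hand_over", "bolt")]

def execute_plan(plan: List[RobotAction]) -> None:
    """Placeholder for the execution module mapping actions to robot motions."""
    for step in plan:
        print(f"executing {step.skill} on {step.target}")

if __name__ == "__main__":
    objects = perceive_scene("workspace.png")
    plan = reason_task("Assemble the satellite panel", objects)
    execute_plan(plan)
```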
Keywords