Nature Communications (Nov 2024)

Capturing forceful interaction with deformable objects using a deep learning-powered stretchable tactile array

  • Chunpeng Jiang,
  • Wenqiang Xu,
  • Yutong Li,
  • Zhenjun Yu,
  • Longchun Wang,
  • Xiaotong Hu,
  • Zhengyi Xie,
  • Qingkun Liu,
  • Bin Yang,
  • Xiaolin Wang,
  • Wenxin Du,
  • Tutian Tang,
  • Dongzhe Zheng,
  • Siqiong Yao,
  • Cewu Lu,
  • Jingquan Liu

DOI: https://doi.org/10.1038/s41467-024-53654-y
Journal volume & issue: Vol. 15, no. 1, pp. 1–14

Abstract

Capturing forceful interaction with deformable objects during manipulation benefits applications such as virtual reality, telemedicine, and robotics. Replicating full hand-object states with complete geometry is challenging because object deformations are occluded during contact. Here, we report a visual-tactile recording and tracking system for manipulation, featuring a stretchable tactile glove with 1152 force-sensing channels and a visual-tactile joint learning framework that estimates dynamic hand-object states during manipulation. To overcome the strain interference caused by contact with deformable objects, we propose an active suppression method based on symmetric response detection and adaptive calibration; it achieves 97.6% accuracy in force measurement, a 45.3% improvement. The learning framework processes the visual-tactile sequence and reconstructs hand-object states. We evaluate the system on 24 objects from 6 categories, both deformable and rigid, with an average reconstruction error of 1.8 cm across all sequences, demonstrating a universal ability to replicate human knowledge of manipulating objects with varying degrees of deformability.
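
The abstract does not detail the suppression algorithm. Below is a minimal, hypothetical Python sketch of the general idea, assuming that glove stretch produces a near-symmetric (common-mode) response on mirrored channel pairs while genuine contact forces are localized. The function name suppress_strain, the pairing scheme, the sym_tol threshold, and the per-channel gains are illustrative inventions, not the authors' implementation.

    import numpy as np

    def suppress_strain(readings, pairs, gains, sym_tol=0.05):
        """Hypothetical strain-interference suppression sketch.

        readings: (1152,) raw channel values from the tactile array.
        pairs:    iterable of (i, j) channel indices assumed to respond
                  symmetrically under pure stretch (an assumption here).
        gains:    (1152,) per-channel adaptive calibration gains,
                  assumed to be fitted beforehand.
        """
        corrected = readings.astype(float).copy()
        for i, j in pairs:
            common = 0.5 * (corrected[i] + corrected[j])
            # If the pair responds nearly symmetrically, treat the
            # common mode as strain interference and remove it.
            if abs(corrected[i] - corrected[j]) < sym_tol * (abs(common) + 1e-9):
                corrected[i] -= common
                corrected[j] -= common
        # Adaptive calibration: scale each channel by its fitted gain.
        return gains * corrected

    # Demo with simulated data (all values invented for illustration):
    raw = np.random.rand(1152)                   # raw channel readings
    pairs = [(k, k + 576) for k in range(576)]   # hypothetical mirrored pairing
    gains = np.ones(1152)                        # identity calibration
    forces = suppress_strain(raw, pairs, gains)

The design intuition, under these assumptions, is that stretch deforms mirrored regions of the glove together, so a common-mode signal on a pair is more likely strain than contact; only the residual, localized signal is kept and then rescaled by the calibrated gains.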