Advanced Intelligent Systems (Jan 2024)

Autoencoding a Soft Touch to Learn Grasping from On‐Land to Underwater

  • Ning Guo,
  • Xudong Han,
  • Xiaobo Liu,
  • Shuqiao Zhong,
  • Zhiyuan Zhou,
  • Jian Lin,
  • Jiansheng Dai,
  • Fang Wan,
  • Chaoyang Song

DOI
https://doi.org/10.1002/aisy.202300382
Journal volume & issue
Vol. 6, no. 1

Abstract

Robots play a critical role as the physical agents of human operators in exploring the ocean. However, it remains challenging to grasp objects reliably while fully submerged in a highly pressurized aquatic environment with little visible light, mainly due to fluidic interference with the tactile mechanics between the finger and object surfaces. This study investigates the transferability of grasping knowledge from on‐land to underwater settings via a vision‐based soft robotic finger that learns 6D forces and torques (FT) using a supervised variational autoencoder (SVAE). A high‐framerate camera captures the whole‐body deformations while the soft robotic finger interacts with physical objects on‐land and underwater. Results show that the trained SVAE model learns a series of latent representations of the soft mechanics that transfer from land to water, demonstrating superior adaptation to changing environments compared with commercial FT sensors. Soft, delicate, and reactive grasping enabled by tactile intelligence enhances the gripper's underwater interaction with improved reliability and robustness at a much‐reduced cost, paving the way for learning‐based intelligent grasping to support fundamental scientific discoveries in environmental and ocean research.
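To make the SVAE idea in the abstract concrete, the sketch below shows one plausible way to wire an encoder, a decoder, and a supervised 6D force/torque head together in PyTorch, trained with a reconstruction term, a KL term, and an FT regression term. The 128×128 grayscale input size, the 32‐dimensional latent space, the layer widths, and the loss weights are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal SVAE sketch (assumed architecture, not the paper's exact model):
# image of finger deformation -> latent z -> (reconstructed image, 6D FT estimate).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Convolutional encoder: 1x128x128 image -> flattened feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        feat = 128 * 16 * 16
        self.fc_mu = nn.Linear(feat, latent_dim)
        self.fc_logvar = nn.Linear(feat, latent_dim)
        # Decoder: latent -> reconstructed deformation image
        self.fc_dec = nn.Linear(latent_dim, feat)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Supervised head: latent -> 6D force/torque (Fx, Fy, Fz, Tx, Ty, Tz)
        self.ft_head = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 6)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        recon = self.decoder(self.fc_dec(z).view(-1, 128, 16, 16))
        return recon, self.ft_head(z), mu, logvar


def svae_loss(recon, x, ft_pred, ft_true, mu, logvar, beta=1e-3, lam=1.0):
    # Reconstruction + KL divergence + supervised FT regression (weights are assumptions)
    rec = F.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    sup = F.mse_loss(ft_pred, ft_true)
    return rec + beta * kld + lam * sup
```

Under this formulation, the latent variables are shaped both by reconstructing the finger's whole‐body deformation and by regressing the 6D FT labels, which is one way such representations could remain usable when the same finger is deployed on land and underwater.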

Keywords