Vision-Based Spacecraft Pose Estimation via a Deep Convolutional Neural Network for Noncooperative Docking Operations
Thaweerath Phisannupawong,
Patcharin Kamsing,
Peerapong Torteeka,
Sittiporn Channumsin,
Utane Sawangwit,
Warunyu Hematulin,
Tanatthep Jarawan,
Thanaporn Somjit,
Soemsak Yooyen,
Daniel Delahaye,
Pisit Boonsrimuang
Affiliations
Thaweerath Phisannupawong
Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Patcharin Kamsing
Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Peerapong Torteeka
Research Group, National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
Sittiporn Channumsin
Astrodynamics Research Laboratory, Geo-Informatics and Space Technology Development Agency (GISTDA), Chonburi 20230, Thailand
Utane Sawangwit
Research Group, National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
Warunyu Hematulin
Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Tanatthep Jarawan
Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Thanaporn Somjit
Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Soemsak Yooyen
Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Daniel Delahaye
Ecole Nationale de l’Aviation Civile, 31400 Toulouse, France
Pisit Boonsrimuang
Faculty of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has conventionally been coupled with dynamic modeling in on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The model was built by repurposing a modified pretrained GoogLeNet and training it on an available Unreal Engine 4-rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to map the input images to the spacecraft's six degrees-of-freedom pose parameters. The experiments compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and a position error of 1.2 m. The attitude prediction accuracy reached 87.93 percent, and the errors in the three Euler angles did not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the trained vision-based model is specific to the environment of the synthetic dataset, it could be trained further to address actual docking operations in the future.
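The weighted Euclidean-based loss referred to above combines a translation error term with a weighted orientation error term. The following is a minimal sketch of one plausible formulation in PyTorch, assuming a PoseNet-style loss with attitude expressed as a unit quaternion; the class name, the weighting factor beta, and the quaternion parameterization are illustrative assumptions and do not reproduce the authors' exact implementation.

```python
import torch
import torch.nn as nn

class WeightedEuclideanPoseLoss(nn.Module):
    """Weighted sum of position and orientation errors (PoseNet-style sketch).

    loss = ||t_pred - t_true||_2 + beta * ||q_pred / |q_pred| - q_true||_2
    """

    def __init__(self, beta: float = 100.0):
        super().__init__()
        self.beta = beta  # relative weight of the orientation term (assumed value)

    def forward(self, t_pred, q_pred, t_true, q_true):
        # Position error: Euclidean distance between predicted and true translations
        pos_err = torch.norm(t_pred - t_true, dim=-1)
        # Orientation error: distance between the normalized predicted quaternion
        # and the ground-truth unit quaternion
        q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)
        ori_err = torch.norm(q_pred - q_true, dim=-1)
        return (pos_err + self.beta * ori_err).mean()


# Example usage with random tensors (batch of 8 poses)
loss_fn = WeightedEuclideanPoseLoss(beta=1.0)
t_pred, t_true = torch.randn(8, 3), torch.randn(8, 3)
q_pred = torch.randn(8, 4)
q_true = torch.nn.functional.normalize(torch.randn(8, 4), dim=-1)
print(loss_fn(t_pred, q_pred, t_true, q_true))
```

The single scalar beta balances the position term (in meters) against the dimensionless orientation term, which is why its value must be tuned to the scale of the scene; this trade-off is the main design choice in a weighted Euclidean pose loss.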