Engineering Proceedings (Nov 2022)

Recreating Lunar Environments by Fusion of Multimodal Data Using Machine Learning Models

  • Ana C. Castillo,
  • Jesus A. Marroquin-Escobedo,
  • Santiago Gonzalez-Irigoyen,
  • Marlene Martinez-Santoyo,
  • Rafaela Villalpando-Hernandez,
  • Cesar Vargas-Rosales

DOI
https://doi.org/10.3390/ecsa-9-13326
Journal volume & issue
Vol. 27, no. 1
p. 54

Abstract


The latest satellite infrastructure for data processing, transmission, and reception can be improved by upgrading the tools used to handle the very large amounts of data produced by the many different sensors incorporated in space missions. To develop a better data-processing technique, this paper examines multimodal data fusion using machine learning algorithms and discusses how machine learning models can be used to recreate environments from heterogeneous, multimodal data sets. In particular, for models based on Convolutional Neural Networks (CNNs), the main difficulty is obtaining the large number of training samples the network requires in order to avoid overfitting and underfitting.
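To make the fusion idea concrete, the sketch below shows one common way to combine heterogeneous sensor inputs with a CNN: each modality gets its own encoder and the resulting feature vectors are concatenated before a shared output head. This is only a minimal illustration under assumed input shapes and layer sizes; the abstract does not specify the paper's actual architecture, and all class names, dimensions, and the dropout-based regularization are assumptions, not the authors' method.

# Hypothetical late-fusion sketch for two modalities (image patch + telemetry vector).
# All shapes and layer sizes are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """CNN branch for camera-like imagery (assumed 1x64x64 grayscale patches)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.fc = nn.Linear(32 * 16 * 16, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class TelemetryEncoder(nn.Module):
    """MLP branch for a low-dimensional sensor vector (assumed 8 channels)."""
    def __init__(self, in_dim=8, out_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim))

    def forward(self, x):
        return self.net(x)

class FusionModel(nn.Module):
    """Concatenates per-modality features and predicts a flattened terrain patch."""
    def __init__(self, out_cells=16 * 16):
        super().__init__()
        self.img_enc = ImageEncoder()
        self.tel_enc = TelemetryEncoder()
        self.head = nn.Sequential(
            nn.Linear(64 + 16, 128), nn.ReLU(),
            nn.Dropout(0.5),   # regularization to mitigate overfitting on small data sets
            nn.Linear(128, out_cells),
        )

    def forward(self, image, telemetry):
        fused = torch.cat([self.img_enc(image), self.tel_enc(telemetry)], dim=1)
        return self.head(fused)

# Smoke test with random tensors standing in for real mission data.
model = FusionModel()
img = torch.randn(4, 1, 64, 64)   # batch of 4 image patches
tel = torch.randn(4, 8)           # matching telemetry vectors
print(model(img, tel).shape)      # torch.Size([4, 256])

The overfitting concern raised in the abstract appears in this sketch through the dropout layer and the relatively small encoders; with few training samples, larger fused networks of this kind would memorize the data rather than generalize.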

Keywords