IEEE Access (Jan 2024)
Optimizing AI Transformer Models for CO₂ Emission Prediction in Self-Driving Vehicles With Mobile/Multi-Access Edge Computing Support
Abstract
With the increasing prominence of self-driving vehicles, there is a pressing need to accurately estimate their carbon dioxide (CO₂) emissions and evaluate their environmental sustainability. This paper introduces a novel approach that leverages Artificial Intelligence (AI) transformer architectures to predict CO₂ emissions in Society of Automotive Engineers (SAE) Level 2 self-driving cars, surpassing the performance of previous algorithms. After examining and comparing the previously proposed LSTM-based architecture with the proposed transformer architecture (CO2ViT) and identifying their strengths and limitations, we explore the vehicular networking paradigm with the Mobile/Multi-Access Edge Computing (MEC) capabilities of 5G infrastructure to provide the prediction service of the proposed transformer model under different network topologies. Through extensive experimentation and evaluation on a dataset specifically designed for CO₂ emission prediction in self-driving vehicles, we demonstrate the superior predictive capability of the proposed CO2ViT model, which is based on the Vision Transformer (ViT) architecture: it predicts CO₂ emissions 71.14% faster than the previous state-of-the-art model (LSTM) applied to the same problem and achieves a higher R² score of 0.9898, compared with 0.9712 for the LSTM. Furthermore, we deploy an emulated 5G testbed with MEC capabilities to demonstrate the resilience of the proposed Deep Learning (DL) model to network changes and concurrent connections. While delays for 2 to 16 connected vehicles grow linearly, with a maximum delay of 41.01 ms, resource limitations arise with 32 or more cars, reflected in highly variable delays, so the emulated 5G network requires additional physical resources to achieve better performance under high load. The deployed model's inference time over the 5G infrastructure for 64 concurrently connected vehicles is 4.31 ms in scenario A and 8.42 ms in scenario B.
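As an illustrative sketch only (not part of the paper), the R² comparison reported above can be computed from held-out predictions with scikit-learn's r2_score; the arrays and values below are hypothetical placeholders, not the paper's data.

```python
# Illustrative sketch (assumption): computing R^2 for an LSTM baseline and a
# transformer model (CO2ViT) on held-out CO2-emission predictions.
# All values below are hypothetical placeholders.
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical ground-truth CO2 emissions (g/km) and per-model predictions
y_true = np.array([120.3, 98.7, 134.1, 110.5, 101.2])
y_pred_lstm = np.array([118.9, 100.2, 131.0, 112.8, 103.5])   # LSTM baseline
y_pred_co2vit = np.array([120.0, 99.1, 133.5, 110.9, 101.6])  # CO2ViT

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot
print("LSTM   R^2:", r2_score(y_true, y_pred_lstm))
print("CO2ViT R^2:", r2_score(y_true, y_pred_co2vit))
```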
Keywords