IEEE Access (Jan 2023)

Effects of Sim2Real Image Translation via DCLGAN on Lane Keeping Assist System in CARLA Simulator

  • Jinu Pahk,
  • Jungseok Shim,
  • Minhyeok Baek,
  • Yongseob Lim,
  • Gyeungho Choi

DOI
https://doi.org/10.1109/ACCESS.2023.3262991
Journal volume & issue
Vol. 11
pp. 33915 – 33927

Abstract

Autonomous vehicle (AV) simulation using a virtual environment has the advantage of being able to test algorithms in various scenarios with reduced resources. However, a visual gap may exist between the virtual environment and the real world. In this paper, to mitigate this gap, we trained Dual Contrastive Learning Generative Adversarial Networks (DCLGAN) to realistically convert images from the CARLA simulator and then evaluated the effect of the Sim2Real conversion, focusing on the lane keeping assist system (LKAS). Moreover, to avoid cases where lanes are distorted by the DCLGAN translation, we found the optimal training hyperparameters using feature similarity (FSIM). After training, we built a system that connects the CARLA simulator, DCLGAN, and the AV in real time. We then collected data and analyzed them in four ways. First, image realism was measured with the Fréchet Inception Distance (FID), which we quantitatively verified to reflect lane characteristics; the CARLA images translated by DCLGAN had smaller FID values than the original images. Second, lane segmentation accuracy with ENet-SAD was improved by DCLGAN. Third, on the curved route, the vehicle using DCLGAN drove closer to the center of the lane and had a higher success rate. Lastly, on the straight route, DCLGAN improved the ability to restore the vehicle to the lane center after a deviation, to a degree comparable to reality. These results indicate that the proposed method can help mitigate the gap between simulation and the real world.
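The abstract reports image realism via the Fréchet Inception Distance. The sketch below is not the authors' code; it is a minimal illustration of the standard FID formula, assuming Inception-v3 feature vectors have already been extracted for real dash-cam frames, original CARLA frames, and DCLGAN-translated frames. The array names (real_feats, carla_feats, dclgan_feats) are hypothetical.

```python
# Minimal FID sketch (illustrative, not the paper's implementation):
# FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 * (C_r C_f)^{1/2})
# computed over pre-extracted Inception feature vectors.
import numpy as np
from scipy import linalg


def frechet_inception_distance(feats_real: np.ndarray,
                               feats_fake: np.ndarray) -> float:
    """Compare two sets of feature vectors of shape (n_images, feat_dim)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)

    # Matrix square root of the covariance product; drop tiny imaginary
    # parts introduced by numerical error.
    cov_sqrt, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(cov_sqrt):
        cov_sqrt = cov_sqrt.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_sqrt))


# Usage (hypothetical feature arrays): a lower FID for the translated set
# would indicate the DCLGAN output is statistically closer to real images.
# fid_original   = frechet_inception_distance(real_feats, carla_feats)
# fid_translated = frechet_inception_distance(real_feats, dclgan_feats)
```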

Keywords