The Astrophysical Journal Supplement Series (Jan 2023)

Pixel-to-pixel Translation of Solar Extreme-ultraviolet Images for DEMs by Fully Connected Networks

  • Eunsu Park,
  • Harim Lee,
  • Yong-Jae Moon,
  • Jin-Yi Lee,
  • Il-Hyun Cho,
  • Kyoung-Sun Lee,
  • Daye Lim,
  • Hyun-Jin Jeong,
  • Jae-Ok Lee

DOI
https://doi.org/10.3847/1538-4365/aca902
Journal volume & issue
Vol. 264, no. 2
p. 33

Abstract

In this study, we suggest a pixel-to-pixel image translation method among similar types of filtergrams, such as solar extreme-ultraviolet (EUV) images. For this, we consider a deep-learning model based on a fully connected network in which all pixels of the solar EUV images are treated independently of one another. We use data from six EUV channels of the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO): three channels (17.1, 19.3, and 21.1 nm) are used as the input data and the remaining three channels (9.4, 13.1, and 33.5 nm) as the target data. We apply our model to representative solar structures (coronal loops on the solar disk and above the limb, a coronal bright point, and a coronal hole) in SDO/AIA data and then determine differential emission measures (DEMs). Our results are as follows. First, our model generates the three EUV channels (9.4, 13.1, and 33.5 nm) with average correlation coefficient values of 0.78, 0.89, and 0.85, respectively. Second, our model generates solar EUV data without boundary effects and with clearer identification of small structures compared to a convolutional neural network (CNN)–based deep-learning model. Third, the DEMs estimated from the data generated by our model are consistent with those estimated using only SDO/AIA channel data. Fourth, for a region in a coronal hole, the DEMs estimated from the data generated by our model are more consistent with those from 50-frame-stacked SDO/AIA data than with those from single-frame SDO/AIA data.
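
A minimal sketch of the pixel-wise fully connected mapping described above, assuming PyTorch; the hidden-layer sizes, activation functions, and training details are illustrative assumptions and are not taken from the paper:

```python
import torch
import torch.nn as nn

# Pixel-to-pixel translation: each pixel's three input-channel intensities
# (17.1, 19.3, 21.1 nm) are mapped to three target-channel intensities
# (9.4, 13.1, 33.5 nm) independently of its neighboring pixels.
class PixelToPixelFCN(nn.Module):
    def __init__(self, hidden=64):  # hidden size is an assumption, not from the paper
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x):
        # x: (batch, 3, H, W) EUV images; flatten the spatial dimensions so
        # that every pixel becomes an independent sample for the FC layers.
        b, c, h, w = x.shape
        pixels = x.permute(0, 2, 3, 1).reshape(-1, c)   # (b*h*w, 3)
        out = self.net(pixels)                          # (b*h*w, 3)
        return out.reshape(b, h, w, 3).permute(0, 3, 1, 2)

# Example usage with random data standing in for normalized SDO/AIA images.
model = PixelToPixelFCN()
fake_aia = torch.rand(1, 3, 128, 128)   # 17.1, 19.3, 21.1 nm channels
generated = model(fake_aia)             # 9.4, 13.1, 33.5 nm channels
print(generated.shape)                  # torch.Size([1, 3, 128, 128])
```

Because the network sees one pixel at a time, there is no receptive field and hence no boundary effect at the image edges, which is consistent with the comparison to the CNN-based model reported in the abstract.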

Keywords