IEEE Access (Jan 2024)
Semantic Liquid Spray Understanding With Computer-Generated Images
Abstract
Understanding liquid spray is essential for spray applications, including but not limited to designing fuel-efficient engines. Because collecting real-world liquid spray images is challenging, synthetic liquid spray images were generated with a fluid simulation of liquid-jet atomization. Semantic segmentation was chosen to analyze the liquid spray, as it reveals the precise locations of objects in the image. This paper presents a workflow to train a U-Net with a small dataset (only 24 training images) under the constraint that no ground truth is provided. An image is selected from the generated liquid spray images and edited by randomly masking some objects. After the chosen image is annotated, a data augmentation technique that includes rotation and Gaussian smoothing is applied, yielding 24 images for the training set. The original-sized RGB images are fed to the U-Net for training. Motivated by how liquid spray images are acquired in the real world, Gaussian smoothing is explored as an inductive bias: it is incorporated between the convolutional layers of the U-Net to enhance its feature extraction ability. The experimental results show that the segmentation output improves when smoothing is incorporated into the U-Net. By visualizing the convolutional feature maps of the trained U-Net, we find that smoothing makes the convolutions less biased toward texture information. With this workflow, the trained U-Net generalizes well to the test images despite learning from few samples. Code is available at https://github.com/lynerlwl/spray-unet
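The abstract describes inserting Gaussian smoothing between the convolutional layers of a U-Net block. The sketch below illustrates one way this could look in PyTorch; it is not the authors' implementation (see the linked repository for that), and the kernel size, sigma, and block layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kernel2d(kernel_size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel."""
    coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()


class GaussianSmoothing(nn.Module):
    """Fixed (non-trainable) channel-wise Gaussian smoothing via depthwise convolution."""

    def __init__(self, channels: int, kernel_size: int = 5, sigma: float = 1.0):
        super().__init__()
        kernel = gaussian_kernel2d(kernel_size, sigma)
        # One copy of the kernel per channel for a depthwise (grouped) convolution.
        self.register_buffer(
            "weight", kernel.expand(channels, 1, kernel_size, kernel_size).clone()
        )
        self.groups = channels
        self.padding = kernel_size // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.conv2d(x, self.weight, padding=self.padding, groups=self.groups)


class SmoothedDoubleConv(nn.Module):
    """U-Net-style double-convolution block with Gaussian smoothing between the two convolutions
    (hypothetical layout; kernel size and sigma are illustrative assumptions)."""

    def __init__(self, in_ch: int, out_ch: int, sigma: float = 1.0):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            GaussianSmoothing(out_ch, kernel_size=5, sigma=sigma),  # smoothing as inductive bias
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)             # dummy RGB spray image
    print(SmoothedDoubleConv(3, 64)(x).shape)   # torch.Size([1, 64, 256, 256])
```

Because the smoothing weights are registered as a buffer rather than a parameter, they are not updated during training; the block only adds a fixed low-pass step intended to de-emphasize fine texture between the learned convolutions.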
Keywords