Journal of Geophysical Research: Machine Learning and Computation (Jun 2025)

FaultVitNet: A Vision Transformer Assisted Network for 3D Fault Segmentation

  • Chao Li,
  • Sergey Fomel,
  • Yangkang Chen,
  • Robin Dommisse,
  • Alexandros Savvaidis

DOI
https://doi.org/10.1029/2024JH000488
Journal volume & issue
Vol. 2, no. 2
pp. n/a – n/a

Abstract

Fault detection and identification are pivotal in seismic interpretation, benefiting reservoir characterization and hydrocarbon exploration. Classic fault segmentation methods are mainly based on seismic attributes. With the rapid growth of computational power, numerous deep learning (DL) methods have been proposed to improve fault segmentation performance. However, because of its limited receptive field, a convolutional neural network (CNN) inherently emphasizes local information, which risks breaking the global continuity of faults and degrading detection accuracy. To overcome this problem, we propose an improved vision transformer and incorporate it into a classic CNN to strengthen its ability to detect complex faults. The proposed vision-transformer-assisted network (FaultVitNet) exploits a hybrid attention mechanism for feature extraction, enabling the network to capture the global distribution of faults. Moreover, because binary cross-entropy (BCE) loss is heavily biased for fault segmentation, we combine BCE and Dice loss to alleviate the influence of the imbalance between zero and non-zero labels on parameter optimization during training. We train the network on synthetic data with data augmentation for improved generalization and accuracy. Compared with classic CNN-based networks, FaultVitNet tracks faults more completely, and numerical examples validate its superior performance.
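
The combined BCE + Dice objective mentioned in the abstract can be sketched as follows. This is a minimal, hedged PyTorch illustration: the equal 1:1 weighting, the smoothing constant, and the class/function names are assumptions for demonstration, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class BCEDiceLoss(nn.Module):
    """Combined BCE + Dice loss for binary (fault / non-fault) voxel segmentation.

    Illustrative sketch only: the paper states that BCE and Dice losses are
    combined to counter the zero / non-zero label imbalance; the weights and
    smoothing term below are assumed values.
    """

    def __init__(self, bce_weight=1.0, dice_weight=1.0, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.dice_weight = dice_weight
        self.smooth = smooth

    def forward(self, logits, targets):
        # BCE term: treats every voxel equally, so it is dominated by the
        # abundant non-fault (zero) voxels.
        bce_loss = self.bce(logits, targets)

        # Dice term: measures overlap between predicted and true fault voxels,
        # which counteracts the zero / non-zero imbalance.
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum()
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum() + targets.sum() + self.smooth
        )
        dice_loss = 1.0 - dice

        return self.bce_weight * bce_loss + self.dice_weight * dice_loss
```

In practice such a loss would be applied to the network's raw 3D output logits and the binary fault labels of the same shape; balancing the two terms differently is a common tuning choice and is not specified in the abstract.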

Keywords