Applied Sciences (Aug 2024)

A Cross-Working Condition-Bearing Diagnosis Method Based on Image Fusion and a Residual Network Incorporating the Kolmogorov–Arnold Representation Theorem

  • Ziyi Tang,
  • Xinhao Hou,
  • Xin Wang,
  • Jifeng Zou

DOI: https://doi.org/10.3390/app14167254
Journal volume & issue: Vol. 14, no. 16, p. 7254

Abstract

With the optimization and advancement of industrial production and manufacturing, the application scenarios of bearings have become increasingly diverse and highly coupled. This complexity poses significant challenges for the extraction of bearing fault features, consequently affecting the accuracy of cross-condition fault diagnosis methods. To improve the extraction and recognition of fault features and enhance the diagnostic accuracy of models across different conditions, this paper proposes a cross-condition bearing diagnosis method. This method, named MCR-KAResNet-TLDAF, is based on image fusion and a residual network that incorporates the Kolmogorov–Arnold representation theorem. Firstly, the one-dimensional vibration signals of the bearing are processed using Markov transition field (MTF), continuous wavelet transform (CWT), and recurrence plot (RP) methods, converting the resulting images to grayscale. These grayscale images are then multiplied by corresponding coefficients and fed into the R, G, and B channels for image fusion. Subsequently, fault features are extracted using a residual network enhanced by the Kolmogorov–Arnold representation theorem. Additionally, a domain adaptation algorithm combining multiple kernel maximum mean discrepancy (MK-MMD) and conditional domain adversarial network with entropy conditioning (CDAN+E) is employed to align the source and target domains, thereby enhancing the model’s cross-condition diagnostic accuracy. The proposed method was experimentally validated on the Case Western Reserve University (CWRU) dataset and the Jiangnan University (JUN) dataset, which include the 6205-2RS JEM SKF, N205, and NU205 bearing models. The method achieved accuracy rates of 99.36% and 99.889% on the two datasets, respectively. Comparative experiments from various perspectives further confirm the superiority and effectiveness of the proposed model.
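The abstract's fusion step — normalizing the MTF, CWT, and RP grayscale images, weighting each by a coefficient, and stacking them into the R, G, and B channels — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the per-channel weights, and the use of random arrays as stand-ins for the three transform images are all assumptions for demonstration.

```python
import numpy as np

def fuse_to_rgb(mtf_img, cwt_img, rp_img, weights=(1.0, 1.0, 1.0)):
    """Fuse three grayscale encodings of a vibration signal into one RGB image.

    mtf_img, cwt_img, rp_img: 2-D arrays of equal shape, assumed to come
    from the MTF, CWT-scalogram, and recurrence-plot transforms computed
    elsewhere. `weights` are hypothetical per-channel coefficients; the
    abstract does not state the values the paper uses.
    """
    def normalize(img):
        # Rescale each grayscale image to [0, 1] before weighting.
        img = np.asarray(img, dtype=np.float64)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

    channels = [w * normalize(img)
                for w, img in zip(weights, (mtf_img, cwt_img, rp_img))]
    rgb = np.stack(channels, axis=-1)  # shape (H, W, 3): R=MTF, G=CWT, B=RP
    return np.clip(rgb, 0.0, 1.0)

# Random stand-ins for the three transform images (illustration only).
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(3)]
fused = fuse_to_rgb(*imgs, weights=(0.5, 0.3, 0.2))
print(fused.shape)  # (64, 64, 3)
```

The fused array can then be fed to a CNN-style backbone such as the KAResNet described above; channel ordering and weight choice would need to match the paper's configuration.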

Keywords