IEEE Access (Jan 2023)

Delamination Depth Evaluation in Near-Surface of GFRP Laminates Based on RQA-MKLSVM Model

  • Wei Guo,
  • Zhaoba Wang,
  • Youxing Chen,
  • Yong Jin,
  • Qizhou Wu

DOI
https://doi.org/10.1109/ACCESS.2023.3262133
Journal volume & issue
Vol. 11
pp. 30908 – 30919

Abstract


The increasing use of glass fiber reinforced polymer (GFRP) laminates in aircraft creates an urgent demand for nondestructive testing to identify defects such as impact damage and delamination in the materials’ near-surface structure. High-frequency ultrasonic testing is an effective nondestructive technique for detecting such defects. However, as delamination depth increases, traditional characterization methods struggle to accurately characterize delamination defects in GFRP laminates. Moreover, conventional defect-depth detection is time-consuming and requires professional discrimination, making it susceptible to inspectors’ skill levels and subjective factors. To address inaccurate feature extraction and low detection efficiency, we propose a new method for delamination depth recognition in the near-surface of GFRP laminates based on the RQA-MKLSVM model, which combines the recurrence quantitative analysis (RQA) feature extraction method with a multi-kernel learning SVM to improve the capability of detecting delamination defects in GFRP laminates. Additionally, an image-enhancement method based on a nonlinear transformation function and the wavelet multiscale product is used to locate the delaminated areas. Results show that the proposed method accurately recognizes delamination defects at depths ranging from 0.8 mm to 4.0 mm, with an average recognition rate of 95.63%. Compared with training models based on the discrete wavelet transform (DWT) and empirical mode decomposition (EMD), the recognition rate of our method is improved by 6.25% and 4.38%, respectively; compared with two single-kernel models, the recognition accuracy is improved by 8.55% and 5.00%, respectively.
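To illustrate the kind of feature extraction the abstract refers to, the sketch below computes two classic RQA measures (recurrence rate and determinism) from a 1-D signal such as an ultrasonic A-scan. This is a minimal, generic illustration only: the embedding parameters (`dim`, `tau`, `eps`) and the specific RQA measure set used in the paper are not given in the abstract, so everything here is an assumption for demonstration purposes.

```python
import numpy as np

def recurrence_matrix(x, dim=3, tau=1, eps=0.1):
    """Binary recurrence matrix of a 1-D signal via time-delay embedding.

    dim, tau, eps are illustrative defaults, not the paper's settings.
    """
    n = len(x) - (dim - 1) * tau
    # Embed the signal into dim-dimensional delay vectors.
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    # Pairwise Euclidean distances between embedded states.
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def rqa_features(R, lmin=2):
    """Two standard RQA measures: recurrence rate and determinism."""
    n = R.shape[0]
    total = R.sum()
    rr = total / n**2  # fraction of recurrent points
    # Determinism: fraction of recurrence points lying on diagonal
    # line segments of length >= lmin.
    diag_pts = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in np.diagonal(R, k):
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_pts += run
                run = 0
        if run >= lmin:
            diag_pts += run
    det = diag_pts / total if total else 0.0
    return rr, det

# Example on a synthetic periodic signal (stand-in for an A-scan segment):
x = np.sin(np.linspace(0, 8 * np.pi, 200))
R = recurrence_matrix(x, dim=3, tau=2, eps=0.2)
rr, det = rqa_features(R)
```

In a full pipeline along the lines the abstract describes, such RQA features would be computed per ultrasonic signal and fed to a multi-kernel SVM classifier that assigns a delamination depth class.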

Keywords