IEEE Access (Jan 2024)

Enhancing Quality Control: A Study on AI and Human Performance in Flip Chip Defect Detection

  • Wannisa Cheamsiri
  • Anuchit Jitpattanakul
  • Paisarn Muneesawang
  • Konlakorn Wongpatikaseree
  • Narit Hnoohom

DOI: https://doi.org/10.1109/ACCESS.2024.3521459
Journal volume & issue: Vol. 12, pp. 197840–197855

Abstract

This study introduces an advanced defect inspection model that uses object detection techniques to identify defects in Flip Chip cross-section images. The model serves as a valuable tool for failure analysis (FA) engineers working with Chip-on-Wafer (CoW) products by enhancing inspection precision and accuracy, saving time and costs, reducing human error, and ensuring reliability. The dataset, provided by an electronics manufacturing service provider in Thailand, is divided into four categories: good bump, head-in-pillow (HIP) defect, non-wetting defect, and solder void defect. High-resolution images were captured with an Olympus BX53M microscope at 1000× magnification, focusing on copper pillar (CP) bumps with a 50-micrometer diameter. To address dataset imbalance, this research applies image augmentation techniques and generative artificial intelligence (AI) to synthesize additional HIP defect images. The experimental setup involved seven image datasets used to train multiple object detection models, including YOLOv5, YOLOv6, YOLOv7, and YOLOv8, resulting in a total of 26 trained models. Results show that YOLOv5 and YOLOv8 required the shortest training time, at 0.86 hours (51 minutes 48 seconds), making them the most computationally efficient models. F1-score evaluations indicated that YOLOv5 achieved scores ranging from 0.948 to 0.981, outperforming the other models. In addition, testing against a panel of five experts revealed that the model achieved higher accuracy and precision than experts with over 20 years of experience.
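
As a rough illustration of the training-and-evaluation pipeline the abstract describes (the paper's own code is not published here), the following minimal Python sketch fine-tunes a YOLOv8 detector on a four-class flip chip bump dataset and derives an F1-score from validation precision and recall. The Ultralytics API calls are standard, but the dataset YAML path, model size, and hyperparameters are illustrative assumptions, not the study's settings.

# Minimal sketch, assuming the Ultralytics YOLOv8 API; the dataset path,
# class layout, and hyperparameters below are illustrative assumptions.
from ultralytics import YOLO

# Hypothetical dataset config listing the four classes from the abstract:
# good bump, head-in-pillow (HIP), non-wetting, solder void.
DATA_YAML = "flip_chip_bumps.yaml"  # assumed path

# Start from a pretrained small model and fine-tune on the defect dataset.
model = YOLO("yolov8s.pt")
model.train(data=DATA_YAML, epochs=100, imgsz=640)

# Validate, then compute an F1-score from mean precision and recall
# (averaged over the four classes), mirroring the abstract's F1 evaluation.
metrics = model.val(data=DATA_YAML)
p, r = metrics.box.mp, metrics.box.mr
f1 = 2 * p * r / (p + r + 1e-16)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")

The same loop would be repeated per dataset variant and per YOLO version to reproduce the kind of 26-model comparison the study reports.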

Keywords