Applied Computational Intelligence and Soft Computing (Jan 2025)
Addressing Class Imbalance Problem in Health Data Classification: Practical Application From an Oversampling Viewpoint
Abstract
While analyzing health data is important for improving health outcomes, class imbalance in datasets poses major challenges to machine learning classification models. This work therefore considers the class imbalance problem in stroke prediction using models such as K-nearest neighbors, support vector machine, logistic regression, random forest, and decision tree. The stroke dataset is balanced through various oversampling strategies: random oversampling (RO), ADASYN, SMOTE, and SMOTE–Tomek, thereby enhancing model performance. Compared to the results on the imbalanced dataset, all applied oversampling techniques improved the correct classification of stroke events by the ML models. Among these, RO–SVM with an RBF kernel performed best, achieving sensitivity, specificity, G-mean, F1-score, and accuracy of 89.87%, 94.91%, 92.36%, 89.64%, and 89.87%, respectively. After applying oversampling techniques, all the machine learning classifiers were able to classify stroke status reliably, especially for the minority class. This study highlights the importance of addressing class imbalance in health datasets. Detection of minority-class instances can be improved considerably by combining classification models with hybrid resampling strategies that effectively address class imbalance, which in turn helps improve healthcare outcomes. Further research applying more advanced deep learning techniques to other imbalanced health datasets is encouraged to validate and refine class imbalance approaches, as effective handling of imbalanced classes can substantially improve predictive model performance in healthcare analysis.
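For illustration only, the sketch below shows the kind of RO–SVM pipeline the abstract describes, using the scikit-learn and imbalanced-learn libraries; the synthetic data, feature counts, and parameter settings are assumptions for demonstration, not the exact configuration or results reported in this study.

```python
# Minimal sketch: random oversampling followed by an RBF-kernel SVM,
# evaluated with sensitivity, specificity, G-mean, F1-score, and accuracy.
# Synthetic data stands in for the stroke dataset; all settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score
from imblearn.over_sampling import RandomOverSampler

# Imbalanced toy data (~5% positive class), mimicking a stroke-prediction setting.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95, 0.05],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Oversample only the training split so the test set remains untouched.
X_res, y_res = RandomOverSampler(random_state=42).fit_resample(X_train, y_train)

# Scale features and fit an SVM with the RBF kernel.
scaler = StandardScaler().fit(X_res)
clf = SVC(kernel="rbf").fit(scaler.transform(X_res), y_res)
y_pred = clf.predict(scaler.transform(X_test))

# Sensitivity (minority-class recall), specificity, G-mean, F1, and accuracy.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
g_mean = np.sqrt(sensitivity * specificity)
print(f"Sensitivity: {sensitivity:.3f}  Specificity: {specificity:.3f}  "
      f"G-mean: {g_mean:.3f}  F1: {f1_score(y_test, y_pred):.3f}  "
      f"Accuracy: {accuracy_score(y_test, y_pred):.3f}")
```

Swapping `RandomOverSampler` for `SMOTE`, `ADASYN`, or `SMOTETomek` (from `imblearn.over_sampling` and `imblearn.combine`) would reproduce the other resampling variants compared in the study.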