Journal of Innovative Science and Engineering (Jun 2020)
A Comparison of Software Defect Prediction Metrics Using Data Mining Algorithms
Abstract
Data mining is an interdisciplinary field that draws on methods from machine learning, artificial intelligence, statistics, and deep learning. Classification is an important data mining technique that is widely used by researchers. In the software defect prediction literature, statistical methods and machine learning algorithms such as Decision Trees, Fuzzy Logic, Genetic Programming, Random Forest, Artificial Neural Networks, and Logistic Regression have generally been used. Performance measures such as Accuracy, Precision, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) are used to assess the performance of these classifiers. In this paper, four data sets named JM1, KC1, CM1, and PC1 from the PROMISE repository, created within the scope of NASA's publicly available Metrics Data Program, are examined, as in other software defect prediction studies in the literature. These data sets include Halstead and McCabe method-level metrics as well as some class-level metrics. The data sets were analyzed with the Waikato Environment for Knowledge Analysis (WEKA) data mining software tool. Using this tool, classification algorithms such as Naive Bayes, SMO, K*, AdaBoostM1, J48, and Random Forest were applied to the NASA defect data sets in the PROMISE repository, and their accuracy rates were compared. The best accuracy, 94.13%, was obtained with the Bagging algorithm on the PC1 data set.
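The kind of comparison described above can also be scripted outside the WEKA GUI. Below is a minimal sketch, not taken from the paper, of how the listed classifiers could be evaluated with the WEKA Java API using 10-fold cross-validation; the file name pc1.arff and the choice of the last attribute as the defect label are assumptions for illustration.

```java
import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.lazy.KStar;
import weka.classifiers.meta.AdaBoostM1;
import weka.classifiers.meta.Bagging;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DefectPredictionComparison {
    public static void main(String[] args) throws Exception {
        // Hypothetical local copy of a PROMISE data set in ARFF format.
        Instances data = DataSource.read("pc1.arff");
        // Assume the defect label is the last attribute of the data set.
        data.setClassIndex(data.numAttributes() - 1);

        // Classifiers named in the abstract, plus Bagging, which gave the best result.
        Classifier[] classifiers = {
            new NaiveBayes(), new SMO(), new KStar(),
            new AdaBoostM1(), new J48(), new RandomForest(), new Bagging()
        };

        for (Classifier clf : classifiers) {
            // 10-fold cross-validation with a fixed seed for repeatability.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(clf, data, 10, new Random(1));
            System.out.printf("%-14s accuracy: %.2f%%  MAE: %.4f  RMSE: %.4f%n",
                    clf.getClass().getSimpleName(),
                    eval.pctCorrect(),
                    eval.meanAbsoluteError(),
                    eval.rootMeanSquaredError());
        }
    }
}
```

The sketch prints the same measures the abstract mentions (accuracy, MAE, and RMSE) for each classifier, so the per-data-set comparison reduces to rerunning it with a different ARFF file.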
Keywords