IEEE Access (Jan 2023)

Development of Testability Prediction Models Considering Complexity Diversity for C Programs

  • Hyun-Jae Choi,
  • Heung-Seok Chae

DOI
https://doi.org/10.1109/ACCESS.2023.3312556
Journal volume & issue
Vol. 11
pp. 98469–98485

Abstract

Testability prediction can help developers identify software components that require significant effort to ensure software quality, plan test activities, and recognize the need for refactoring to reduce test effort. Previous studies have predicted code coverage as a measure of testability based on software metrics. However, these studies have primarily used object-oriented software with simple code structures. Industrial software developed in C is often more complex than the object-oriented software used in those studies, so models trained mostly on low-complexity data may be insufficiently trained to predict the testability of high-complexity software. In this study, we developed a testability prediction model for C programs that accounts for complexity diversity, and we analyzed the impact of the complexity of the training and test data on the model's performance. The best-performing model achieved an MAE of 7.436 and an R2 of 0.813. Moreover, as the complexity diversity of the training data decreased, MAE increased from 5.203 to 6.361 and R2 decreased from 0.809 to 0.725. Furthermore, the performance of the model trained on low-diversity data deteriorated as the complexity level of the test data increased, with MAE increasing from 3.498 to 6.631 and R2 decreasing from 0.841 to 0.687. Finally, a correlation analysis between model performance and the difference in complexity between the training and test data revealed strong correlations: coefficients of 0.898 for MAE and -0.848 for R2.
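The abstract evaluates models with MAE and R2. As a minimal illustration of how those two metrics are computed for a coverage-prediction task (this is not the authors' pipeline, and the coverage values below are invented for illustration):

```python
# Sketch only: MAE and R2 as used to score a testability (coverage)
# prediction model. The data here is hypothetical, not from the paper.

def mae(actual, predicted):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r2(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical branch-coverage values (%) for five C functions.
actual    = [92.0, 61.5, 78.0, 45.0, 88.0]
predicted = [89.5, 66.0, 74.5, 52.0, 85.0]

print(round(mae(actual, predicted), 3))  # 4.1
print(round(r2(actual, predicted), 3))   # 0.937
```

Lower MAE and higher R2 indicate a better fit, which is why the paper reports rising MAE and falling R2 as performance degradation.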

Keywords