IEEE Access (Jan 2023)

Generating Python Mutants From Bug Fixes Using Neural Machine Translation

  • Sergen Asik,
  • Ugur Yayan

DOI
https://doi.org/10.1109/ACCESS.2023.3302695
Journal volume & issue
Vol. 11
pp. 85678–85693

Abstract


With the fast pace of technological development, software has become a crucial part of modern life, operating and managing hardware devices. Defective software, however, can cause severe problems for users and even put human lives at risk, which underscores the importance of error-free, high-quality software. Verification and validation are essential to high-quality software development, and software testing is integral to this process. Code coverage is a prevalent measure of test-suite effectiveness, but it has well-known limitations; mutation testing has been proposed to address them and is also recognized as a way to guide test case creation and to evaluate test-suite quality. Our proposed method autonomously learns mutations from faults in real-world software. First, it extracts bug fixes at the method level, classifies them according to mutation types, and performs code abstraction. It then trains mutation models using a deep learning technique based on neural machine translation. The models were trained and evaluated on approximately 588k bug-fix commits extracted from GitHub. Our experimental evaluation indicates that the models can predict mutations resembling the fixed bugs in 6% to 35% of instances. The models effectively revert fixed code to its original buggy version, reproducing the original bug and generating various other buggy variants, with up to 94% accuracy, and more than 96% of the generated mutants are lexically and syntactically correct.
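
To make the code abstraction step concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual tooling) of abstracting a Python method by replacing identifiers and literals with indexed placeholders, the kind of vocabulary reduction typically applied before feeding method pairs to a neural machine translation model. The abstract_method helper and the VAR/NUM/STR placeholder naming scheme are assumptions made for illustration only.

    # Minimal sketch of method-level code abstraction, assuming a simple
    # placeholder scheme: identifiers and literals are replaced with indexed
    # tokens so the NMT model sees a small, shared vocabulary. The mapping is
    # kept so concrete names can be restored in generated mutants.
    import io
    import keyword
    import token
    import tokenize


    def abstract_method(src: str) -> tuple[str, dict[str, str]]:
        """Replace identifiers and literals in src with indexed placeholders."""
        mapping: dict[str, str] = {}   # placeholder -> original token
        seen: dict[str, str] = {}      # original token -> placeholder
        counters = {"VAR": 0, "NUM": 0, "STR": 0}
        out: list[str] = []

        for tok in tokenize.generate_tokens(io.StringIO(src).readline):
            text = tok.string
            if tok.type == token.NAME and not keyword.iskeyword(text):
                kind = "VAR"
            elif tok.type == token.NUMBER:
                kind = "NUM"
            elif tok.type == token.STRING:
                kind = "STR"
            else:
                # Keep keywords, operators, and structural tokens as-is.
                if text:
                    out.append(text)
                continue
            if text not in seen:
                counters[kind] += 1
                seen[text] = f"{kind}_{counters[kind]}"
                mapping[seen[text]] = text
            out.append(seen[text])

        return " ".join(out), mapping


    if __name__ == "__main__":
        fixed_method = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"
        abstracted, table = abstract_method(fixed_method)
        print(abstracted)  # e.g. "def VAR_1 ( VAR_2 , VAR_3 , VAR_4 ) : ..."
        print(table)

In such a pipeline, the abstracted fixed method would serve as the source sequence and the abstracted buggy method as the target, so the trained model learns to re-introduce realistic faults; the placeholder mapping is then used to map generated mutants back to concrete code.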

Keywords