IEEE Open Journal of the Computer Society (Jan 2024)

ShadowBug: Enhanced Synthetic Fuzzing Benchmark Generation

  • Zhengxiang Zhou,
  • Cong Wang

DOI
https://doi.org/10.1109/OJCS.2024.3378384
Journal volume & issue
Vol. 5
pp. 95–106

Abstract


Fuzzers have proven to be a vital tool for identifying vulnerabilities. Fuzzing is an area of active research with a constant drive to improve fuzzers, and it is equally important that the benchmarks used to evaluate them keep pace with evolving heuristics. Current research has primarily focused on using CVE bugs as benchmarks, while synthetic benchmarks have received less attention due to concerns about overfitting to specific fuzzing heuristics. In this paper, we introduce ShadowBug, a new methodology for generating enhanced synthetic bugs. In contrast to existing synthetic benchmarks, our approach arranges bugs to fit specific difficulty distributions by quantifying the constraint-solving difficulty of each block. We also uncover implicit constraints in real-world bugs that prior research has overlooked and develop an integer-overflow-based transformation from normal constraints to their implicit forms. We construct a synthetic benchmark and evaluate it against five prominent fuzzers. The experiments show that 391 of 466 bugs were detected, confirming the practicality and effectiveness of our methodology. Additionally, we introduce a finer-grained evaluation metric, “bug difficulty,” which sheds more light on the fuzzers' heuristic strengths in constraint solving and bug exploitation. The results of our study have practical implications for future fuzzer evaluation methods.
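The integer-overflow-based transformation mentioned in the abstract can be illustrated with a minimal C sketch. This is not the paper's actual construction; it only shows the general idea of rewriting an explicit equality constraint that guards a bug into a form that is satisfiable only through well-defined unsigned wraparound, so the magic value no longer appears in a direct comparison. The function names (trigger_bug, explicit_constraint, implicit_constraint) and the constants are illustrative assumptions.

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical bug site: abort() so a fuzzer observes a crash. */
static void trigger_bug(void) {
    abort();
}

/* Explicit form: the constraint is a plain equality comparison,
 * which constraint-solving heuristics can often handle directly. */
static void explicit_constraint(uint32_t x) {
    if (x == 0xdeadbeefu)
        trigger_bug();
}

/* Implicit form: the same constraint is only satisfiable through
 * unsigned wraparound (0xdeadbeef + 0x21524111 == 2^32, which wraps
 * to 0), so the guard never compares the input to the magic value
 * directly. */
static void implicit_constraint(uint32_t x) {
    uint32_t y = x + 0x21524111u;  /* well-defined unsigned overflow */
    if (y == 0u)
        trigger_bug();
}

int main(int argc, char **argv) {
    uint32_t x = (argc > 1) ? (uint32_t)strtoul(argv[1], NULL, 0) : 0;
    explicit_constraint(x);
    implicit_constraint(x);
    return 0;
}

Both guards fire for the same input (0xdeadbeef), but the implicit form exercises a fuzzer's reasoning about integer overflow rather than a direct magic-byte match, which is the kind of implicit constraint the benchmark is designed to measure.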

Keywords