In automated software testing, fuzzing is a relatively simple technique widely used in industry to discover bugs in software under test (SUT). It consists of running the SUT iteratively with (usually) randomly generated or mutated inputs, trying to produce an input that makes the SUT crash.
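As an illustration of the loop described above, here is a minimal random-generation fuzzer sketch. The `run_sut` function is a hypothetical stand-in for invoking a real binary and checking its exit status; its "crash" condition (a magic first byte) is an assumption chosen purely so the example terminates quickly, not a property of any real SUT.

```python
import random

def run_sut(data):
    """Hypothetical SUT: pretend it crashes when the input starts
    with the byte 0xFF. A real fuzzer would instead execute the
    target binary and inspect its exit signal."""
    return len(data) > 0 and data[0] == 0xFF

def fuzz(iterations=10_000, seed=0):
    """Minimal random fuzzer: feed random byte strings to the SUT
    and return the first input that triggers a crash, or None."""
    rng = random.Random(seed)
    for _ in range(iterations):
        # Generate a random input of 1-7 random bytes.
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        if run_sut(data):
            return data  # crashing input found
    return None

if __name__ == "__main__":
    print(fuzz())
```

Coverage-guided fuzzers such as those compared here extend this loop by instrumenting the SUT and preferentially mutating inputs that reach new code.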
Several fuzzers have been proposed to date; however, they have never been compared on a level playing field to understand whether there are differences in their effectiveness and performance, and whether any of them is more suitable than the others.
In this work we evaluate and compare nine prominent fuzzers through a thorough empirical study based on FuzzBench, an open-source framework developed by Google, and a manually curated benchmark of 14 real-world software systems.
The results show that honggfuzz and aflplusplus are, in that order, the best choices in terms of general-purpose fuzzing effectiveness. The results also show that no fuzzer outperforms the others in terms of efficiency across all metrics, that no fuzzer exhibits a particular bug affinity, and that the correlation between edge coverage and number of bugs is benchmark-dependent.
Wed 6 Apr
11:30 - 12:45
|Metamorphic Fuzzing of C++ Libraries|
|POWER: Program Option-Aware Fuzzer for High Bug Detection Ability|
|Comparing Fuzzers on a Level Playing Field with FuzzBench|
|SWFC-ART: A cost-effective approach for Fixed-Size-Candidate-Set Adaptive Random Testing through small world graphs|
Muhammad Ashfaq (Jiangsu University), Rubing Huang (Macau University of Science and Technology (MUST)), Dave Towey (University of Nottingham Ningbo China), Michael Omari (Takoradi Technical University), Dmitry Yashunin (Harman X), Patrick Kwaku Kudjo (University of Professional Studies, Accra, Ghana), Tao Zhang (Macau University of Science and Technology (MUST))
|Discussion and Q&A|