
Google introduces the ClusterFuzzLite fuzz-testing system


Google has presented the ClusterFuzzLite project, which makes it possible to run fuzz testing of code for early detection of potential vulnerabilities as part of a continuous integration pipeline. Currently, ClusterFuzzLite can automate fuzz testing of pull requests in GitHub Actions, Google Cloud Build, and Prow, with support for other CI systems expected in the future. The project is based on the ClusterFuzz platform, created to coordinate the work of fuzzing clusters, and is distributed under the Apache 2.0 license.

It is noted that since Google introduced the OSS-Fuzz service in 2016, more than 500 important open-source projects have been accepted into the continuous fuzz-testing program. Based on the checks carried out, more than 6,500 confirmed vulnerabilities have been eliminated and more than 21,000 bugs have been fixed. ClusterFuzzLite continues to evolve these fuzz-testing mechanisms by making it possible to identify issues earlier, during the review of proposed changes. ClusterFuzzLite has already been integrated into the change-review processes of the systemd and curl projects, where it has uncovered errors missed by the static analyzers and linters used at the initial stage of checking new code.

ClusterFuzzLite supports checking projects written in C, C++, Java (and other JVM-based languages), Go, Python, Rust, and Swift. Fuzz testing is carried out using the libFuzzer engine. The AddressSanitizer, MemorySanitizer, and UBSan (UndefinedBehaviorSanitizer) tools can also be enabled to detect memory errors and undefined-behavior anomalies.
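For reference, a libFuzzer fuzz target is simply a function that accepts an arbitrary byte buffer. The sketch below shows a minimal target; the parse_record helper is a hypothetical function under test, not part of ClusterFuzzLite or libFuzzer:

```cpp
// Minimal libFuzzer fuzz target (a sketch; parse_record is a hypothetical
// function under test, not part of ClusterFuzzLite or libFuzzer).
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical function under test: reads a 4-byte little-endian length
// prefix and checks whether the remaining payload is at least that long.
static bool parse_record(const uint8_t *data, size_t size) {
  if (size < 4) return false;
  uint32_t declared_len = 0;
  std::memcpy(&declared_len, data, sizeof(declared_len));
  return size - 4 >= declared_len;
}

// libFuzzer repeatedly calls this entry point with generated inputs;
// sanitizers turn any memory error or undefined behavior triggered
// inside into a reported crash.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_record(data, size);
  return 0;
}
```

Such a target is typically built with something like `clang++ -g -O1 -fsanitize=fuzzer,address target.cc`, which links in the libFuzzer driver and instruments the binary with AddressSanitizer.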

Key features of ClusterFuzzLite:

- quick checking of proposed changes to catch bugs before the code is accepted;
- generation of reports describing the conditions under which crashes occurred;
- the ability to switch to more advanced fuzzing runs to find deeper bugs that did not surface when checking a code change;
- generation of coverage reports to assess how well the code is covered during testing;
- a modular architecture that lets you enable only the functionality you need.

Recall that fuzz testing generates a stream of all sorts of random combinations of input data close to real data (for example, HTML pages with random tag parameters, or archives and images with abnormal headers) and records any failures that occur while this data is processed. If an input causes a crash or an unexpected response, that behavior very likely indicates a bug or vulnerability.
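As an illustration of the kind of defect this surfaces, the hypothetical parser below trusts a length byte taken from the input; a fuzzer running under AddressSanitizer would quickly generate an input whose declared length exceeds the buffer and report the resulting out-of-bounds read as a crash:

```cpp
// Sketch of a typical fuzzing find (hypothetical code, not from any
// real project): the parser trusts a length byte taken from the input.
#include <cstddef>
#include <cstdint>

static int checksum_payload(const uint8_t *data, size_t size) {
  if (size < 1) return 0;
  uint8_t declared_len = data[0];      // fuzzer-controlled value
  int sum = 0;
  for (size_t i = 0; i < declared_len; ++i)
    sum += data[1 + i];                // bug: may read past the end of 'data'
  return sum;
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  checksum_payload(data, size);        // AddressSanitizer reports the overflow
  return 0;
}
```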
