Advanced Fuzzing & Crash Analysis (HITB 3-day training, May '19) - personal retrospective

Tags: windows, devops, fuzzing, smt, linux, offdev, afl

This is my subjective retrospective on the “Advanced Fuzzing & Crash Analysis” training (by Richard Johnson) that I took in Q2 2019.
This is not feedback or a formal review: it’s a blog diary entry, intended as a potentially useful reflection. My understanding of some of the training content is still minimal and will need to mature over time.

Fuzzing without instrumentation is fail.

Motivation: learning by doing, doing by skilling

Fuzzing works. Have you seen that Microsoft recently started to offer Security Risk Detection in the Developer Center?

Now what if our Devs take an interest, visit us in our security corner in the office and ask “how does this work”?

The answer should include that tools are easy to obtain, but tradecraft is not. Using tools alone isn’t going to make the key difference, because what matters is what you do with the results.
We can work to complete a vision of defensible DevOps pipelines. But it requires a cross-functional approach.

The MS Security Risk Detection service is based on work from the SAGE project (RiSE, Patrice Godefroid et al.) and likely inspired by the work of Jared DeMott (PaiMei pstalker, some years ago). It’s very interesting to follow the development in this domain over time.

So how does it work? Tradecraft 101

Obviously, fuzz-testing tools have existed in many shapes and forms for a while. Most will generate exceptional input against defined application entry points (I/O, parameters, variables, …) in order to crash the target.

Crashes then get triaged. And that hopefully leads to bugs and fixes. Fixes and bugs. The circle of bug-life.
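To make the 101 concrete, here is a minimal “dumb” mutation loop in C. Everything in it is made up for illustration (./target, seed.bin, input.bin), and everything a real fuzzer adds on top (coverage feedback, corpus management, triage) is left out:

```c
/* "Dumb" mutation-fuzzing sketch: flip random bytes in a seed input,
 * hand the result to a hypothetical ./target that parses argv[1],
 * and watch for crashes. Illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    srand((unsigned)time(NULL));

    /* Load the seed input. */
    FILE *f = fopen("seed.bin", "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    unsigned char *seed = malloc(len);
    if (fread(seed, 1, len, f) != (size_t)len) return 1;
    fclose(f);

    for (int i = 0; i < 10000; i++) {
        /* Mutate: XOR a handful of random bytes. */
        unsigned char *mut = malloc(len);
        memcpy(mut, seed, len);
        for (int j = 0; j < 4; j++)
            mut[rand() % len] ^= (unsigned char)(rand() & 0xff);

        FILE *out = fopen("input.bin", "wb");
        fwrite(mut, 1, len, out);
        fclose(out);
        free(mut);

        /* Run the target on the mutated input, check for a signal. */
        pid_t pid = fork();
        if (pid == 0) {
            execl("./target", "target", "input.bin", (char *)NULL);
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("crash on iteration %d (signal %d)\n", i, WTERMSIG(status));
    }
    free(seed);
    return 0;
}
```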

What did I learn?

The more you know, the more you forget. The more you forget, the less you know. So why not kill some bugs in the meantime?

Writing an AFL harness can turn a network I/O entry point into a file-fuzzing problem

Depending on the entry point, different types of I/O may limit the executions per second a fuzzer can achieve.
Instead of sending packets over the wire we can peel the layers away and use the relevant routines within our targets directly. These are usually the inner parts behind the application entry points that have parsing or at least data-evaluation functions (dissectors, vectorizers, etc.).
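As an illustration, here is a skeleton of such a harness in C. parse_packet is a hypothetical stand-in for whatever inner dissector the target actually exposes; the point is that what used to arrive over a socket now arrives as a file AFL can mutate.

```c
/* Sketch of an AFL file harness that bypasses network I/O and calls an
 * inner parsing routine directly. parse_packet() is a hypothetical name;
 * link the harness against the target's code. Build with afl-gcc or
 * afl-clang-fast, then: afl-fuzz -i corpus -o findings -- ./harness @@ */
#include <stdio.h>
#include <stdlib.h>

int parse_packet(const unsigned char *data, size_t len); /* target routine */

int main(int argc, char **argv) {
    if (argc < 2) return 1;

    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);

    unsigned char *data = malloc(len);
    if (fread(data, 1, len, f) != (size_t)len) return 1;
    fclose(f);

    /* What used to arrive over the wire now comes from AFL's @@ file. */
    int ret = parse_packet(data, (size_t)len);

    free(data);
    return ret;
}
```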

Target modifications to get through the layers allow us to reach new attack surface

We can modify targets to extend our reach towards internal entry points. This makes sense for vulnerability research as long as we don’t bypass sanitizing routines.
This way we retain stub apps (harnesses) that are small enough to instrument.
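A sketch of such a stub app, assuming the target ships its parsing logic in a shared library. libtarget.so and dissect_record are made-up names standing in for whatever reversing turns up:

```c
/* Stub-app sketch: reach an internal entry point of a shared library
 * directly instead of going through the outer application layers.
 * libtarget.so and dissect_record are hypothetical names.
 * Build: gcc -o stub stub.c -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef int (*dissect_fn)(const unsigned char *buf, size_t len);

int main(int argc, char **argv) {
    if (argc < 2) return 1;

    void *lib = dlopen("./libtarget.so", RTLD_NOW);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve the inner dissector by its exported name. */
    dissect_fn dissect = (dissect_fn)dlsym(lib, "dissect_record");
    if (!dissect) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Read the fuzz input file, as in the harness above. */
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    unsigned char *buf = malloc(len);
    if (fread(buf, 1, len, f) != (size_t)len) return 1;
    fclose(f);

    int ret = dissect(buf, (size_t)len);
    free(buf);
    dlclose(lib);
    return ret;
}
```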

Investing time to reverse engineer a target to select a proper hook function pays off in AFL persistence mode

Depending on the target application, initialization may cost CPU cycles that are essentially wasted.
We may be able to reach a usable global state (initialization has run, so we can fuzz subsequent target routines) by selecting a proper function to hook into (via an exported function name or a runtime offset).

The key is that the handles (especially for resources under Windows) are set up, and that we maintain a valid global state.
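On the Linux/afl-clang-fast side the same idea looks like the minimal persistent-mode sketch below. expensive_init and process_input are hypothetical target routines; the __AFL_INIT/__AFL_LOOP macros are provided by afl-clang-fast’s instrumentation.

```c
/* Persistent-mode sketch for afl-clang-fast. expensive_init() and
 * process_input() are hypothetical target routines. __AFL_INIT()
 * defers the forkserver until after initialization; __AFL_LOOP()
 * fuzzes many inputs per forked process. */
#include <stdio.h>
#include <unistd.h>

void expensive_init(void);                            /* costly one-time setup */
int  process_input(const unsigned char *b, size_t n); /* routine under test */

int main(void) {
    expensive_init();   /* runs once; its cycles are no longer wasted */

#ifdef __AFL_HAVE_MANUAL_CONTROL
    __AFL_INIT();       /* start the forkserver after init, not before */
#endif

    static unsigned char buf[4096];

    while (__AFL_LOOP(1000)) {                  /* 1000 inputs per fork */
        ssize_t n = read(0, buf, sizeof(buf));  /* input on stdin */
        if (n > 0)
            process_input(buf, (size_t)n);
        /* global state (handles etc.) must stay valid across iterations */
    }
    return 0;
}
```

On Windows, WinAFL exposes the same idea at runtime: its -target_module, -target_method and -target_offset options select the hook function by exported name or offset, and -fuzz_iterations controls how many inputs run per process.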

Concolic execution and solver-driven input generation with egas

We took a look at moflow, which uses applied program analysis techniques to generate input with solvers.

This is still ahead of “the curve” to a certain degree: the tool logs the modified corpus inputs, and for the most part it isn’t slowed down by theoretical problems like loop unrolling and constraint optimization.
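Reduced to a toy, the solver part of the idea looks like this in C with the Z3 C API. This is not moflow’s actual pipeline, and the branch constraint is made up; the point is asking the solver for input bytes that satisfy a path condition the current corpus misses, then writing them back into the input.

```c
/* Toy solver-driven input generation with the Z3 C API: find input
 * bytes satisfying a made-up branch condition (b0 ^ 0x42) + b1 == 0x99.
 * Build: gcc -o gen gen.c -lz3 */
#include <stdio.h>
#include <z3.h>

int main(void) {
    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);
    Z3_solver s = Z3_mk_solver(ctx);
    Z3_solver_inc_ref(ctx, s);

    /* Two symbolic input bytes. */
    Z3_sort bv8 = Z3_mk_bv_sort(ctx, 8);
    Z3_ast b0 = Z3_mk_const(ctx, Z3_mk_string_symbol(ctx, "b0"), bv8);
    Z3_ast b1 = Z3_mk_const(ctx, Z3_mk_string_symbol(ctx, "b1"), bv8);

    /* Path constraint for the branch we want to take. */
    Z3_ast lhs = Z3_mk_bvadd(ctx,
                     Z3_mk_bvxor(ctx, b0, Z3_mk_unsigned_int(ctx, 0x42, bv8)),
                     b1);
    Z3_solver_assert(ctx, s,
        Z3_mk_eq(ctx, lhs, Z3_mk_unsigned_int(ctx, 0x99, bv8)));

    if (Z3_solver_check(ctx, s) == Z3_L_TRUE) {
        Z3_model m = Z3_solver_get_model(ctx, s);
        Z3_model_inc_ref(ctx, m);
        Z3_ast v0, v1;
        unsigned x0, x1;
        Z3_model_eval(ctx, m, b0, 1, &v0);
        Z3_model_eval(ctx, m, b1, 1, &v1);
        Z3_get_numeral_uint(ctx, v0, &x0);
        Z3_get_numeral_uint(ctx, v1, &x1);
        printf("solved input bytes: 0x%02x 0x%02x\n", x0, x1);
        Z3_model_dec_ref(ctx, m);
    }
    Z3_solver_dec_ref(ctx, s);
    Z3_del_context(ctx);
    return 0;
}
```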

Cross-fuzzing with a refined corpus

We may have a pre-generated corpus of inputs (like files) that is refined and populated with tools like egas, radamsa, etc.
This corpus may successfully crash an open-source target that is relatively simple to instrument and to measure. The same corpus can then be used against proprietary targets, given that they consume the same formats: YAML, JSON, XML, … PNGs, TIFFs, PCAPs, …

What do I want to learn next?

Let me post-process the training: it was over 400 slides, plus a 50-page workbook, two VMs and a ton of hands-on exercises. A couple of well-versed bug hunters and vulnerability researchers have taken the training over the years. It’s safe to say that the rabbit hole goes deep.

My list currently also includes non-x86/x86-64 targets, like ARM or the JVM. Mostly because it’s intellectually stimulating. I also think this course will help me develop the mindset of fellow devs at work, who sooner or later will look at software-testing tools like fuzzers.
