Sandboxing is not Sanitization

I’m a big proponent of sandboxing. I think it will come close to solving computer security, and prove that all the people who laughed at Neil deGrasse Tyson (for saying “just build secure computers”) are fools.

However, like all sufficiently advanced technology, it isn’t magic. I want to share a lesson about it that recently crystallized for me, thanks to a certain HN thread.

Microsoft didn’t sandbox Windows Defender, so I did (trailofbits.com)

Wow, someone isolating large, complex and buggy parsers with sandboxing! Great!

Then I get down near the bottom of the thread, and I see this:


If the sandboxed process is compromised, all you can do is read a file that you already had access to (because it’s your exploit), and lie about the scan result. That is not terribly exciting.

I already know how it works, so why am I reading this? Oh, wait… Huh.

This is an anti-virus program. It’s designed to protect your PC from viruses. If a virus can trigger a bug in the virus scanner (and remember, it’s large, complex and buggy – that’s why we sandboxed it), it can lie about whether a file is infected.

Then, we can only assume, the virus gets parsed or executed in a privileged context and takes over your PC.

Hmm. That didn’t quite work as planned, did it?
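To make the failure mode concrete, here's a toy sketch of the dataflow just described. The function names and the `compromised` flag are invented for illustration; this is not Defender's actual interface:

```python
def sandboxed_scan(path, compromised=False):
    # Stand-in for the large, buggy parser running inside the sandbox.
    # An exploit that takes over the sandboxed process controls whatever
    # it reports back -- the sandbox only stops it from touching the
    # rest of the system.
    if compromised:
        return "clean"  # the exploit lies about the scan result
    return "infected" if "virus" in path else "clean"

def open_file(path, compromised=False):
    verdict = sandboxed_scan(path, compromised)
    # The privileged side trusts the verdict -- but the verdict came out
    # of a sandbox that was fed untrusted input.
    return "executed" if verdict == "clean" else "blocked"
```

The sandbox did its job: the exploit never escaped the scanner process. But the one bit of output the sandbox was allowed to emit is exactly the bit the attacker needed to control.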

All else being equal, a sandbox with an untrusted input will have an untrusted output. That’s just the way it works. A sandbox can constrain where the output goes, but it can’t guarantee anything about what the output contains.

The classic cases for sandboxes are things like PDF viewers, image decoders and the Flash player. These are all things with predominantly human-centric I/O. In other words, a bad JPEG can produce an ugly, misleading (or large) bitmap image, but from that point all you can really do is social engineering (basilisks aside).

On the other hand, generic file parsers that we all want to sandbox typically have output that is read by another program. That output might be in a structured format that the second (possibly trusted) program then parses. If that second parser has a vulnerability, you’ll notice we’re back to square one.
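A minimal sketch of that chain, with invented function names: the sandboxed parser emits a structured format (say, JSON), and a trusted process re-parses it. Whatever bugs live in the second parser are now reachable by whoever controlled the sandbox's output:

```python
import json

def sandboxed_extract(data: bytes) -> bytes:
    # Stand-in for the big, buggy parser in the sandbox. If it gets
    # exploited, the attacker fully controls these output bytes.
    return data

def trusted_consumer(data: bytes):
    # The trusted side parses the sandbox's structured output.
    # Any vulnerability in *this* parser is back to square one:
    # attacker-controlled bytes hitting a parser in a trusted context.
    return json.loads(sandboxed_extract(data))
```

The sandbox moved the parsing problem; it didn't eliminate it.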

A real-world example of this is the QubesOS firewall VM. Qubes comes configured to run a separate instance of Linux as a firewall in front of your other VMs. However, both the firewall and the other VMs are probably running from the same Linux image (template), and are almost certainly running the same TCP stack. In other words, the firewall itself isn’t much safer than the VMs it’s supposed to protect, and once the firewall is compromised, the same exploit can probably be used to compromise the inner VMs. (Disclaimer: this configuration might’ve changed in the year or so since I last checked. One easy improvement would be to use a different, smaller OS like OpenBSD as the firewall.)

For lack of a better term, let’s call this the dirty dataflow problem. Dirty (untrusted) data flows into your sandbox. Then it flows out, into another sandbox (or worse, not a sandbox). As long as the trust level of the target is at least as low as the sandbox itself, this is fine. However, if you are expecting sandboxing to help you get data from a low-trust area to a higher-trust area, you’re fooling yourself. Shit runs downhill.
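The rule in that paragraph can be written as a toy model. The trust levels and names below are invented for illustration; the point is only the direction of the comparison:

```python
# Toy model of the "dirty dataflow" rule: output from a sandbox may
# safely flow to a target only if the target is no more trusted than
# the sandbox itself. (Levels are arbitrary illustrative values.)
TRUST = {"internet": 0, "sandbox": 1, "app": 2, "kernel": 3}

def flow_ok(source: str, target: str) -> bool:
    # Untrusted output can run downhill (equal or lower trust),
    # never uphill into a higher-trust context.
    return TRUST[target] <= TRUST[source]
```

Under this model, sandbox-to-internet is fine, while sandbox-to-kernel is exactly the "getting data from a low-trust area to a higher-trust area" mistake.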

Again, don’t get me wrong, I think the coarse-grained security that sandboxing provides is just what the doctor ordered for making most software mostly secure quickly and cheaply. However, when you actually want to sanitize your inputs, fine-grained security (through secure languages, tooling, runtimes, or formal proofs) is necessary.