
notes.public


2017-03-31
Originally written 2016-12-08

Significantly advanced technology is indistinguishable from a deceiving god

One conclusion to draw from Ken Thompson’s historic lecture, Reflections on Trusting Trust, is that we inherently must rely on tools to control machine code and data, and that in order to build secure systems, we need secure tools. However, the tools themselves cannot be verified by the naked eye, and so we need to trust still more tools. Even an extremely simple tool, like a plain LED, could have a tiny computer hidden inside it, preventing it from lighting up and thereby concealing a larger attack. (With smart light bulbs, this fear is gradually becoming a reality.)
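Thompson’s attack is worth sketching concretely. Here is a toy Python model (all names hypothetical, nothing like a real compiler) of how a trojaned compiler can miscompile a login program, and re-inject that trojan even when compiling a perfectly clean copy of the compiler’s own source:

```python
def honest_compile(source: str) -> str:
    """Stand-in for a correct compiler: just wraps the source text."""
    return f"binary({source!r})"

def toy_compile(source: str) -> str:
    """A trojaned 'compiler' in the style Thompson described."""
    if "login" in source:
        # Stage 1: silently miscompile the login program to accept
        # a master password.
        return honest_compile(source) + " + [backdoor]"
    if "toy_compile" in source:
        # Stage 2: when compiling a compiler, re-insert both of these
        # checks, so even a clean compiler source yields a trojaned binary.
        return honest_compile(source) + " + [self-propagating trojan]"
    return honest_compile(source)
```

The point of stage 2 is that inspecting the compiler’s source code proves nothing: the trojan lives only in the binary, and the binary regenerates it.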

Paranoia over this sort of threat seems to be becoming more common, and probably for good reason. If miniaturization of computer hardware continues apace, we will eventually reach a point where it’s feasible to hide tiny, malicious computers in everything. One might call it an “internet of things.” There are already tiny keyloggers that fit inside the plugs of USB cords.

This concept, of a world full of undetectable forces intent on deceiving us, is surprisingly similar to the one imagined by René Descartes. I tried reading him and found it impenetrable, but hopefully we can trust Wikipedia:

https://en.wikipedia.org/wiki/Cogito_ergo_sum
https://en.wikipedia.org/w/index.php?title=Cogito_ergo_sum&oldid=748668175
Cogito ergo sum

At the beginning of the second meditation, having reached what he considers to be the ultimate level of doubt—his argument from the existence of a deceiving god—Descartes examines his beliefs to see if any have survived the doubt. In his belief in his own existence, he finds that it is impossible to doubt that he exists. Even if there were a deceiving god (or an evil demon), one’s belief in their own existence would be secure, for there is no way one could be deceived unless one existed in order to be deceived.

While reasonable people would agree that a deceiving god doesn’t actually exist, it seems like some people are intent on creating one.

Even under a deceiving technological god, let us assume that our skulls, brains and minds are intact (since biology is hard). Under this scenario, let’s say we can still trust our sense of logic and our own sight and touch. How much more can we know?

A few years ago I saw a tech demo for a “magic mirror”: a video camera with facial recognition software, hooked up to a display showing a computer-generated person. The idea was that it would “reflect” the head position and facial expression detected by the camera back through the CG person’s head and face. It could have applications for animated movies, video conferencing, and who knows what else. (It was slightly different from a newer commercial product called FaceRig, since it really did try to “mirror” its input.)

The reason I bring this up is that there was an interesting effect at the edges of the “mirror.” When a face is cut off at the camera’s edge, facial recognition fails. That means there is a dead zone around the edges.

With a real mirror, photons are reflected individually. Even if something is at the very edge of the mirror, a single photon can be accurately reflected. But because these “magic mirrors” reflect expressions, they can only interpret photons in aggregate. No matter how advanced or capable they become, they can’t accurately reflect an arbitrary portion of a face.

This concept of “cutting off” also applies to deceptive technology trying to disguise itself. Imagine you have a malicious multimeter, and you’re trying to inspect a malicious circuit. If they are continuously connected, the multimeter can detect a signature in the circuit’s signal, and begin displaying benign (fake) data. But if you can suddenly connect and disconnect the multimeter at any time, and if it must begin displaying data as soon as it is connected, it fundamentally can’t know what data to display until it is too late.
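A toy model makes the timing constraint concrete. Assume (hypothetically) that the deceptive meter needs a few samples to recognize the circuit’s signature before it can substitute a benign reading; an inspector who connects at an unpredictable moment and trusts the first reading defeats it:

```python
class DeceptiveMultimeter:
    """Toy model of a meter that fakes readings, but only after it has
    had time to recognize the malicious circuit's signal."""
    RECOGNITION_TICKS = 3  # assumed detection latency, purely illustrative

    def __init__(self):
        self.ticks_connected = 0

    def read(self, true_signal: float) -> float:
        self.ticks_connected += 1
        if self.ticks_connected > self.RECOGNITION_TICKS:
            return 5.0  # fake benign value, shown once recognition completes
        # It must display *something* immediately on connection,
        # before it knows what it is measuring.
        return true_signal

# Connect without warning: the very first reading leaks the true signal.
meter = DeceptiveMultimeter()
first_reading = meter.read(12.7)
```

However the latency is tuned, the structure is the same: the obligation to display instantly, combined with an unpredictable connection time, creates the “edge” the deception can’t cover.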

This was one of the design principles of the hardware isolated laptop[#]. Under the assumption that many hardware modules are compromised, perhaps you could keep them in check by limiting their communication to low-bandwidth, inspectable channels. Even if you weren’t inspecting them at a given moment, the threat that you could inspect them without warning would deter malicious behavior. And because of the edge effect, other malicious tools couldn’t hide their behavior either.
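As a sketch of that principle (all names and parameters invented for illustration), a channel wrapper that enforces a bandwidth cap and spot-checks messages at unpredictable times might look like this:

```python
import random

class InspectableChannel:
    """Hypothetical low-bandwidth link between two untrusted modules.
    Traffic is small enough to audit, and any message may be pulled
    for inspection without warning."""

    def __init__(self, max_bytes=64, inspect_probability=0.1, rng=None):
        self.max_bytes = max_bytes
        self.inspect_probability = inspect_probability
        self.rng = rng or random.Random()
        self.audit_log = []

    def send(self, payload: bytes) -> bytes:
        if len(payload) > self.max_bytes:
            # A hard bandwidth cap keeps every message cheap to inspect.
            raise ValueError("message exceeds low-bandwidth limit")
        if self.rng.random() < self.inspect_probability:
            # Unpredictable spot check: the sender can't know in advance
            # which messages will be examined, so it must assume all are.
            self.audit_log.append(payload)
        return payload
```

The deterrent comes from the sender’s uncertainty, not from the inspection rate itself; even a low probability forces a compromised module to treat every message as observed.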

Now, this requires being able to detect, observe and control these “edges,” wherever they are. That brings us to the problem of energy gapping:

https://news.ycombinator.com/item?id=12275826
nickpsecurity

Clive Robinson on Schneier’s blog came up with a thorough notion years ago: energy leaks plus “energy gapping” systems. He said if any form of energy or matter could transfer from one system toward another device then it should be considered a potential leak. So, you have to physically isolate them then block the whole spectrum practically.

Without some method of full isolation, you can’t create an “edge.” So despite all the other problems, this one tempered my enthusiasm for the hardware isolated laptop project.