
notes.public


2017-05-10

Interfaces versus Inheritance

It’s a common pattern in object-oriented languages to use subclassing, inheritance and method overriding to define and implement interfaces. For example, one of my old projects, EasyCapViewer, gradually grew five or six drivers for different versions of the video capture hardware it supported. A similar project, Macam, had hundreds of drivers for almost every possible USB webcam. Both of these programs defined a semi-abstract base class with methods intended to be overridden by concrete driver implementations, along with convenience implementations that drivers could call back up to for shared behavior.

While working on EasyCapViewer, I learned what a bad idea that design is. The problem is that you are mixing the interface with a shoddy differential compression scheme. The more implementations you have, the more redundancy you will have, and the more that redundancy hurts when you find a bug or have to change something. As you add back-ends, you frequently discover code that is common amongst two or more of them. The obvious thing to do is push that code up into the common base class. That might mean adding a new method, or changing the behavior of an existing one.

However, the biggest piece of redundancy should be the interface itself, which every back-end is tied to (somehow) because every back-end needs to implement it. It can’t be factored out, and every time it changes, most or all of the implementations need to be updated. So it’s crucial that this interface be ironed out and hammered down as early as possible.

So there is a fundamental conflict. On one hand, you want to keep optimizing the interface to “compress” the implementations, but the appropriate compression depends on the implementations themselves, and they will change and grow over time. On the other hand, the interface must not change, because changing it requires changing all of the implementations. (In lower circles of hell, changes to the interface cause changes to the appropriate compression, and vice versa. You can get caught in a loop until things finally stabilize at a new local optimum.)

This is, in a word, bad.

What I recommend, and what I did in my more recent project libkvstore (which supports “drivers” for several different storage engines), is to separate the interface from the code deduplication system. The interface is defined mostly up front, based on the nature of the thing being abstracted and what it needs to do. (This is extremely difficult and vastly under-appreciated, and it’s a topic I’m still not qualified to write much about.)

Then, as implementations are added and redundancy is noticed, “implementation helper” functions are added. These helpers are not publicly exposed, and might not have any formal basis or rationale for existing. They simply soak up whatever duplication is found. As new implementations are added and the apparent duplications change, new helpers can be added that are subsets or supersets of old helpers (and helpers might call each other behind the scenes). Helper functions are a lot like a compression dictionary. They just represent duplication, not necessarily any real structure or meaning.
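
To make the distinction concrete, here is a minimal sketch in C of what this separation might look like. All of the names here (db_ops, helper_validate_path, memdb_open) are hypothetical and not taken from libkvstore.

// The interface: a table of function pointers, ironed out up front and
// rarely changed afterwards.
typedef struct db_ops {
	int (*open)(void *ctx, char const *path);
	int (*get)(void *ctx, char const *key, char **out_val);
	void (*close)(void *ctx);
} db_ops;

// An implementation helper: private, with no formal rationale. It exists
// only because two or more back-ends happened to duplicate this check.
static int helper_validate_path(char const *path) {
	return (path && path[0]) ? 0 : -1;
}

// One back-end uses the helper; another is free to ignore it.
static int memdb_open(void *ctx, char const *path) {
	(void)ctx;
	if(helper_validate_path(path) < 0) return -1;
	// ...back-end specific setup...
	return 0;
}

The point is that db_ops never knows the helpers exist; they can be added, split, or merged without touching the interface.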

There is one class of helper that I’m still on the fence about. In a language like C with textual (rather than semantic) macros, you can abstract out even the duplication of function declarations. That lets you standardize argument names and even make certain changes to function signatures easily. On the other hand, it makes the code ugly, harder to read, and harder for some tools (like syntax highlighters) to parse. It’s easy to dismiss this idea out of hand, but when you have hundreds of back-ends, every little bit of deduplication might be worth it. That said, automated refactoring tools can help with these problems in a way that might be more socially acceptable. (I optimistically used this technique for file type converters in StrongLink, even though it currently only has two.)
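
For illustration, here is a hedged sketch of the kind of textual macro I mean; the macro and driver names are made up rather than taken from libkvstore or StrongLink.

#include <stddef.h>

// One macro standardizes the signature and argument names of every
// driver's "get" function, so a signature change becomes a one-line edit.
#define DRIVER_GET_FN(prefix) \
	int prefix##_get(void *ctx, char const *key, char **out_val, size_t *out_len)

DRIVER_GET_FN(lmdb);    // declares lmdb_get(...)
DRIVER_GET_FN(leveldb); // declares leveldb_get(...)

The cost is exactly what is described above: tools that don’t expand macros see no function declarations at all.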

You might have noticed that I’ve recommended against using most of the standard features of object orientation here. It turns out that inheritance and overriding conflate different things in a way that causes problems. In particular, avoid using differential compression to guide your interface designs. That said, they are appropriate tools in some cases, especially when you have a strong theoretical basis for what each class’s responsibilities and inheritance relationships are.

Happy coding!

Keywords: programming, interfaces, APIs


2017-04-29

This Mess of a Certificate Authority System

I recently had to switch all of my SSL certificates from StartCom to Let’s Encrypt, so let me take this opportunity to expound on the public key infrastructure used on the web.

First, to establish a secure connection over an untrusted channel using symmetric encryption, you need a shared secret. The conventional solution is to start a handshake using asymmetric encryption, through which each party can be identified by their public key. Then the problem becomes, how do you verify the other person’s public key?

There are basically two solutions in practice: SSL/TLS, which uses a hierarchy of certificate authorities to delegate validation to; and SSH, which presents the public key fingerprint to the users and lets them work it out themselves.

Incidentally, any system which doesn’t use either of these techniques, and which doesn’t rely on a pre-shared key, probably has a fundamental weakness. So for example, I think it should be pretty trivial to steal WiFi passwords just by masquerading as someone else’s access point and waiting for them to connect. The only way it could be secure is if the access point provided a public key for clients to validate. (If you Google phrases like “wifi mitm” you get a lot of application man-in-the-middle attacks like Firesheep, which are completely unrelated, so I’m not sure whether this exploit is known or practical.)

Anyway, public keys must be validated by the client, one way or another. Assuming the user is in control of his or her own computer, that means the responsibility ultimately falls to the user. However, since all of us are lazy and most of us are clueless, and browsers do certificate checking by default, we tend to let browsers do it for us.

Responsibility and power must be two sides of the same coin, because browsers have turned around and created a large industry of certificate authorities. The profits of these companies come directly from users’ disinterest in (and to some extent, ignorance and fear of) verifying certificates themselves.

Okay, so I may sound a little bit anti-capitalist but none of that is bad. The bad part is this:

Before you can verify a website’s certificate, before you can even connect to it, you need to know whether to expect the site to be encrypted or not. After all, there are still lots of unencrypted sites out there. If a site supports encryption, you don’t want to connect to it insecurely, but if it doesn’t, you have to connect insecurely or it won’t work at all. (Note that both “upgrade” and “fallback” strategies are insecure. So you really do have to know in advance.)

The original solution was an “s” in the URL’s protocol. If you go to https://www.mybank.com, your browser knows that the connection must be encrypted. It won’t allow the connection to be downgraded by an attacker.

However, we follow URLs from all sorts of sources: other web pages, emails, chat, etc. We also type mybank.com in the address bar, which still assumes plain “http:”. Either way, if you end up at http://www.mybank.com (without “s”), then someone can intercept your unencrypted connection. So the “s” in the protocol is almost worthless.

The next solution was HSTS (HTTP Strict Transport Security), which simply lets sites indicate that all future connections should be secure. However, that only helps after the user has already connected (potentially insecurely) the first time, similar to TOFU (Trust On First Use).

The final solution was HSTS preloading, which is just a way for browsers to bundle in a list of sites which require encrypted connections. It was started by the Chrome developers but these days basically all browsers use their site list.
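
For reference, HSTS and preloading are both driven by a single response header. A site opting in might send something along the lines of “Strict-Transport-Security: max-age=31536000; includeSubDomains; preload”, where the preload directive signals willingness to be included in the browsers’ bundled lists (the max-age value here is just an example).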

If a site isn’t on the big list, your connections to it are still vulnerable: if the site uses HSTS, the first time you visit it (on a new computer or after a fresh install); if it doesn’t, every time you visit it without following a trusted link or manually typing “https:”.

In other words, the free HSTS preload list is a browser-run, semi-manual certificate authority, the very thing browsers were trying to avoid by creating the CA ecosystem in the first place. If it were enhanced to record the fingerprint of each site’s root certificate, then all of the current flexibility could be maintained, while free certificates became more accessible and more secure (because each level of delegation just increases the attack surface).

The problem of verifying certificates and the problem of knowing whether to expect a certificate really are the same problem, so it makes no sense for them to be completely disconnected (even handled by separate entities) the way they are currently.

If you want to get into really crazy territory, browser vendors could bring EV (Extended Validation) certificates in-house, which would give companies like Mozilla another revenue stream, instead of (bizarrely) outsourcing the profits like they do today. As EV gets increasingly automated (from what I understand, some CAs already have a completely automated EV process), this becomes more practical for software companies like browser vendors.

Unfortunately, the biggest problem with this plan is probably social. Once an industry is created, it’s nearly impossible to get rid of, no matter how unnecessary it becomes. Thus, as a final warning for aspiring protocol designers, be very careful where you insert opportunities for profit, because they’ll be impossible to remove later.


2017-04-25

https://news.ycombinator.com/item?id=14057727
Dylan16807

JIT can be faster than static compilation.

The argument goes that just-in-time (JIT) compilation can be faster than ahead-of-time (AOT) compilation because JIT can specialize based on the common codepaths and runtime data. For the sake of argument, let’s ignore profile-guided optimization (PGO), which nobody likes.

The counter-argument goes that if JIT is so great, why can’t you JIT C code?

It took me a while, but I finally found the answer. It turns out that there is something lighter than JIT, and more adaptive than AOT, and it’s built into every modern CPU: branch prediction!

The argument that CPUs are optimized for C code is both true and false. It’s true that CPUs are optimized for the most common code they run, which historically has been C/C++. However, the deeper truth is that CPUs (especially X86) are optimized as much as possible, period. Those optimizations are just limited by reality (hardware constraints, generality, etc.).

For the most part, C generates control flow that is static enough for the CPU’s branch predictor to handle. But in dynamically typed languages, absolutely everything turns into branches that must be predicted, which overwhelms the hardware.
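
As a rough illustration (hypothetical interpreter code, not from any real engine), even a plain “a + b” in a dynamic language hides branches on the runtime types of both operands:

#include <stdint.h>

typedef enum { T_INT, T_DOUBLE } Tag;
typedef struct { Tag tag; union { int64_t i; double d; } u; } Value;

// Every arithmetic op starts with type checks the CPU has to predict.
static Value value_add(Value a, Value b) {
	Value r;
	if(a.tag == T_INT && b.tag == T_INT) { // fast path
		r.tag = T_INT; r.u.i = a.u.i + b.u.i;
	} else { // slow path: promote to double
		double x = (a.tag == T_INT) ? (double)a.u.i : a.u.d;
		double y = (b.tag == T_INT) ? (double)b.u.i : b.u.d;
		r.tag = T_DOUBLE; r.u.d = x + y;
	}
	return r;
}

In C, the equivalent addition is a single instruction with no branches at all.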

The only dynamically typed language that is both commonly AOT compiled and heavily optimized that I’m aware of is Objective-C, which makes it an interesting case study. Of course, ObjC is not commonly considered that fast, even compared to C++ vtables.

The idea here is that, if you had a fast execution model for dynamically typed languages, you could make JavaScript as fast as V8 without 90% of the overhead.

First, imagine an AOT-compiled JavaScript that just used libobjc (the ObjC runtime which handles dynamic dispatch, amongst other things). It’d be almost as fast as Objective-C (but slower due to the lack of inline C and different idioms) and almost as lightweight (but still more bloated, assuming you need a garbage collector).
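
As a hedged sketch of what that lowering might look like (this is not how any real JavaScript compiler works, and the selector name is made up), a dynamic method call can compile down to a plain objc_msgSend call:

#include <objc/runtime.h>
#include <objc/message.h>

// Hypothetical lowering of the JavaScript expression obj.push(x).
// objc_msgSend performs the dynamic dispatch; the cast gives it a
// concrete signature, as required on modern Objective-C runtimes.
static id js_call_push(id obj, id x) {
	SEL sel = sel_registerName("push:");
	return ((id (*)(id, SEL, id))objc_msgSend)(obj, sel, x);
}

Everything else (property access, arithmetic on boxed numbers) would go through the same machinery.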

Next, the real question: is there an even faster execution model? Based on the idea of working with the CPU branch predictor (rather than duplicating its effort) and doing a bit of extra “branch prediction in software,” I think that there is.

What if you wrote an AOT compiler that generated instructions designed to be changed at runtime? Basically, in pseudocode:

var add(var a, var b) {
	Class type = NULL; // Constant modified at runtime
	if(unlikely(get_type(a) != type || get_type(b) != type)) {
		change_specialization(add, a, b);
	}
	Op op = NULL; // Constant modified at runtime
	return a op b;
}

In this case, Class is probably a class pointer. Op is inline assembly that gets overwritten. The arguments and return value are carefully arranged so that op can be a single CPU instruction (like an add) or a function call (for complicated/custom types).

From this basic setup, you can do a lot of additional optimizations. For example, you can keep two versions of hot functions: one specialized and one generic. That might be useful because change_specialization() is fairly expensive (requiring two calls to mprotect(2) under W^X, although JITs have that same overhead). You can also do more sophisticated runtime profiling to decide exactly when to optimize/deoptimize like JITs do (but that has its own overhead/complexity, so it might not be worth it).
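
To give a sense of that cost, a change_specialization()-style patch under W^X might look roughly like this sketch (hypothetical helper; it assumes the patched slot does not straddle a page boundary, and it omits instruction-cache flushing and error recovery):

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

// Overwrite a small constant or instruction sequence inside an
// executable page: one mprotect(2) to make the page writable, and
// one to restore execute permission.
static int patch_slot(void *slot, void const *bytes, size_t len) {
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	void *start = (void *)((uintptr_t)slot & ~(uintptr_t)(page - 1));
	if(mprotect(start, page, PROT_READ | PROT_WRITE) < 0) return -1;
	memcpy(slot, bytes, len);
	if(mprotect(start, page, PROT_READ | PROT_EXEC) < 0) return -1;
	return 0;
}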

Assuming all goes well, at this point you have a compiled, dynamically typed language that’s basically slightly slower than Golang (or maybe faster when you’re doing lots of dynamic dispatch). If you can replace the garbage collector with reference counting (ARC), you can eliminate the memory bloat too (but you have to deal with cycles… how much overhead does Python’s cycle collector have?).

A statically compiled JavaScript would not be much use in web browsers (unless you target WebAssembly, but that’s pretty ballsy and I don’t know if it lets you do self-modifying code), but it’d be great for Node.js and Electron.

And of course this execution model would work for any dynamic language, including Python, Objective-C, Lisp, etc.


2017-04-13

List of headlines for articles I haven’t written

Programming:

Other:


2017-04-11

I’ve gone back and forth on my opinion of slow, high level scripting languages like Python. Note that I’m considering all such languages together, although Python is the iconic poster snake.

Let me divide languages into four categories:

Note that I am intentionally conflating the language itself with how it is executed. There’s no reason you can’t have a slow C interpreter or a statically compiled, if not fast, version of Python. You can also make JITs for anything (and JITs for Python, Ruby and others exist).

I’ve pretty much conclusively dismissed JITs at this point, since with current technology they are always enormous, complicated resource hogs. I would take a slow interpreter over a bloated JIT any day.

I’d also rather use a mix of high and low level languages (like C and Lua) than a “middle level” language like C++ that tends to do everything badly.

Anyway, using one high level and one low level language seems like a pretty sane and reasonable position. So the following argument against it is a little bit dubious, but still worth considering.

Basically, with the proper libraries and abstractions built on C, it can approach the convenience of Python (Cello is a somewhat ridiculous existence proof). If you take the time to develop that scaffolding, and the skill and experience to wield C as efficiently as someone else writes Python, won’t that leave you in a generally much stronger and more capable position?

This is an argument against using the right tool for the job. If you make a conscious effort to use the most powerful tool, even when it’s wildly inappropriate for the task at hand, won’t that leave you at an advantage in the long term?

Conversely, whenever you bang something out in Python, sure you can get paid and go home, but aren’t you also eroding your engineering sense and judgment?

It’s true that ultimately, economics trumps engineering. However, I wonder if as engineers, that isn’t a really horrible mindset to have.

When you argue that software can be slow because human reaction times are slow, isn’t that an anti-engineering argument? When you argue that computers are fast, or that developer time is more valuable than CPU time (user time), isn’t that anti-engineering?

If you’re just trying to make a buck on joke apps, it’s true that no engineering is necessary. But where does that leave your career, and you as a person? Even if you’re writing joke apps to make a buck, wouldn’t it be better to be practicing skills that will one day let you make something “good,” that will stand the test of time?

That which doesn’t make you stronger is killing you.


2017-03-31
Originally written 2016-12-08

Sufficiently advanced technology is indistinguishable from a deceiving god

One conclusion to draw from Ken Thompson’s historic lecture, Reflections on Trusting Trust, is that we inherently must rely on tools to control machine code and data, and that in order to build secure systems, we need secure tools. However, the tools themselves cannot be verified by the naked eye, and so we need to trust still more tools. Even an extremely simple tool, like a plain LED, could have a tiny computer hidden inside of it, preventing it from lighting up and concealing a larger attack. (With smart light bulbs, this fear is gradually becoming a reality.)

Paranoia over this sort of threat seems to be becoming more common, and probably for good reason. If miniaturization of computer hardware continues apace, we will eventually reach a point where it’s feasible to hide tiny, malicious computers in everything. One might call it an “internet of things.” There are already tiny keyloggers that fit inside the plugs of USB cords.

This concept, of a world full of undetectable forces intent on deceiving us, is surprisingly similar to the one imagined by René Descartes. I tried reading him and found it impenetrable, but hopefully we can trust Wikipedia:

https://en.wikipedia.org/wiki/Cogito_ergo_sum
https://en.wikipedia.org/w/index.php?title=Cogito_ergo_sum&oldid=748668175
Cogito ergo sum

At the beginning of the second meditation, having reached what he considers to be the ultimate level of doubt—his argument from the existence of a deceiving god—Descartes examines his beliefs to see if any have survived the doubt. In his belief in his own existence, he finds that it is impossible to doubt that he exists. Even if there were a deceiving god (or an evil demon), one’s belief in their own existence would be secure, for there is no way one could be deceived unless one existed in order to be deceived.

While reasonable people would agree that a deceiving god doesn’t actually exist, it seems like some people are intent on creating one.

Even under a deceiving technological god, let us assume that our skulls, brains and minds are intact (since biology is hard). Under this scenario, let’s say we can still trust our sense of logic and our own sight and touch. How much more can we know?

A few years ago I saw a tech demo for a “magic mirror”: a video camera with facial recognition software, hooked up to a display showing a computer-generated person. The idea was it would “reflect” the head position and facial expression detected by the camera back, in terms of the CG person’s head and face. It could have applications for animated movies, video conferencing, and who knows what else. (It was slightly different from a newer commercial tech called FaceRig, since it really did try to “mirror” its input.)

The reason I bring this up is because there was an interesting effect at the edges of the “mirror.” When a face is cut off at the camera’s edge, facial recognition fails. That means there is a dead zone around the edges.

With a real mirror, photons are reflected individually. Even if something is at the very edge of the mirror, a single photon can be accurately reflected. But because these “magic mirrors” reflect expressions, they can only interpret photons in aggregate. No matter how advanced or capable they become, they can’t accurately reflect an arbitrary portion of a face.

This concept of “cutting off” also applies to deceptive technology trying to disguise itself. Imagine you have a malicious multimeter, and you’re trying to inspect a malicious circuit. If they are continuously connected, the multimeter can detect a signature in the circuit’s signal, and begin displaying benign (fake) data. But if you can suddenly connect and disconnect the multimeter at any time, and if it must begin displaying data as soon as it is connected, it fundamentally can’t know what data to display until it is too late.

This was one of the design principles of the hardware isolated laptop[#]. Under the assumption of a bunch of compromised hardware modules, perhaps you could keep them in check by limiting their communication to low-bandwidth, inspectable channels. Even if you weren’t inspecting them at a given point in time, the threat that you could inspect them without warning would prevent malicious behavior. And because of the edge effect, other malicious tools couldn’t hide their behavior either.

Now, this requires being able to detect, observe and control these “edges,” wherever they are. That brings us to the problem of energy gapping:

https://news.ycombinator.com/item?id=12275826
nickpsecurity

Clive Robinson on Schneier’s blog came up with a thorough notion years ago: energy leaks plus “energy gapping” systems. He said if any form of energy or matter could transfer from one system toward another device then it should be considered a potential leak. So, you have to physically isolate them then block the whole spectrum practically.

Without some method of full isolation, you can’t create an “edge.” So despite all the other problems, this one tempered my enthusiasm for the hardware isolated laptop project.


2017-03-25

An Interstellar Laser Cutter For Long-Distance Massless Transit

I’ve been wanting to write up various ideas I’ve come up with. Depending on motivation levels, this might become a series.

This idea is, I would say, the optimal approach for long-distance space travel, given our current understanding of physics. It lets you travel at the speed of light, plus some constant overhead.

The basic idea is that you use an extremely powerful laser to burn and cut a distant planet, moon or asteroid, thereby (somehow) bootstrapping artificial life. The main downside is that it’s a “post-singularity technology,” which is to say it’s not useful without being able to create general artificial intelligence or biological life from scratch.

The laser would have to be in space, to avoid atmospheric distortion, and a target would need to be chosen without an atmosphere and with some sort of suitable surface material. It would probably emit something much higher energy than visible light.

Advantages:

Limitations:

I’ve posted cynical things about space exploration[#] before. I think that as long as our ideas and technology are rapidly improving, the fastest approach is to wait. That said, a giant space laser will still require conventional rockets, so I’m not opposed to multiple approaches being tried in parallel.


2017-03-10

The separation of law and morality

The separation of church and state has been great. It’s good for the country and good for religion. I’d like to see it taken further.

There are two problems with legislating a moral code:

  1. People disagree on what is and isn’t moral
  2. What works or doesn’t work on an individual scale can be very different from what works or doesn’t for society as a whole

The first point should be self-explanatory. Separating law and morality would get a lot of divisive, hot-button issues out of politics.

The second point means, in part, that we should adopt harm-reduction strategies without necessarily condoning everything.

Basically, bans and prohibition are awful ways to influence people’s behavior, when it comes to things that they want to do. I don’t care if it’s marijuana, abortion, circumcision, or sugar: try to take away something enough people want (or think they want) and they will fight you.

Whether these things (and others) are good or bad, we should legalize and regulate them because it’s the only thing that works. Then you can have a second, social component to discourage them, as you find appropriate.

Some things, like probably heroin, really are genuinely bad. However, simply banning them doesn’t seem to be working that well, and we’ve already tried “tougher enforcement” enough times.

Now, whenever someone points out that prohibition doesn’t work, someone else always points out, “but we prohibit murder!” Murder is different because pretty much everyone agrees it shouldn’t be allowed, and it generally doesn’t develop its own underground communities and black markets. (There are gangs and crime families but they’re usually based around something profitable, rather than people-hunting for sport.)

Like the separation of church and state, the separation of law and morality is good for both sides: laws can be written based around what is effective and enforceable, and morality can be unconstrained by practicality (or other people) to determine absolute right and wrong (if that’s your thing).

Assuming we find this idea mutually agreeable, let’s put it into practice at the earliest opportunity. Thanks.


2017-02-12

What’s been going on lately? Well, progress has been slow.

I mentioned[#] my research into consensus algorithms… almost a month ago? Well, on the one hand, I’ve figured out what I’m trying to do. On the other, I haven’t really been able to do it.

At this point, my goal has moved from “write a consensus algorithm” to “write an abstraction layer for consensus algorithms.” The idea is to write a minimal single-file library that has one universal consensus API, plus a variety of modular back-ends including Paxos, Raft and Bitcoin.

In theory, the interface of a consensus algorithm is extremely simple. You put in local events and get out global events in a uniform order. However, the details are always devilish. You have to worry about peer discovery, membership changes, networking, persistent storage, and weird back-end quirks like Bitcoin being implemented as RPC to a daemon process. All of the existing libraries I’ve seen have obtuse APIs of their own.
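
To give a flavor of how simple the surface area could be, here is a hypothetical shape for such an API; these names are mine, not libconsensus’s, and all the hard parts hide behind them.

#include <stddef.h>

typedef struct consensus consensus;

// Called once per globally ordered event, in the same order on every node.
typedef void (*consensus_apply_fn)(void *ctx, void const *event, size_t len);

int  consensus_open(consensus **out, char const *backend); // "raft", "paxos", ...
int  consensus_submit(consensus *c, void const *event, size_t len); // local events in
int  consensus_poll(consensus *c, consensus_apply_fn apply, void *ctx); // global events out
void consensus_close(consensus *c);

Peer discovery, membership changes, storage, and back-end quirks are exactly the details that make even this small surface hard to pin down.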

For libkvstore, I was able to use the LMDB API almost as-is, just simplifying and generalizing it a bit. For libconsensus, I’m trying to design an API from scratch, in a field where I am obviously far from an expert. It’s tough.

The good news is that the libkvstore distributed back-end is mostly done (barring a few tweaks), which means that once we have a consensus algorithm to plug into it, it’ll be pretty much ready to serve. And the SQLite port is at least functional, albeit buggy. So aside from the hard part, this might be the fastest ever development of a distributed database!

The path to get here: StrongLink -> libkvstore -> libconsensus.

It’s been a long time coming and there’s just one last Everest to summit.


2017-01-29

The Firefox case study

There’s been some consternation over the present and future of Firefox.

I feel like there hasn’t been enough critical analysis of what went wrong, what Mozilla should’ve done instead, and what they should do now to fix it. This should be a case study taught as part of every software engineering curriculum in the land.

First let me say I don’t work at Mozilla, and none of my friends work at Mozilla. On one hand that might make me an uninformed outsider; on the other, it gives me some distance to analyze the situation more objectively.

Also I can’t gloss over the Brendan Eich fiasco. I’ll just say that the situation was more complex than most thought at the time. I think it would’ve been more productive to put aside his personal politics for the sake of the open web and the Mozilla Foundation’s goals. Nobody’s perfect, eating your own, chilling effects, etc.

But it might not’ve made that much difference, because his spinoff browser Brave isn’t that amazing either. Frankly if he had forked Firefox it might’ve been a second-coming-of-Steve-Jobs-type scenario, but by using Electron he basically admitted he was out of shit.

Which, I think, approaches the heart of the matter. Rewriting Netscape killed the company once, and rewriting Firefox is threatening to kill it again. No matter how great Servo will be in five years, Firefox is stagnating today. Project Quantum is good lip service but doesn’t make up for the lack of work on Firefox directly.

Even if Servo is not officially a Firefox replacement, it’s drawing from the same pool. All of the low level, “systems” programmers who might work on fixing Firefox are obviously not going to bother, since everyone has written it off at this point. Instead they’ll either contribute to Servo or work on something else entirely. Unfortunately even with everyone excited to work on Servo, creating a new browser from scratch these days is a bottomless money pit.

This social problem is made worse by what I see as Mozilla having too many web developers. Web developers are ironically useless for developing a web browser, because they can’t think outside the (browser) box. Mozilla’s many GUI experiments and makeovers are symptomatic of this problem. Firefox is a building with structural problems and they hired a lot of interior designers to give it a fresh coat of paint.

While Firefox may already be past the tipping point in terms of developer mindshare, I don’t think it’s too late to fix it from a technical perspective. That said, it’s critical to refactor, not rewrite, and the major stumbling block in that direction is backwards compatibility.

In order for Firefox’s code to not be awful, they have to get rid of XUL. That means dropping support for the historical extension API, which is essentially all of XUL. That provokes outrage amongst the current userbase. Strangely, no one minds that the long-awaited Servo won’t support XUL or current Firefox extensions either.

I think the only reasonable way out of this situation is an official fork of Firefox. Preserve the Firefox we know and (almost) love today, while working on a new version that is unconstrained by all of the current bullshit. Sort of like Servo-lite, except that removing bad stuff is a lot easier than creating good stuff from scratch.

I would set a ground rule: no UI changes during this fork. It’s not a playground for happy fun experiments. And yes, I would make it support Chrome extensions out of the box, because they have a relatively sane API, despite its limitations.

I would keep writing it in C++, with much greater security provided by a separate sandboxing layer[#]. But for the sake of argument let’s say it has to move to Rust for political reasons. In that case I’d use automatic translation, starting with one module at a time. In my understanding, Corrode can only translate from C to Rust, and there are ABI stability issues when closely integrating between Rust and C++. Those problems would need to be addressed.

Then, once the fork was good enough to be adopted by most techies, and then a safe while longer, I’d rebrand it back to Firefox and push it out through auto-update to existing users who didn’t opt out.

I’d also go crawling back to Google for the default search engine. Switching to Yahoo was an awful idea even at the time, driven by fear and ideology rather than any sense of strategy.
