notes.public

2017-03-25

An Interstellar Laser Cutter For Long-Distance Massless Transit

I’ve been wanting to write up various ideas I’ve come up with. Depending on motivation levels, this might become a series.

This idea is, I would say, the optimal approach for long-distance space travel, given our current understanding of physics. It effectively gets you there at the speed of light, plus some constant time overhead.

The basic idea is that you use an extremely powerful laser to burn and cut a distant planet, moon or asteroid, thereby (somehow) bootstrapping artificial life. The main downside is that it’s a “post-singularity technology,” which is to say it’s not useful without being able to create general artificial intelligence or biological life from scratch.

The laser would have to be in space, to avoid atmospheric distortion, and a target would need to be chosen without an atmosphere and with some sort of suitable surface material. It would probably emit something much higher energy than visible light.

Advantages:

Limitations:

I’ve posted cynical things about space exploration[#] before. I think that as long as our ideas and technology are rapidly improving, the fastest approach is to wait. That said, a giant space laser will still require conventional rockets, so I’m not opposed to multiple approaches being tried in parallel.


2017-03-10

The separation of law and morality

The separation of church and state has been great. It’s good for the country and good for religion. I’d like to see it taken further.

There are two problems with legislating a moral code:

  1. People disagree on what is and isn’t moral
  2. What works or doesn’t work on an individual scale can be very different from what works or doesn’t for society as a whole

The first point should be self-explanatory. Separating law and morality would get a lot of divisive, hot-button issues out of politics.

The second point means, in part, that we should adopt harm-reduction strategies without necessarily condoning everything.

Basically, bans and prohibition are awful ways to influence people’s behavior, when it comes to things that they want to do. I don’t care if it’s marijuana, abortion, circumcision, or sugar: try to take away something enough people want (or think they want) and they will fight you.

Whether these things (and others) are good or bad, we should legalize and regulate them because it’s the only thing that works. Then you can have a second, social component to discourage them, as you find appropriate.

Some things, heroin probably among them, really are genuinely bad. However, simply banning them doesn’t seem to be working that well, and we’ve already tried “tougher enforcement” enough times.

Now, whenever someone points out that prohibition doesn’t work, someone else always points out, “but we prohibit murder!” Murder is different because pretty much everyone agrees it shouldn’t be allowed, and it generally doesn’t develop its own underground communities and black markets. (There are gangs and crime families but they’re usually based around something profitable, rather than people-hunting for sport.)

Like the separation of church and state, the separation of law and morality is good for both sides: laws can be written based around what is effective and enforceable, and morality can be unconstrained by practicality (or other people) to determine absolute right and wrong (if that’s your thing).

Assuming we find this idea mutually agreeable, let’s put it into practice at the earliest opportunity. Thanks.


2017-02-12

What’s been going on lately? Well, progress has been slow.

I mentioned[#] my research into consensus algorithms… almost a month ago? Well, on the one hand, I’ve figured out what I’m trying to do. On the other, I haven’t really been able to do it.

At this point, my goal has moved from “write a consensus algorithm” to “write an abstraction layer for consensus algorithms.” The idea is to write a minimal single-file library that has one universal consensus API, plus a variety of modular back-ends including Paxos, Raft and Bitcoin.

In theory, the interface of a consensus algorithm is extremely simple. You put in local events and get out global events in a uniform order. However, the details are always devilish. You have to worry about peer discovery, membership changes, networking, persistent storage, and weird back-end quirks like Bitcoin being implemented as RPC to a daemon process. All of the existing libraries I’ve seen have obtuse APIs of their own.
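
To make that concrete, here’s a rough sketch of the kind of API I have in mind. These names are hypothetical, not an actual libconsensus interface:

    /* Hypothetical single-file consensus API (illustrative names only). */
    #include <stddef.h>

    typedef struct consensus consensus_t;

    /* Delivery callback: invoked once per event, in the same global
     * order on every peer. */
    typedef void (*consensus_apply_cb)(void *ctx, void const *event, size_t len);

    /* The back-end ("paxos", "raft", "bitcoin-rpc", ...) would hide peer
     * discovery, membership changes, networking and storage. */
    int consensus_open(consensus_t **out, char const *backend,
                       char const *config, consensus_apply_cb cb, void *ctx);

    /* Submit a local event. It only becomes "real" once the callback
     * delivers it back in its global position. */
    int consensus_submit(consensus_t *c, void const *event, size_t len);

    void consensus_close(consensus_t *c);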

For libkvstore, I was able to use the LMDB API almost as-is, just simplifying and generalizing it a bit. For libconsensus, I’m trying to design an API from scratch, in a field where I am obviously far from an expert. It’s tough.

The good news is that the libkvstore distributed back-end is mostly done (barring a few tweaks), which means that once we have a consensus algorithm to plug into it, it’ll be pretty much ready to serve. And the SQLite port is at least functional, albeit buggy. So aside from the hard part, this might be the fastest ever development of a distributed database!

The path to get here: StrongLink -> libkvstore -> libconsensus.

It’s been a long time coming and there’s just one last Everest to summit.


2017-01-29

The Firefox case study

There’s been some consternation over the present and future of Firefox.

I feel like there hasn’t been enough critical analysis of what went wrong, what Mozilla should’ve done instead, and what they should do now to fix it. This should be a case study taught as part of every software engineering curriculum in the land.

First let me say I don’t work at Mozilla, and none of my friends work at Mozilla. On one hand that might make me an uninformed outsider; on the other, it gives me some distance to analyze the situation more objectively.

Also I can’t gloss over the Brendan Eich fiasco. I’ll just say that the situation was more complex than most thought at the time. I think it would’ve been more productive to put aside his personal politics for the sake of the open web and the Mozilla Foundation’s goals. Nobody’s perfect, eating your own, chilling effects, etc.

But it might not’ve made that much difference, because his spinoff browser Brave isn’t that amazing either. Frankly if he had forked Firefox it might’ve been a second-coming-of-Steve-Jobs-type scenario, but by using Electron he basically admitted he was out of shit.

Which, I think, approaches the heart of the matter. Rewriting Netscape killed the company once, and rewriting Firefox is threatening to kill it again. No matter how great Servo will be in five years, Firefox is stagnating today. Project Quantum is good lip service but doesn’t make up for the lack of work on Firefox directly.

Even if Servo is not officially a Firefox replacement, it’s drawing from the same pool. All of the low level, “systems” programmers who might work on fixing Firefox are obviously not going to bother, since everyone has written it off at this point. Instead they’ll either contribute to Servo or work on something else entirely. Unfortunately even with everyone excited to work on Servo, creating a new browser from scratch these days is a bottomless money pit.

This social problem is made worse by what I see as Mozilla having too many web developers. Web developers are ironically useless for developing a web browser, because they can’t think outside the (browser) box. Mozilla’s many GUI experiments and makeovers are symptomatic of this problem. Firefox is a building with structural problems and they hired a lot of interior designers to give it a fresh coat of paint.

While Firefox may already be past the tipping point in terms of developer mindshare, I don’t think it’s too late to fix it from a technical perspective. That said, it’s critical to refactor, not rewrite, and the major stumbling block in that direction is backwards compatibility.

In order for Firefox’s code to not be awful, they have to get rid of XUL. That means dropping support for the historical extension API, which is built on XUL through and through. That provokes outrage amongst the current userbase. Strangely, no one minds that the long-awaited Servo won’t support XUL or current Firefox extensions either.

I think the only reasonable way out of this situation is an official fork of Firefox. Preserve the Firefox we know and (almost) love today, while working on a new version that is unconstrained by all of the current bullshit. Sort of like Servo-lite, except that removing bad stuff is a lot easier than creating good stuff from scratch.

I would set a ground rule: no UI changes during this fork. It’s not a playground for happy fun experiments. And yes, I would make it support Chrome extensions out of the box, because they have a relatively sane API, despite its limitations.

I would keep writing it in C++, with much greater security provided by a separate sandboxing layer[#]. But for the sake of argument let’s say it has to move to Rust for political reasons. In that case I’d use automatic translation, starting with one module at a time. In my understanding, Corrode can only translate from C to Rust, and there are ABI stability issues when closely integrating between Rust and C++. Those problems would need to be addressed.

Then, once the fork was good enough to be adopted by most techies, and then a safe while longer, I’d rebrand it back to Firefox and push it out through auto-update to existing users who hadn’t opted out.

I’d also go crawling back to Google for the default search engine. Switching to Yahoo was an awful idea even at the time, driven by fear and ideology rather than any sense of strategy.


2017-01-28

Database servers versus embedded databases

To most programmers today, a database is something you connect to. I want to try to explain why that kind of sucks, and what the alternative is.

Common databases like PostgreSQL, MySQL, MongoDB, and others are all “database servers.” That means the database is controlled by some daemon process that you talk to indirectly, usually over Unix domain sockets or TCP.

There is an alternate world of databases, known as embedded databases (or database libraries), which run in your application process and which you talk to using plain old function calls. The best known of these is of course SQLite, but there are also key-value stores like BerkeleyDB, LMDB and LevelDB, another SQL DB called Firebird, and a smattering of others (WiredTiger, Sophia, etc.).

The most important advantage of embedded databases is that they are application-local. By that I mean that they don’t need to be installed or configured separately from the application. Often they are statically linked (for SQLite you drop two C files into your build system) and they are configured directly from your code. Users don’t have to worry about database management: if your application is running, the DB is running. Multiple applications that want to use the same database system don’t conflict, because they don’t use any global resources (e.g. ports or configuration). Installing your application doesn’t permanently drain memory with a background DB always running.
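
As a minimal sketch of what “application-local” means in practice, this is more or less the whole setup for an embedded SQLite database (the file and table names here are made up):

    /* Compile the SQLite amalgamation straight into the app, e.g.:
     *   cc -o app app.c sqlite3.c -lpthread -ldl
     * No daemon to install or configure: if the app runs, the DB runs. */
    #include <stdio.h>
    #include "sqlite3.h"

    int main(void) {
        sqlite3 *db = NULL;
        if (SQLITE_OK != sqlite3_open("app.db", &db)) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS notes"
            " (id INTEGER PRIMARY KEY, body TEXT);", NULL, NULL, NULL);
        sqlite3_close(db);
        return 0;
    }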

These advantages are most critical for native and mobile apps, but I think they’re compelling even for server software. The problem of DB server configuration and conflicts is, I think, part of the reason behind the push for things like Docker. I have nothing against Docker, but in a sense it’s just a static linker for processes, and now you have the additional challenge of maintaining persistent state inside of containers. If you want to use Docker that’s fine, but you shouldn’t be forced to in order to have isolation between DBs.

Embedded databases are also typically more secure, by virtue of having less attack surface. Just in the past couple weeks there has been a rash of “hacks” of thousands of misconfigured (by default) MongoDB, CouchDB and other servers. If your DB doesn’t even bind to TCP, it’s much harder to accidentally leave it exposed. Not to mention the inherent paradox (and hassle) of storing an unencrypted password with your application in order to connect to the DB.

What about performance? Well, remember the ongoing bus-wreck that is kdbus (now bus1)? The complaint is that DBus is slow because it’s not in the kernel. Your application (in userspace) has to switch to the kernel, then the kernel has to switch to the DB process. Then the same process in reverse to get the response back. If you are reading from the DB, it hopefully has its data cached, meaning these context switches are the biggest overhead involved. If you are writing, nothing should actually touch disk until the transaction commits, and each transaction probably has several statements. If you remember Jeff Dean’s “list of latency numbers every programmer should know,” context switches are ‘not good.’ An embedded DB is even faster than kdbus: it’s in user-space, in your process. No context switch, no syscall (cf. LMDB).
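
For illustration, here’s roughly what a cached read looks like with LMDB. Everything below is a plain function call in your own process, and the value comes straight out of the memory map (this is a sketch; error handling is abbreviated):

    #include <errno.h>
    #include <string.h>
    #include <lmdb.h>

    /* Copy the value stored under `k` into the caller's buffer. No
     * socket, no context switch: mdb_get() returns a pointer directly
     * into the shared memory map. */
    int lookup(MDB_env *env, MDB_dbi dbi, char const *k,
               void *buf, size_t buflen, size_t *outlen) {
        MDB_txn *txn = NULL;
        MDB_val key = { strlen(k), (void *)k }, val;
        int rc = mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
        if (0 != rc) return rc;
        rc = mdb_get(txn, dbi, &key, &val);
        if (0 == rc) {
            *outlen = val.mv_size;
            if (val.mv_size <= buflen) memcpy(buf, val.mv_data, val.mv_size);
            else rc = EINVAL; /* caller's buffer is too small */
        }
        mdb_txn_abort(txn); /* read-only txn: abort just releases the snapshot */
        return rc;
    }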

The cost of context switches is more insidious than it might seem. If you have a high fixed cost, the way to increase performance is bigger batching. However that means you are forced to use larger and more complex queries. Whether you think pushing more logic into the DB is a good idea or not, being forced to do it for performance reasons is unpleasant. One example of when it can be annoying is when you want to filter a list of DB results client-side, while simultaneously using a LIMIT clause. Because some results are being discarded, LIMIT can’t be used. Depending on the database server and client library, that may be more painful or less. (A simple key-value store API becomes atrociously slow because each operation is so small.)
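
With an embedded DB, by contrast, each row costs roughly one function call, so you can keep the query dumb, filter in application code, and still stop early. A sketch, reusing the hypothetical notes table from above (keep() is an application-side predicate):

    #include "sqlite3.h"

    /* Step through rows one at a time, discard non-matches in the app,
     * and stop as soon as we have n matches, with no LIMIT involved. */
    int first_n_matches(sqlite3 *db, int n, int (*keep)(sqlite3_stmt *)) {
        sqlite3_stmt *stmt = NULL;
        int rc = sqlite3_prepare_v2(db,
            "SELECT id, body FROM notes ORDER BY id;", -1, &stmt, NULL);
        if (SQLITE_OK != rc) return rc;
        int found = 0;
        while (found < n && SQLITE_ROW == sqlite3_step(stmt)) {
            if (keep(stmt)) found++;
        }
        sqlite3_finalize(stmt);
        return SQLITE_OK;
    }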

The traditional complaint about SQLite is “bad concurrency,” but I think that’s mostly due to confusion and bad defaults. For the sake of backwards compatibility, SQLite3 ships with WAL mode disabled. You should turn it on, always. You should also set a “busy timeout,” so that SQLite will seamlessly handle connection busy errors that make people think it doesn’t scale. (Also check the SQLite pragma docs because there are a lot of neat settings in there.)
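
Concretely, both settings are a single call each, applied right after opening the database:

    #include "sqlite3.h"

    /* Settings worth applying unconditionally after sqlite3_open(). */
    int configure(sqlite3 *db) {
        /* WAL lets readers proceed concurrently with a writer. */
        int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;", NULL, NULL, NULL);
        if (SQLITE_OK != rc) return rc;
        /* Retry busy operations for up to 5s instead of failing immediately. */
        return sqlite3_busy_timeout(db, 5000);
    }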

Even in WAL mode, SQLite can only handle a single concurrent writer. However, in my experience, the sheer overhead of SQL and the bad write performance of b-trees are much more important bottlenecks. Only once you’ve made your transactions as fast as possible should you start to worry about overlapping them, because the gains of overlapping them are fundamentally limited. These are problems that all popular databases have, whether embedded or server. There are embedded solutions for each of them (SQL performance: embedded key-value stores; b-tree write overhead: LSM-trees or fractal trees; concurrent writes: non-transactional write batches).

What if you are running a large server farm? Well I admit that’s not something I’ve done. But I think there would be a performance advantage to co-locating the app and DB on the same servers. If you need horizontal scaling, then you might want an embedded, distributed database, which is something I’m working on. (I don’t know of any existing ones, probably because the advantages of embedded databases are not widely recognized.)

I’ll grant that if you want to put a security boundary between your application and DB (in other words, if you don’t trust your application), then a database server is the best way to do that. But most DBs aren’t locked down against malicious applications, and it can be difficult to implement sufficiently fine-grained permissions without custom code. (If you’re worried about the attack surface of your application, you should also be worried about the attack surface of your DB, which probably isn’t very well hardened either.)

What are the other advantages of DB servers? Well, your DB can be written in any language, like Golang (surprisingly common, e.g. CockroachDB or etcd). But embedding Go is supposed to be possible.

Are there other reasons to prefer DB servers? Maybe, but I can’t think of them. This may have implications for the current trend of microservices, but I’ll save that for another time.

Even if you’re still convinced DB servers are nicer to use, embedded DBs still have one final advantage: an embedded DB can always be put in a server wrapper. If the DB is developed as a server from the start, it’s a lot more work to turn it into a library.


2017-01-25

The Detector

A woman of about 20 looked blankly up at the scaffolding and metal that towered over her in the hangar. It was a large radar array, still in operation since before the end of the Cold War.

She was wearing a plain, knee-length dress and her hair hung lifelessly around her shoulders. Her feet were bare on the concrete floor. She stood staring for another moment when a siren sounded in the distance. She turned slowly and walked to a small door in the corner of the large building.

Outside, the siren echoed into the night. An army jeep with three men in camo uniforms and hats sped past, but she gave no sign of noticing it. Adjacent was a small office building where another jeep parked and men scrambled out. She slowly walked over to them and followed them inside.

There was a commotion inside the control building. Rows of men at monitors were trying to confirm what they had just picked up. The major general stood at the back of the room, looking distinguished and concerned.

“Any report from the other stations?” asked the general.

“Nothing yet, sir,” the comms officer replied.

The girl walked past them down the last row of desks.

“I want a full diagnostic on the primary and secondary radar. See if you can bring the third array online. Everyone at full alert. No one’s going back to bed until I know what happened.”

She left.

At the national laboratory scientists were just arriving for the morning. Men and women in lab coats were grabbing coffee and sitting down at desks to check email.

The girl with bare feet was there in the hallway. She walked past a line of offices and came upon a stairwell. She descended it into the basement.

The basement was where a large and expensive experiment to measure gravity was being performed. Coils for enormous electromagnets were in the center of the room, and cooling equipment lined one wall. A few lab technicians milled about and poked laptops.

She stood in front of the apparatus. Suddenly one of the lab techs became excited.

“Guys, look at this!”

The others gathered around. “Whoa, it’s off the charts!”

“Get Dr. Martin!”

Dr. Martin arrived quickly, holding a coffee with half of it on his shirt. The techs stepped back from the laptop and he started reviewing the data.

“There must be some sort of error.”

The girl disappeared.

In an otherwise empty undergrad physics lab, a man of about 20 was working alone. He was doing some simple experiments with an antenna and an oscilloscope.

The young woman appeared behind him. She walked up to the antenna and looked at it.

It had been two years since he had gotten a scholarship and she had gone to a local school. She had made new friends and a few boys from her classes had even taken interest in her, but that wasn’t what she cared about. She had lost faith in a lot of things since then: school, other people, physics. It seemed like nothing was real anymore.

The wave readout on the oscilloscope became frantic. The young man glanced up from the lab notebook where he had been scribbling a calculation, confused.

He tried tuning the oscilloscope, and then stood up to adjust the antenna. The woman stood next to him as he tilted it left and right.

When that didn’t help, he sat back down again and studied the scope’s digital display. He scratched his head and looked around. He hesitantly called out, “Lindsey?”

“Hey.” She became visible and solid.


2017-01-16

For the past week or two, I’ve been working on designing a new consensus algorithm.

My original intent was to stay as far away from consensus algorithms as possible, since they’re widely known to be Really Hard. But I was working on a distributed back-end for libkvstore, and none of the existing libraries for Raft or Paxos that I found met my requirements. Plus some of them I couldn’t even figure out how to use.

So I started down the dark path of reading the Raft paper, either to learn how to use a Raft library, or to implement my own. Then I had a flash of insight that made me think it’d be easy to design a new algorithm from scratch.

Now a couple weeks later, I just encountered my second “back to the drawing board” moment. So I figured I’d write about these gotchas of consensus.

Let it be known that I am still just a beginner in this field. I only got drawn into it grudgingly.

Gotcha #1: You need two round trips to commit in the general case.

It seems logical that you can just broadcast a write to all of your peers. Once you get approval from a simple majority (N/2+1), the write is committed and you’re free to commit it locally. And, so far as I know, that does in fact work for a single possible writer and a single possible write.

However if you allow multiple writers, which naively you might try to do, you can easily get a split vote where no quorum is possible. (With five peers and three competing writers, the votes can split two, two, and one, so nothing reaches the three needed.) Even worse, if one or more peers is unresponsive, you can enter an ambiguous situation where you don’t know which write reached a majority, if any. That can produce a deadlock with just one unavailable peer, when the algorithm should be able to handle almost half.

In order to resolve that generally, you need two round trips, which is why Basic Paxos has a “prepare” phase and an “accept” phase. The prepare phase locks out previous writers so they don’t interfere, and then the accept phase either writes a single value or unambiguously fails.

Gotcha #2: Of course, it isn’t that easy.

The problem is that an “accept” can be interrupted by someone else’s “prepare,” bringing back the ambiguity. You can have a write that is partially accepted and partially rejected because some of the peers promised to only accept someone else’s write instead. In the presence of even one unavailable node, the outcome can be indeterminate.

Naturally it turns out that Basic Paxos has a solution. (To be clear, I’m not sure I totally understand it yet, which is one reason I’m writing it down.) Basically, when you go to write, you are told if there is a write that is potentially ambiguous, and you’re obliged to “force it through” before doing your own write. It’s like if a door closes on you while you’re walking through it, so the person behind you gives you a kick to get through. (Or, you know, reopens it for you.)
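
Here’s a toy, in-memory sketch of single-decree Basic Paxos as I currently understand it, including that rule: if any acceptor reports a previously accepted value, the proposer is obliged to adopt it in place of its own. (Networking, persistence and retries are all omitted; this is a sketch, not a verified implementation.)

    #include <stdio.h>

    #define N 5 /* number of acceptors */
    #define QUORUM (N / 2 + 1)

    typedef struct {
        int promised;   /* highest ballot promised (0 = none) */
        int accepted_n; /* ballot of the accepted value (0 = none) */
        int accepted_v;
    } Acceptor;

    static Acceptor acc[N];

    /* Phase 1: promise to ignore lower ballots; report any accepted value. */
    static int prepare(Acceptor *a, int n, int *out_n, int *out_v) {
        if (n <= a->promised) return 0;
        a->promised = n;
        *out_n = a->accepted_n;
        *out_v = a->accepted_v;
        return 1;
    }

    /* Phase 2: accept unless a higher ballot was promised in the meantime. */
    static int accept(Acceptor *a, int n, int v) {
        if (n < a->promised) return 0;
        a->promised = n;
        a->accepted_n = n;
        a->accepted_v = v;
        return 1;
    }

    /* One proposal round (ballot numbers must be unique per proposer).
     * The "force it through" rule: if any promise reports an accepted
     * value, we propose that value instead of our own. */
    static int propose(int ballot, int value) {
        int promises = 0, hi_n = 0, hi_v = 0;
        for (int i = 0; i < N; i++) {
            int an, av;
            if (prepare(&acc[i], ballot, &an, &av)) {
                promises++;
                if (an > hi_n) { hi_n = an; hi_v = av; }
            }
        }
        if (promises < QUORUM) return -1; /* couldn't lock out other writers */
        if (hi_n > 0) value = hi_v; /* an earlier write may have committed */
        int accepts = 0;
        for (int i = 0; i < N; i++) {
            if (accept(&acc[i], ballot, value)) accepts++;
        }
        return accepts >= QUORUM ? value : -1;
    }

    int main(void) {
        printf("ballot 1 committed: %d\n", propose(1, 42)); /* 42 */
        printf("ballot 2 committed: %d\n", propose(2, 99)); /* forced back to 42 */
        return 0;
    }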

A weird thing about this is that even a write that you thought failed (say it was rejected by all peers but one) can end up succeeding anyway. It might succeed or it might not, depending on which peers are up and which are down, or the order they respond in. That violates my understanding of “reached a quorum == committed, otherwise not.”

Anyway, that’s my understanding so far. We’ll see if this new algorithm thing pans out, or whether this was just a time-consuming learning experience.


2017-01-13

What’s wrong with seccomp?

Following up on countless previous sandboxing posts, but especially “what is a sandbox?”[#]

seccomp-bpf is basically a system call firewall. Which is exactly what I call for in the aforementioned post. It’s widely used in Chrome, Firefox, Firejail (a general-purpose application sandbox, like what I wanted), etc.

I think the problem with it is that it is (so far as I know) always used with application-specific rules. Obviously its use in Chrome and Firefox is tuned for those programs, and Firejail uses application-specific profiles to decide which system calls to allow.

This has three negative effects:

  1. Large/complex/powerful applications that use a lot of syscalls get less protection
  2. The allowed syscalls are determined primarily/entirely by the application and what it needs/wants to use, rather than which are most likely to be secure
  3. The syscall profile is less tested because it is unique and changes along with each app

So, what is the alternative?

Well, as hinted at before, start by locking down all of the system calls as much as possible. Then build a “syscall emulator” that reimplements all of the blocked syscalls in terms of the few allowed ones.
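
A minimal sketch of that starting point, using libseccomp (the exact whitelist below is illustrative; choosing it well is the hard part):

    #include <seccomp.h>

    /* Deny-by-default filter with a tiny, application-independent
     * whitelist. The syscall emulator would be built on top of these
     * few calls. Link with -lseccomp. */
    int lockdown(void) {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL); /* default: kill */
        if (NULL == ctx) return -1;
        int rc = 0;
        rc |= seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
        rc |= seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        rc |= seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
        /* ...plus whatever minimal set (mmap? futex?) proves unavoidable. */
        if (0 == rc) rc = seccomp_load(ctx);
        seccomp_release(ctx);
        return rc;
    }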

So for example, open(2) might be blocked. Instead, the sandbox opens a single file at startup, which the emulator treats as a block device, running its own (sandboxed) file system inside it. This is obviously a heavy-handed approach, and it might be more convenient to allow native filesystem access, but at least it would be very secure and protect against almost all filesystem bugs (except those that could be triggered by reading or writing to a single file).

Browsers sandbox network access by tunneling traffic through special protocols (WebSockets, WebRTC). However all of these APIs have high overhead, which prevents things like the implementation of DHT in WebTorrent. A sandbox could reduce this overhead by allowing raw UDP, except with a special header. This header would identify the payload as untrusted and prevent it from interfering with applications that didn’t expect it. (Of course you’d still need other restrictions too, to prevent denial of service attacks.)
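
A rough sketch of what the sending side might look like (the 4-byte marker value is invented for illustration):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* The sandbox tags every outgoing datagram so receivers can tell the
     * payload came from untrusted code. "SBX0" is a made-up marker. */
    #define SANDBOX_MAGIC "SBX0"

    ssize_t sandboxed_sendto(int fd, void const *buf, size_t len,
                             struct sockaddr const *to, socklen_t tolen) {
        char pkt[4 + 1472]; /* marker + a typical max UDP payload */
        if (len > sizeof(pkt) - 4) return -1;
        memcpy(pkt, SANDBOX_MAGIC, 4);
        memcpy(pkt + 4, buf, len);
        return sendto(fd, pkt, 4 + len, 0, to, tolen);
    }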

How do you decide what syscalls to allow? In a CPU sandbox like NaCl, there are three basic concerns:

The bare minimum (for most purposes) is a Turing-complete subset. However, you will probably need to choose/add some instructions for efficiency, too (MOV-only programs are very slow). At that point, the overriding concern becomes how easy it is to create a correct validator.

For other types of sandboxes, like system calls, the concept of Turing completeness doesn’t apply. I like to generalize it to what I call “hardware completeness,” for lack of a better term. That simply means that all of the features of the hardware (disk storage, networking, camera, mic, USB) should be possible to expose to sandboxed applications.

If any bugs in the underlying platform are discovered which affect the security of the sandbox, the sandbox can (hopefully) be changed to a different or more restrictive feature subset, without impacting any (correct, non-malicious) sandboxed programs. Of course if the platform is too broken then eventually sandboxing simply becomes impossible.

In the concrete case of emulating system calls on Linux, there is a problem for some software like Golang and libuv which perform system calls directly. These calls might be hard to intercept efficiently, without slowing down the whole program. Perhaps the simplest approach is to treat the sandbox as its own non-Linux platform, and require that applications for it follow its own syscall ABI. In other words, add special back-ends for libc, Go and libuv. It might even be possible to support applications targeting a different platform, like Windows, as long as there’s a suitable place to add a shim.

In conclusion, I think this is the first “complete” sketch of a sandbox that is efficient, feasible to build, maximally secure, and potentially able to run unmodified applications.


2017-01-07

Following up on the previous post[#] about the difficulty of developing a “successful” technology, this is a post about how technologies do develop.

The basic answer is: gradually.

Guideline one: The problem should precede the solution.

Probably the most common problem for any new technology is not solving the right problem, or worse, not solving a real problem. In the worst case, this is the infomercial pattern, where they show a suburban mom, frazzled after failing to boil an egg for two hours, and then try to sell you the EggEasyDone(TM) to solve all your problems.

Of course, not every problem is widely recognized before a solution is found. Some inventions are ahead of their time, and then the inventor has to market the problem in order to market the solution. That doesn’t mean the solution is invalid.

But I’ll say that if there is an industry built up around a particular domain, and the industry recognizes various problems that they are confronting, and the problem your technology solves isn’t near the top of that list, you’ve probably got a problem in search of a solution.

As a concrete example, I’ll give various alternate CPU architectures, especially VLIW architectures. They do simplify some things and solve problems related to instruction decoding, but the real problems facing CPUs have always been around making smaller transistors, since the gains there have been exponential. Other problems like prefetching and branch prediction have also played a big part. Other improvements and optimizations just aren’t that important.

Guideline two: A new invention should have predecessors, inspirations, and influences.

When you try to pick apart any invention, no matter how revolutionary it may seem, it’s always possible to reduce it to some trivial improvement over whatever came before. To a large extent, I think doing so is usually unfair.

But new, hyped-up technologies often go too far in the other direction, claiming or implying that they are simple but brilliant solutions that no one has ever thought of before. This ties into the first guideline: if a problem is real, there should’ve been past attempts to solve it.

I hate to draw on the iPhone as an example, since it’s so overused, but it’s simultaneously an example of “out of nowhere” (the original Android was going to look like a BlackBerry) and a long history (Steve Jobs’ experience designing new computer interfaces, going back to NeXT, the Mac, and his visits to Xerox PARC).

Guideline three: As a technology develops, there should be known bottlenecks with metrics and benchmarks to measure progress.

This is the “existence proof” issue. Even if a technology is nowhere near ready for prime time, there should be some embryonic version of it with known deficiencies, however serious they may be. In order for the technology to improve, those deficiencies have to be gradually resolved.

Despite my skepticism of self-driving cars, they do have this going for them. The idea of “interruptions per mile,” which just counts the number of times the AI encounters a situation it can’t deal with, makes it fairly straightforward to measure progress and focus on the biggest wins first.

And finally, guideline four: Most technologies should have implications and ramifications aside from those they’re marketed on.

What I mean by this is that technologies often have many possible applications or ways of being used, and that those alternatives should show up in how a technology is initially deployed. If a technology is about to become viable, there should be smaller “toy” versions of it that start appearing first.

I’ve got a lot of examples of this one:

To be fair, sometimes larger problems draw more interest. But if the big problem really is feasible to tackle, then usually these smaller leading problems should be trivial.


2017-01-03

I’ve been keeping a list of major new technologies that might not succeed anyway:

The key thing that all of these technologies have in common is that they all have real existence proofs. They all really work. However, working is (usually) just the minimum bar for a technology to be successful.

(Flying cars didn’t make the list because nothing that would really qualify has ever been built. Aside from like, a Cessna.)

Self-driving cars are already on the streets. SpaceX is already making trips to the International Space Station. VR headsets are already on store shelves. (For the record, I’ve been planning this post since before the recent reports about disappointing Christmas sales. And I’m undeterred by the major announcements about self-driving cars.)

https://www.youtube.com/watch?v=RDWmh0iX7bU
The Airships - Lift Off (1890-1922)

https://www.youtube.com/watch?v=Es3EEEO24E4
HISTORY OF HEAVY DIRIGIBLES & AIRSHIPS HINDENBURG DISASTER 34280

(A couple of documentaries on airships I watched recently. If you’re going to watch one, I recommend the first. The second is subtly biased and spins every problem and setback that airships faced. The pro-airship lobby, man. Don’t get your facts from Big Airship.)

Anyway, dirigibles were making real passenger trips for decades, and played some role in World War I. And yet, I wouldn’t call them successful, even for the time.

These days a lot of technologists are talking about “moonshots.” But even the Apollo program wasn’t “successful” by this perhaps high but objective standard for a technology. We haven’t been back to the moon since 1972. The famous quote was backwards: “One giant leap for a man, one small step for mankind.” (It should be unsurprising, since progress always comes in small steps.)

For completeness, let me list some technologies that are unambiguously successful:

All of these technologies had doubling times much longer than the average startup lifecycle. In fact, if Charles Babbage had founded a startup, he would’ve gone bankrupt without much to show for it.

“People tend to overestimate exponential curves in the short run, and underestimate them in the long run.”

Not saying none of the technologies on the first list will succeed, just that most probably won’t. The odds aren’t good even once you’ve made it to the starting line. I’d be happy to eat my hat.

…I’ve avoided providing a concrete definition of success because like most things it is hard to pin down. The best definition I’ve got for a successful technology is “universal or else obsolete.”
