
2017-01-07

Following up on the previous post[#] about the difficulty of developing a “successful” technology, this is a post about how technologies do develop.

The basic answer is: gradually.

Guideline one: The problem should precede the solution.

Probably the most common problem for any new technology is not solving the right problem, or worse, not solving a real problem. In the worst case, this is the infomercial pattern: they show a suburban mom, frazzled after failing to boil an egg for two hours, and then try to sell you the EggEasyDone(TM) to solve all your problems.

Of course, not every problem is widely recognized before a solution is found. Some inventions are ahead of their time, and then the inventor has to market the problem in order to market the solution. That doesn’t mean the solution is invalid.

But I’ll say that if there is an industry built up around a particular domain, and the industry recognizes various problems it is confronting, and the problem your technology solves isn’t near the top of that list, you’ve probably got a solution in search of a problem.

As a concrete example, I’ll give various alternate CPU architectures, especially VLIW architectures. They do simplify some things and solve problems related to instruction decoding, but the real problems facing CPUs have always been about making smaller transistors, since the gains there have been exponential, along with things like prefetching and branch prediction. Other improvements and optimizations just aren’t that important.

Guideline two: A new invention should have predecessors, inspirations, and influences.

When you try to pick apart any invention, no matter how revolutionary it may seem, it’s always possible to reduce it to some trivial improvement over whatever came before. I think that’s usually unfair.

But new, hyped-up technologies often go too far in the other direction, claiming or implying that they are simple but brilliant solutions that no one has ever thought of before. This ties into the first guideline: if a problem is real, there should’ve been past attempts to solve it.

I hate to draw on the iPhone as an example, since it’s so overused, but it’s simultaneously an example of something appearing “out of nowhere” (the original Android was going to look like a BlackBerry) and of a long history (Steve Jobs’ experience designing new computer interfaces, going back to NeXT, the Mac, and his visit to Xerox PARC).

Guideline three: As a technology develops, there should be known bottlenecks with metrics and benchmarks to measure progress.

This is the “existence proof” issue. Even if a technology is nowhere near ready for prime time, there should be some embryonic version of it with known deficiencies, however serious they may be. In order for the technology to improve, those deficiencies have to be gradually resolved.

Despite my skepticism of self-driving cars, they do have this going for them. The idea of “interruptions per mile,” which just counts the number of times the AI encounters a situation it can’t deal with, makes it fairly straightforward to measure progress and focus on the biggest wins first.
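To make that concrete, here’s a minimal sketch in Python of how such a metric could be computed from drive logs. The DriveLog record and its field names are hypothetical, invented here for illustration; they aren’t any vendor’s actual reporting format.

    # Minimal sketch of the "interruptions per mile" idea: count how often the
    # AI hit a situation it couldn't handle, divided by total miles driven.
    # DriveLog and its fields are hypothetical, just to show how simple the
    # metric is to compute and track over time.
    from dataclasses import dataclass

    @dataclass
    class DriveLog:
        miles_driven: float   # autonomous miles covered in this log
        interruptions: int    # times the AI needed a human to take over

    def interruptions_per_mile(logs):
        total_miles = sum(log.miles_driven for log in logs)
        total_interruptions = sum(log.interruptions for log in logs)
        return total_interruptions / total_miles if total_miles else float("inf")

    logs = [DriveLog(miles_driven=120.0, interruptions=3),
            DriveLog(miles_driven=450.0, interruptions=2)]
    print(f"{interruptions_per_mile(logs):.4f} interruptions per mile")
    # Progress shows up as this number trending toward zero.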

And finally, guideline four: Most technologies should have implications and ramifications aside from those they’re marketed on.

What I mean by this is that technologies often have many possible applications or ways of being used, and that those alternatives should show up in how a technology is initially deployed. If a technology is about to become viable, there should be smaller “toy” versions of it that start appearing first.

I’ve got a lot of examples of this one:

To be fair, sometimes larger problems draw more interest. But if the big problem really is feasible to tackle, then these smaller, leading problems should usually be trivial.