When I first started looking into quantum computing I had a fairly pessimistic view about its near-term commercial prospects, but I’ve come to think we’re only a few years away from seeing serious returns on the technology, and I want to spend a couple of minutes explaining why.
It’s become common to compare the state of QC today with where classical computers were in the vacuum-tube era. This is true(ish) in terms of the underlying technology, but false in terms of industry potential.
Viewing quantum computing this way commits two distinct errors, which I’ve come to call:
- The chasmleap error
- Hybrid blindness
The essence of the chasmleap error is a failure to appreciate how much can be accomplished with the buggy, unreliable hardware we’re developing on the way to fully error-corrected quantum computers.
In effect, it’s tacitly asking how far we are from full-bore fully-independent quantum computers and tying the whole estimate of the current technology’s value to *that* distant milestone.
It’s as though critics are assuming we have to cross a chasm all in one leap and then declaring the task impossible, completely missing that we can build a bridge step by step to get to where we want to go.
Hence the name ‘the chasmleap error’.
The essence of the second error, hybrid blindness, is a failure to appreciate how much can be accomplished with quantum computers which are integrated into *existing* classical workflows.
In effect, it’s tacitly asking how long it’ll take for fully-general QCs that can play games, surf the internet, and calculate options prices and tying the whole estimate of the current technology’s value to *that* distant milestone.
It’s as though critics are assuming that quantum computers will need to be almost or entirely independent before they can do any useful work, completely missing that they could dramatically speed up individual parts of an existing process.
Hence the name ‘hybrid blindness’.
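To make the hybrid picture concrete, here’s a minimal sketch of what a quantum-accelerated step inside an otherwise classical pipeline might look like. Everything here is hypothetical: `quantum_estimate_overlap` stands in for a routine (something like a swap-test overlap estimate) that would be dispatched to quantum hardware in a real deployment, but is stubbed classically so the sketch runs as-is.

```python
# Hypothetical hybrid pipeline: one step is "quantum", the rest is classical.

def quantum_estimate_overlap(vec_a, vec_b):
    # Classical placeholder for a quantum subroutine. In a real hybrid
    # workflow, only this call would be routed to a quantum backend.
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    return dot * dot  # |<a|b>|^2, as a swap test would estimate


def classical_preprocess(raw):
    # Normalize the raw data classically, as the existing pipeline already does.
    norm = sum(x * x for x in raw) ** 0.5
    return [x / norm for x in raw]


def pipeline(raw_a, raw_b):
    # Classical pre-processing on both inputs...
    a = classical_preprocess(raw_a)
    b = classical_preprocess(raw_b)
    # ...then a single delegated "quantum" step, then back to classical code.
    return quantum_estimate_overlap(a, b)


score = pipeline([3.0, 0.0], [1.0, 0.0])
```

The point of the structure is that nothing upstream or downstream of the delegated call needs to know, or care, that one step ran on different hardware.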
What both errors miss is that quantum computing is emerging and maturing in a world that’s already saturated with classical computing pipelines.
We use computers to trade stocks, forecast the weather, direct traffic, diagnose illnesses, and do approximately a billion other things.
The pipelines which handle these tasks are shot through with computational bottlenecks.
There are people sitting at major financial institutions waiting on computations which take an entire MONTH to run.
Not all of these bottlenecks are amenable to quantum computing, but many of them are.
And this is where I see the near-future commercial value coming from: early quantum computing will enable us to dramatically speed up problem-solving in domains like risk modeling, drug discovery, materials science, and NLP just by identifying and resolving certain pain points.
Even error-prone, 20-qubit systems can create billions of dollars in value-add if they can accelerate just one or two parts of a complicated workflow in an industry like pharmaceuticals.
What’s more: if a research team manages to build a custom circuit to tackle such a problem, they can just expose it as an API endpoint over the cloud, selling subscriptions to interested parties.
Those parties can access the endpoint when they need the output of a quantum algorithm, otherwise handling everything else with vanilla classical computing resources.
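The distribution pattern above is ordinary web plumbing. As a sketch, here is a tiny HTTP service that exposes a (stubbed) quantum sampling routine as a JSON endpoint, using only the Python standard library. `sample_circuit` is a hypothetical stand-in: a real service would submit the circuit to quantum hardware and return the measured counts.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def sample_circuit(shots: int) -> dict:
    # Hypothetical placeholder: pretend every shot measured all-zeros.
    # A real implementation would run the circuit on quantum hardware.
    return {"00000": shots}

class QuantumEndpoint(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the sampling result as JSON, e.g. for GET /sample
        body = json.dumps(sample_circuit(shots=100)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), QuantumEndpoint)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A subscriber's classical code just makes an HTTP request.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/sample") as resp:
    counts = json.loads(resp.read())
server.shutdown()
```

In production you’d add authentication, queueing, and billing, but the shape is the same: the quantum resource sits behind a URL, and subscribers treat it like any other web service.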
In other words, there are at least two important facts about the world which weren’t true during the first vacuum tube era.
- First, there are lots of problems in classical computing pipelines which can be solved with quantum computers.
- Second, there are lots of distribution strategies made possible by classical computing.
Yes, there has been tremendous hype in the space in recent years. Hype aside, I no longer think we’re that far away.
Given what I’ve been reading about companies building hardware, software, and applications, the first major quantum computing breakthrough is likely around the corner.