(FactBasedInsights.QuantumAlgorithmsOutlook:PartOneInSeries)
Contrary to many reports, there are set to be immediate applications even for early devices. Some will be controversial. However, in other areas timelines look over-hyped. 2020 will be a key year to prove the naysayers wrong.
Will clients see early value?
Despite all the fuss, quantum processors actually run more slowly than conventional processors. It’s only the unique quantum algorithms they support that promise significant advantages.
Since the 1990s, quantum algorithms have been developed and adapted to address a wide range of problems. When combined with powerful enough hardware they typically promise significant (quadratic) speedups, and in some cases remarkable (exponential) ones.
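To give a feel for what "quadratic" means in practice, here is a purely illustrative back-of-the-envelope calculation (the problem size is a made-up example, not a benchmark): a classical unstructured search over N items needs on the order of N queries, while Grover's algorithm needs on the order of √N.

```python
# Illustrative arithmetic only: query counts for unstructured search
# over N items, ignoring constant factors and hardware clock speeds.
import math

N = 2**40  # hypothetical problem size, chosen for illustration

classical_queries = N               # classical search: ~N queries
grover_queries = math.isqrt(N)      # Grover's algorithm: ~sqrt(N) queries

print(f"classical: ~{classical_queries:,} queries")
print(f"Grover:    ~{grover_queries:,} queries")
print(f"speedup:   ~{classical_queries // grover_queries:,}x")
```

Even so, as the article notes, each quantum query runs on a slower, noisier processor, which is why the crossover point at which the speedup pays off matters so much for NISQ-era devices.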
Much work in 2019 has focused on separating what requires large-scale fault-tolerant quantum computing (FTQC) from where quantum advantage might be achieved with early NISQ (noisy intermediate-scale quantum) devices. The most promising ideas for the next 2-5 years are typically hybrid classical-quantum heuristic approaches that require trial and error to develop.
Intriguingly, early movers are showing that in some cases a ‘quantum-inspired’ approach is all it takes even on pre-supremacy and conventional hardware.
Cryptography – already something new?
Much comment following Google’s quantum supremacy announcement welcomed the achievement of an engineering milestone, but concluded that ‘of course there is no immediate application’.
Aaronson’s radically new proposal uses a variation of the Google supremacy algorithm to prove that the random numbers produced come from a verifiably quantum distribution. Crucially, this can be done remotely and openly, enabling new applications where we need to certify publicly (without having to trust any central party) that a number is random. Modest use cases would appear to include lotteries and audit. The potential application of this technique in proof-of-stake cryptocurrencies could be much more significant.
The Blockchain Trilemma – Bitcoin hasn’t taken over the world because, in the end, the proof-of-work consensus algorithm on which it depends does not scale well and leads to transaction settlement times and costs inconsistent with mass adoption (the environmental impact of its excessive energy use is also obscene). Proof-of-stake is a conceptually attractive alternative approach. However, practical implementations have always had to compromise across a ‘trilemma’ of security, scalability and decentralisation challenges. Facebook’s controversial Libra cryptocurrency proposes a ‘permissioned’ proof-of-stake scheme. However, ‘permissioned’ means that there is inevitably a central group of operators that, unlike Bitcoin miners, cannot easily escape regulatory oversight; hence the successful political pressure on early Libra backers to withdraw. The underlying purpose of these blockchain consensus protocols is simply to randomly determine who has the right to add the next block of transactions to the distributed ledger. Publicly provable random numbers are potentially the missing key resource needed to allow a truly streamlined permissionless scheme.
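The role such a public random beacon would play can be sketched in a few lines. The following is a minimal illustration, not any real protocol: each participant's chance of proposing the next block is proportional to their stake, and because the beacon value is public, anyone can re-run the selection and verify the result. The stake figures and beacon value are hypothetical.

```python
# Minimal sketch (an assumption for illustration, not a real protocol):
# stake-weighted block-proposer selection driven by a publicly
# verifiable random beacon. Because the beacon is public, every
# observer can recompute the selection and check it independently.
import hashlib

def select_proposer(stakes: dict, beacon: bytes) -> str:
    """Pick a proposer, weighted by stake, from a public random value."""
    total = sum(stakes.values())
    # Derive a deterministic draw in [0, total) from the beacon.
    draw = int.from_bytes(hashlib.sha256(beacon).digest(), "big") % total
    for name in sorted(stakes):  # fixed iteration order so all agree
        draw -= stakes[name]
        if draw < 0:
            return name
    raise AssertionError("unreachable: draw is always < total")

stakes = {"alice": 50, "bob": 30, "carol": 20}  # hypothetical stakes
proposer = select_proposer(stakes, beacon=b"certified-random-output")
print(proposer)
```

The security of a scheme like this hinges entirely on the beacon being unpredictable and unbiased, which is exactly the property a certified quantum randomness protocol would aim to guarantee without a trusted central party.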
It has been common for quantum gurus to muse that ‘the most important applications of quantum computers will be ones that nobody has thought of yet’. They will feel vindicated if the first significant application turns out to be one that literally no one had thought of until Google started messing around with random quantum circuits.