Back in 1994, when quantum computers existed only as so much chalk on a blackboard, mathematician Peter Shor invented what may soon prove to be their killer app.
Shor trained his efforts on a calculation called “factoring,” which ordinarily nobody but a mathematician would care about, except it just happens to be an Achilles heel of the internet. If someone were to invent a computer that could perform this operation quickly, messages that are currently hidden from hackers, terrorists, military adversaries, governments and competitors would be as easy to read as a Stephen King novel.
Shor, of course, didn’t have such a computer. He was writing an algorithm, or program, for a hypothetical machine that might one day exploit the weird properties of atoms and subatomic particles, as described by the theory of quantum mechanics, to perform calculations that conventional computers could only solve in years—maybe hundreds of years, or millions, or more time than the universe is expected to last. Too long, at any rate, to be useful in cracking open an email. Shor’s algorithm was a theoretical exercise. “The question of whether using quantum mechanics in a computer allows one to obtain more computational power,” he wrote in his 1994 paper, “has not yet been satisfactorily answered.”
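At the heart of Shor's algorithm is a reduction from factoring to period finding: find the smallest r with a^r ≡ 1 (mod N), and a greatest common divisor then hands you a factor of N. Only the period-finding step needs the quantum machine. The toy sketch below (function names are my own; it brute-forces the period classically, which defeats the purpose but shows the arithmetic) walks through the reduction for N = 15:

```python
from math import gcd

def find_period(a, N):
    """Brute-force the order r of a modulo N: the smallest r with a**r % N == 1.
    This is the one step Shor's algorithm speeds up on a quantum computer,
    via the quantum Fourier transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_reduction(a, N):
    """Turn a period into factors: if r is even and a**(r//2) is not -1 mod N,
    then gcd(a**(r//2) - 1, N) is a nontrivial factor of N."""
    r = find_period(a, N)
    if r % 2:
        return None            # odd period: retry with a different a
    half = pow(a, r // 2, N)
    if half == N - 1:
        return None            # trivial case: retry with a different a
    return gcd(half - 1, N), gcd(half + 1, N)

print(shor_reduction(7, 15))   # 7 has period 4 mod 15 -> factors (3, 5)
```

The classical loop in `find_period` is the whole problem: for the thousand-digit numbers used in real encryption, no known classical method finds r in any reasonable time, while a quantum computer could.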
The answers are now coming in.
Last year a team from Google achieved what it called “quantum supremacy” when its quantum computer performed a calculation faster than a conventional computer could. “Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output,” wrote Google’s John Martinis and Sergio Boixo in a blog post. And earlier this month, a team under the direction of Pan Jianwei at the University of Science and Technology of China (USTC) reported in the journal Science that its quantum computer performed a calculation 100 trillion times faster than a conventional computer could—surpassing Google’s achievement by a factor of 10 billion, according to Xinhua, China’s state news agency.
These two announcements were mere demonstrations, using prototype machines in the lab to perform calculations that are not useful in any practical sense. Nobody is ready to put Shor’s algorithm into practice. But tens of billions of dollars are being invested in a broad-scale effort to make it possible. Dozens of engineering teams, from big companies like Google, IBM and Amazon to universities and startups, are racing to build a full-scale working quantum computer. China is reportedly spending $10 billion on the effort, building a center devoted to quantum computing and artificial intelligence; the U.S. government has committed $1 billion; and corporate and military budgets likely hold many millions more—for instance, Google and IBM are each thought to have spent in excess of $100 million.
These groups are in pursuit not merely of faster computers but a fundamentally different approach to computing, which would open up new vistas in technology and society. Quantum computers could be as transformational as the microchip, which ushered in the internet age and all the attendant effects on the economy and politics. For instance, the vast computational possibilities of quantum technology would turbo-charge artificial intelligence in ways that are difficult to fathom. It’s no accident that China’s new technology center will combine the two fields.
China’s ambition in quantum technology has caused something of a Sputnik moment in the U.S., nearly as ominous as the Soviet satellite that in 1957 kicked off the race to the moon. It wasn’t long ago that Chinese engineers were perceived as copycats. That is no longer the case. The long-term worry is that the U.S. loses its technological edge. While China embraces ambitious technology programs, the U.S. has in recent years retreated into a reactive mode, with diminishing budgets for science. Back in the 1960s, the federal government accounted for about two-thirds of R&D spending in the U.S., the rest coming mainly from the private sector. But its role has diminished, says Paul Scharre, director of technology and national security at the Center for a New American Security (CNAS) and author of Army of None: Autonomous Weapons and the Future of War. “Basically the federal government has taken its foot off of the gas pedal in terms of innovation in the U.S.,” he says. “While we’re doing so, other nations like China caught up.”
What tends to focus the mind are the security implications in the near term. When quantum computers go live, what will happen to all our secrets? Will we wake up one day and find that China has been reading our mail? The NSA and other intelligence agencies are already preparing for a world where all their secrets are vulnerable. Shor’s algorithm, once a fanciful conjecture, is beginning to look like a threat. The question is, is the threat imminent?
The meaning of “Quantum Supremacy”
John Martinis got involved in quantum computers back in the 1980s, “before the word ‘qubit’ was even invented,” he says.
A qubit is the fundamental unit of information in a quantum computer—the analogue to a “bit” in conventional computers, but with some important differences. A bit can be either a zero or a one; a qubit can be both numbers at once, and everything in between—a property known as superposition. A bit exists as a tiny pocket of electrical charge in a silicon chip, which classical computers shuffle around like checkers to perform mathematical operations; a qubit is a single atom or subatomic particle, which stores information in a peculiar statistical fashion according to laws of quantum mechanics that are wholly beyond our experience in the macroscopic world. A bit is a discrete unit of information; a qubit is part of a collective, “entangled” with other qubits by a phenomenon that Albert Einstein described as “spooky action at a distance.”
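For a system of just two qubits, superposition and entanglement can be simulated on an ordinary computer. The minimal NumPy sketch below (my own illustration, not anyone's production code) builds the textbook Bell state: a Hadamard gate puts one qubit into superposition, a CNOT gate entangles it with a second, and a measurement then yields 00 or 11 with equal probability, never 01 or 10.

```python
import numpy as np

# Single-qubit basis state and gates, written as vectors and matrices
zero = np.array([1, 0], dtype=complex)           # the state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],                   # flips the 2nd qubit
                 [0, 1, 0, 0],                   # only when the 1st is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Put the first qubit in superposition, leave the second in |0>
state = np.kron(H @ zero, zero)    # (|00> + |10>) / sqrt(2)

# Entangle them: the result is the Bell state (|00> + |11>) / sqrt(2)
state = CNOT @ state

# Squared amplitudes give measurement probabilities for 00, 01, 10, 11
probs = np.abs(state) ** 2
print(probs.round(3))              # [0.5 0.  0.  0.5]
```

The catch, and the reason quantum computers are worth building, is that this classical bookkeeping doubles in size with every added qubit: fifty qubits already demand a vector of over a quadrillion amplitudes.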
In his early work at the University of California at Santa Barbara, Martinis began by asking basic questions about how to get information out of things as small as atoms and photons, or particles of light. But dealing with single atoms and particles pushes engineering to extreme levels of precision, as Martinis found early on. How do you protect these tiny particles but also allow them to interact with others in such a way that the computer can perform a useful calculation? In other words, how do you harness the qualities of superposition and entanglement to perform a task, such as factoring a large number for the purpose of reading an encrypted message? “You have to isolate qubits to keep them coherent,” he says, “but if you isolate them super well, they can’t talk to other qubits to do computation.”
Martinis spent years trying to strike this balance, experimenting with different materials and setups, then turned to the task of getting qubits to work together in a computer. Eventually he found his way to Google, where he began work on what became Sycamore, the quantum computer used in last year’s demonstration. Sycamore’s 54 qubits are kept in a chamber at Google’s lab in Goleta, California, cooled to within a degree of absolute zero, the lowest temperature possible, nearly 460 degrees Fahrenheit below zero. The machine is “programmed” by beaming faint microwaves into the chamber, which stimulate the qubits.
A big issue that Martinis and every other quantum engineer struggles with is how to keep the qubits intact long enough to perform a calculation. Superposition—the ability of qubits to be both a zero and a one at the same time—is an essential part of the machine’s operation. The slightest disturbance, however, can cause a qubit to collapse into a one or a zero, bringing down the whole delicate entangled constellation of qubits with it. Even cooled to extreme temperatures, the qubits have an annoying tendency to decohere so quickly that many calculations end in errors. Making a quantum computer is difficult enough; making one that is not riddled with errors has so far proved beyond the reach of engineers.
“You would like qubits to maintain their superposition of a zero and a one and maintain entangled states even while you’re doing operations on them,” says Scott Aaronson, a computer science professor at the University of Texas at Austin who collaborates with Google and other quantum engineers. “The problem is they’re inherently very fragile. As soon as information leaks into the environment about whether a qubit is a zero or a one, the whole thing collapses. This ‘noisiness’ is the fundamental problem in building a quantum computer. This is what makes it hard.”
Coming up with a way to test Google’s and USTC’s machines was a difficult problem in itself. To do so required overcoming a conundrum: If you ask your quantum computer to solve a problem that no conventional computer can accomplish in a reasonable amount of time, how do you check the results? The simplest way would be to use Shor’s algorithm on an encrypted message; if you can read the message, you know your computer works. But Shor’s algorithm was too difficult for the baby quantum computers of the day to handle.
Back in 2011, Aaronson and his graduate students came up with the idea of “boson sampling,” which involves predicting how particles like photons will behave when they bounce around obstacles. It’s a tough problem for classical computers because it involves lots of calculations about quantum mechanics; but since quantum computers live in that realm, the calculation should be a doddle. Aaronson not only came up with the experiment but also, crucially, with a way of checking the results statistically without having to solve the problems with a classical computer—which, of course, by the definition of “quantum supremacy,” should be impossible.
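The classical difficulty that boson sampling leans on traces to the permanent of a matrix, a determinant-like quantity with no known efficient algorithm: the probabilities of photon arrangements are governed by permanents. The sketch below (function names are my own illustration) computes a small permanent two ways, by brute force over all permutations and by Ryser's formula, the fastest known exact method, which is still exponential.

```python
from itertools import permutations, combinations
from math import prod

def perm_brute(A):
    """Permanent by definition: like a determinant but with no minus signs.
    The n! terms in this sum are the hardness boson sampling relies on."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def perm_ryser(A):
    """Ryser's formula: inclusion-exclusion over column subsets.
    Cost ~2^n, the best known for exact permanents."""
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            term = prod(sum(A[i][j] for j in cols) for i in range(n))
            total += (-1) ** size * term
    return (-1) ** n * total

A = [[1, 2], [3, 4]]
print(perm_brute(A), perm_ryser(A))   # 10 10  (i.e., 1*4 + 2*3)
```

Unlike the determinant, which shortcuts to a fast answer via row reduction, no polynomial-time trick is known for the permanent; a photonic machine effectively samples from these quantities by physics alone.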
Both Google and USTC wound up adapting Aaronson’s approach to their specific machines. Pan Jianwei and his colleagues at USTC, in fact, built Jiuzhang literally as a machine for boson sampling—using photons, a type of boson, as qubits. They sent photons, in the form of laser beams, pinging and ponging through a course of mirrors and other obstacles. The setup wasn’t meant to be a general-purpose computer that could be programmed to perform different tasks but to do one thing only: demonstrate that a machine made of photons could perform a calculation of how photons behave when they move through an obstacle course.
The USTC experiment accomplished more than this tautological description captures, of course. It demonstrated that photons could be controlled and used to produce a computational result. Still, engineers have critiqued Jiuzhang on the grounds that it was built for such a narrow purpose. They’ve also tried to show that a classical computer could achieve the same result in a reasonable amount of time, a task known in the vernacular as “spoofing.” “The situation is evolving rapidly, from day to day, as people try to knock down the new result by showing how to spoof the outputs classically,” said Aaronson in an email. “We don’t know yet how well they’re going to succeed. Debates about whether, and in what sense, the USTC group achieved quantum supremacy are likely to continue for quite some time.”
Google’s Sycamore test also made big headlines, and it also caught some flak in technical circles. IBM engineers, who are working on their own quantum computer, insisted that it’s possible in theory to spoof Sycamore with a supercomputer, provided it were equipped with tremendous amounts of memory.
“They said, ‘it only took us two seconds, but it would take a crippled supercomputer 10,000 years’,” said Robert Sutor, a mathematician and vice president at IBM Research. “Why are you crippling it? Why would you remove part of its functionality and then say how wonderful you are?”
Many engineers view the quantum supremacy demonstrations more as milestones than as significant developments in their own right. Both Sycamore and Jiuzhang were impressive accomplishments; both are a long way from doing anything remotely useful, claims of “supremacy” to the contrary. “I don’t think that quantum supremacy is completely a done deal,” says Aaronson. “I would like to see quantum supremacy for some problem where we can actually easily recognize the answer.”
To get a quantum computer capable of doing interesting things, engineers will need to figure out how to correct the errors and scale the machines up to thousands of qubits, and perhaps millions. The first practical applications are likely to be in simulating things that involve quantum mechanics, like chemistry, which could have an impact in drug development.
“Shor’s algorithm, breaking cryptographic codes, is one of those things that will happen in the evolution of quantum computers,” says Aaronson. “But by the time you can do that, you can pretty much do any quantum computation. It would surprise me a lot if it was in the next decade.”
So why worry?
After the Sycamore demonstration in 2019, Martinis and Google had a parting of ways. “It was time for me to leave,” he says. In the fall, Martinis joined Michelle Simmons, an old acquaintance who had formed Silicon Quantum Computing, a startup in Sydney, Australia. Simmons’ company is making qubits out of phosphorus and silicon, which tend to be more stable than other materials, she says, and that means they may not require so much error-correcting.
“Working at Google was great because we had the resources to solve tough problems,” says Martinis. “On the other hand, what’s great right now is there’s an ecosystem where you have the companies, the startups and university groups where people can solve problems. I think that’s better in the end.”
Martinis, though, is under no illusion that a thousand quantum flowers will bloom. The field is crowded now, but that won’t last forever. “All these people have a lot of optimism, but when they go to do the systems engineering, they’re going to find that their ideas might not work so well.” Out of the dozen or so projects underway now, he says, “it’s a question whether one or two could work. Building a quantum computer is really hard, harder than you think.”
The resources required to pull off a quantum computer would seem to favor the Googles and the IBMs of the world—and China. Google’s Hartmut Neven, head of its quantum computing effort, told a gathering of the Center for Strategic and International Studies earlier this year that building an error-correcting quantum computer would cost more than $3 billion.
Google is currently committed to seeing the project through and has the cash to do so, but a change in corporate priorities could put such a long-term effort at risk. “What would really secure American leadership,” said Neven, “is if the government would use its enormous purchasing power to reward early risk takers.”
Regardless of whatever shortcomings Jiuzhang may have, it clearly demonstrates that China is a formidable innovator. Neven issued a grim warning about the danger of the U.S. being beaten in the race to develop a quantum computer. “We are indeed most worried [about] an as of yet unknown competitor [from] China [who will] beat us to the race to an error-correcting machine, because China has the ability to steer enormous resources in a direction that’s deemed strategically important.”
While China’s ambitions have grown, the technology aims of the U.S. seem diminished. “There is a mentality of complacency,” says Elsa Kania, a China expert at CNAS. “There’s a sense and an ideological commitment to the notion that the market can do it all, that there’s no role for government, and a backlash against investments in science and education. Even if China was doing nothing in quantum science, we should be investing a lot in the basic research, trying to fund some of these new programs, and trying to build up the pipeline of talent.”
How much the U.S. is spending on quantum computing research is difficult to say. Although the government’s share of total R&D spending is lower than it used to be, “when you include U.S. private companies, we still outpace pretty much everyone in the world,” says Todd Harrison, director of defense budget analysis at the Center for Strategic and International Studies.
Corporate research doesn’t include much basic R&D, which is what typically yields the biggest long-term payoffs. The military, which in the past has sowed world-changing technologies like the internet, could wind up playing a crucial role in quantum computing. Funding for unclassified military R&D has in general remained steady, according to Harrison. The Pentagon is probably also funding classified quantum computer research. Documents from the Edward Snowden cache revealed that the National Security Agency was spending about $80 million on a “cryptographically useful quantum computer,” the Washington Post reported, all of it classified.
Even though the quantum computers themselves seem far off, it’s not too soon to start worrying about keeping secrets from prying eyes. The prospect of a code-breaking machine emerging sometime in the next decade is already setting off alarm bells in some quarters.
The National Security Agency and other intelligence organizations are thought to be scooping up reams of encrypted information in anticipation of a day in the not-too-distant future when they can decode them with a quantum computer. And they are also beginning to worry about the day when their adversaries can decipher their collected secrets, too. In the U.S., plans are afoot to introduce new encryption methods that cannot be broken even by a quantum computer. The NSA announced in 2015 that it intended to switch eventually to an alternative, quantum-resistant scheme, as yet undetermined. “It is now clear that the current Internet security measures and the cryptography behind them will not withstand the new computational capabilities that quantum computers will bring,” an NSA spokesperson told Quanta’s Natalie Wolchover.
A year later, the National Institute of Standards and Technology announced a technical competition to establish standards for quantum-resistant encryption. This fall, NIST narrowed a field of 69 contenders to 15. The most popular approach, it turns out, is “lattice-based encryption,” which would require a computer to find a specific route through grids of billions of numbers—an entirely different mathematical basis from that of current public-key encryption schemes, which rely on the difficulty of factoring large numbers.
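To give a flavor of why such schemes resist Shor's algorithm, here is a deliberately toy sketch in the spirit of learning-with-errors (LWE) encryption, one common mathematical basis for lattice cryptography. Every parameter and name below is an illustrative assumption of mine, not a real scheme; the NIST candidates are far more sophisticated. The key idea survives, though: the public key is a pile of noisy linear equations, and recovering the secret from them is a lattice problem with no known quantum shortcut.

```python
import random

q = 7919    # modulus (a prime, chosen only for illustration)
n = 8       # dimension of the secret vector
m = 20      # number of noisy public samples

random.seed(1)
s = [random.randrange(q) for _ in range(n)]   # the secret key

# Public key: pairs (a_i, b_i) with b_i = <a_i, s> + e_i mod q,
# where e_i is a tiny random error that hides the exact equations
A, B = [], []
for _ in range(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    A.append(a)
    B.append((sum(x * y for x, y in zip(a, s)) + e) % q)

def encrypt(bit):
    """Sum a random subset of the public samples; shift by q/2 to encode a 1."""
    subset = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][k] for i in subset) % q for k in range(n)]
    v = (sum(B[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """v - <u, s> is a small error, plus q/2 if and only if the bit was 1."""
    d = (v - sum(x * y for x, y in zip(u, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0
```

Decryption works because the accumulated error (at most 20 here) stays far below q/4, while anyone without the secret s faces the noisy equations head-on: the noise is exactly what breaks the clean algebraic structure that Shor's algorithm exploits in factoring.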
Persuading government agencies and other organizations to migrate from the current public-key encryption schemes to new ones won’t be easy. If the threat is not clear and present, complacency can set in. “People are still using web browsers with encryption that was broken in the nineties,” says Aaronson. “It’s sad.”
Correction: 12/14/2020 5:16 pm ET: This article was modified to indicate that Silicon Quantum Computing’s quantum computer does in fact require cryogenic equipment.