Quantum Supremacy Using a Programmable Superconducting Processor

On Google Cloud servers, we estimate that performing the same task for m = 20 with 0.1% fidelity using the SFA algorithm would cost 50 trillion core-hours and consume 1 petawatt hour of energy. To put this in perspective, it took 600 seconds to sample the circuit on the quantum processor 3 million times, where sampling time is limited by control hardware communications; in fact, the net quantum processor time is only about 30 seconds. The bitstring samples from this largest circuit are archived online.
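
To make this comparison concrete, the back-of-envelope arithmetic below restates the figures quoted above in wall-clock terms (a minimal sketch in Python; the one-million-core cluster size is an assumed value chosen for scale, not a number from this work).

# Back-of-envelope comparison of the classical and quantum sampling
# costs quoted above. ASSUMED_CORES is a hypothetical cluster size.

CLASSICAL_CORE_HOURS = 50e12   # estimated SFA cost for m = 20 at 0.1% fidelity
QUANTUM_WALL_SECONDS = 600     # wall-clock time for 3 million samples
QUANTUM_NET_SECONDS = 30       # net quantum-processor time

ASSUMED_CORES = 1_000_000      # assumption for illustration only

classical_wall_hours = CLASSICAL_CORE_HOURS / ASSUMED_CORES
classical_wall_years = classical_wall_hours / (24 * 365)

print(f"classical: ~{classical_wall_years:,.0f} years on {ASSUMED_CORES:,} cores")
print(f"quantum:   {QUANTUM_WALL_SECONDS} s wall clock "
      f"({QUANTUM_NET_SECONDS} s net processor time)")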
One may wonder to what extent algorithmic innovation can enhance classical simulations. Our assumption, based on insights from complexity theory, is that the cost of this algorithmic task is exponential in n as well as m. Indeed, simulation methods have improved steadily over the past few years [42-50]. We expect that lower simulation costs than reported here will eventually be achieved, but we also expect they will be consistently outpaced by hardware improvements on larger quantum processors.
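
To see why constant-factor algorithmic improvements do not change this picture, consider the toy sketch below (the base-2 scaling in n is an illustrative assumption, not a measured cost model): a 1000-fold speedup is recovered by roughly ten additional qubits.

# Toy cost model: simulation cost ~ 2**n, reduced by an algorithmic speedup.
def toy_cost(n, speedup=1.0):
    return 2.0 ** n / speedup

baseline = toy_cost(53)
improved = toy_cost(53, speedup=1000.0)   # hypothetical 1000x better algorithm
recovered = toy_cost(63, speedup=1000.0)  # ...plus 10 more qubits

print(improved / baseline)   # ~0.001: the speedup helps at fixed n
print(recovered / baseline)  # ~1.02: ten extra qubits cancel it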
VERIFYING THE DIGITAL ERROR MODEL
A key assumption underlying the theory of quantum error correction is that quantum state errors may be considered digitized and localized [38, 51]. Under such a digital model, all errors in the evolving quantum state may be characterized by a set of localized Pauli errors (bit-flips and/or phase-flips) interspersed into the circuit. Since continuous amplitudes are fundamental to quantum mechanics, it needs to be tested whether errors in a quantum system could be treated as discrete and probabilistic. Indeed, our experimental observations support the validity of this model for our processor. Our system fidelity is well predicted by a simple model in which the individually characterized fidelities of each gate are multiplied together (Fig. 4).
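
As a rough illustration of this multiplicative model, the sketch below predicts a system fidelity as the product of per-operation success probabilities; the error rates and gate counts are made-up round numbers, not the values measured in this experiment.

# Digital error model sketch: assume every operation fails independently
# with its characterized error probability, so the predicted system
# fidelity is the product of per-operation success probabilities.

def predicted_fidelity(error_rates):
    fidelity = 1.0
    for e in error_rates:
        fidelity *= (1.0 - e)
    return fidelity

# Illustrative circuit: 400 single-qubit gates, 200 two-qubit gates,
# and readout on 53 qubits (all rates are placeholders).
ops = [0.0015] * 400 + [0.006] * 200 + [0.03] * 53
print(f"predicted fidelity: {predicted_fidelity(ops):.4f}")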
To be successfully described by a digitized error model, a system should be low in correlated errors. We achieve this in our experiment by choosing circuits that randomize and decorrelate errors, by optimizing control to minimize systematic errors and leakage, and by designing gates that operate much faster than correlated noise sources, such as 1/f flux noise [37]. Demonstrating a predictive uncorrelated error model up to a Hilbert space of size 2^53 shows that we can build a system where quantum resources, such as entanglement, are not prohibitively fragile.
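
The digitized picture can also be phrased as a Monte Carlo sketch: each gate either acts perfectly or injects a localized Pauli error with its characterized probability, and the fraction of error-free runs then matches the product-of-fidelities prediction above. The rates and circuit size are again illustrative placeholders.

import random

def run_once(error_probs):
    """True if no Pauli error (bit- and/or phase-flip) is injected."""
    return all(random.random() >= p for p in error_probs)

error_probs = [0.0015] * 400 + [0.006] * 200  # same toy circuit as above
trials = 20_000
clean_fraction = sum(run_once(error_probs) for _ in range(trials)) / trials

product_prediction = 1.0
for p in error_probs:
    product_prediction *= (1.0 - p)

print(f"Monte-Carlo clean fraction:  {clean_fraction:.3f}")
print(f"product-formula prediction:  {product_prediction:.3f}")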
WHAT DOES THE FUTURE HOLD?
Quantum processors based on superconducting qubits can now perform computations in a Hilbert space of dimension 2^53 ≈ 9 × 10^15, beyond the reach of the fastest classical supercomputers available today. To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor. Quantum processors have thus reached the regime of quantum supremacy. We expect their computational power will continue to grow at a double exponential rate: the classical cost of simulating a quantum circuit increases exponentially with computational volume, and hardware improvements will likely follow a quantum-processor equivalent of Moore's law [52, 53], doubling this computational volume every few years. To sustain the double exponential growth rate and to eventually offer the computational volume needed to run well-known quantum algorithms, such as the Shor or Grover algorithms [19, 54], the engineering of quantum error correction will have to become a focus of attention.
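
The double exponential claim is the composition of the two growth laws above, as the sketch below shows; the doubling cadence and the starting computational volume are assumed values for illustration only.

# If computational volume V doubles every DOUBLING_YEARS (a Moore's-law-like
# assumption) and classical simulation cost grows like 2**V, then the
# classical cost grows doubly exponentially in calendar time.

DOUBLING_YEARS = 2   # assumed cadence
V0 = 20              # assumed starting volume (roughly qubits x depth)

for years in range(0, 9, 2):
    volume = V0 * 2 ** (years / DOUBLING_YEARS)
    print(f"year {years}: volume ~ {volume:5.0f}, classical cost ~ 2**{volume:.0f}")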
The "Extended Church-Turing Thesis" formulated by Bernstein and Vazirani [55] asserts that any "reasonable" model of computation can be efficiently simulated by a Turing machine. Our experiment suggests that a model of computation may now be available that violates this assertion. We have performed random quantum circuit sampling in polynomial time with a physically realized quantum processor (with sufficiently low error rates), yet no efficient method is known to exist for classical computing machinery. As a result of these developments, quantum computing is transitioning from a research topic to a technology that unlocks new computational capabilities. We are only one creative algorithm away from valuable near-term applications.
Acknowledgments We are grateful to Eric Schmidt, Sergey Brin, Jeff Dean, and Jay Yagnik for their executive sponsorship of the Google AI Quantum team, and for their continued engagement and support. We thank Peter Norvig for reviewing a draft of the manuscript, and Sergey Knysh for useful discussions. We thank Kevin Kissel, Joey Raso, Davinci Yonge-Mallo, Orion Martin, and Niranjan Sridhar for their help with simulations. We thank Gina Bortoli and Lily Laws for keeping our team organized. This research used resources from the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. A portion of this work was performed in the UCSB Nanofabrication Facility, an open access laboratory.
Author contributions The Google AI Quantum team conceived of the experiment. The applications and algorithms team provided the theoretical foundation and the specifics of the algorithm. The hardware team carried out the experiment and collected the data. The data analysis was done jointly with outside collaborators. All authors wrote and revised the manuscript and supplement.
Competing Interests The authors declare that they have no competing financial interests.
Correspondence and requests for materials should be addressed to John M. Martinis (jmartinis@google.com).
