Sharing an academic viewpoint from abroad on quantum computing: general-purpose quantum computing can never be realized

Published: 2023-12-22 15:19:00    Author: Angry_Panda

Original article URL:

https://spectrum.ieee.org/the-case-against-quantum-computing




Original article content:

Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.

We've been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence." We've been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape." We've even been told that “the encryption that protects the world's most sensitive data may soon be broken" by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.

Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.

It's become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world's top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.

In light of all this, it's natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future." Having spent decades conducting research in quantum and condensed-matter physics, I've developed my very pessimistic view. It's based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.

The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.

Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy," he opined. A few years later, University of Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analogue of the universal Turing machine.

The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.

The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it's fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer's clock.

The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their “on" and “off" states, according to a prescribed program.
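
To make the counting concrete, here is a minimal Python sketch (purely illustrative, not from the article) that enumerates the states of a small register of classical bits:

```python
# Each of N classical bits is either 0 or 1, so an N-bit register
# has exactly 2**N distinct states.
from itertools import product

for n in (1, 2, 3):
    states = list(product((0, 1), repeat=n))
    print(f"N = {n}: {len(states)} states")   # 2, 4, 8

# A computation steps the register from one of these states to another
# on each clock cycle, according to the program.
```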

In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron's internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or –1/2 (in units of the Planck constant). Whatever the chosen axis, you can denote the two basic quantum states of the electron's spin as ↑ and ↓.

Here's where things get weird. With the quantum bit, those two states aren't the only ones possible. That's because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.

That's because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent)—nothing else would make sense.
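
The normalization rule is easy to verify numerically. Here is a minimal sketch (illustrative only; the 0.6/0.4 split is the example from the text):

```python
import math

# A single-qubit state is a pair of complex amplitudes (alpha, beta)
# satisfying |alpha|**2 + |beta|**2 == 1.
alpha = math.sqrt(0.6)            # amplitude of the "up" basis state
beta = 1j * math.sqrt(0.4)        # amplitude of the "down" state (any phase is allowed)

p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2
assert math.isclose(p_up + p_down, 1.0)
print(p_up, p_down)               # ~0.6 and ~0.4: the only two possible outcomes
```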

In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.

Yes, quantum mechanics often defies intuition. But this concept shouldn't be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it's not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it's become almost de rigueur for journalists to describe it as such.
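
The 45-degree analogy can be put in numbers: the tilted unit vector has equal x and y components of 1/√2, just as an equal superposition has α = β = 1/√2 and measurement probabilities of 0.5 each. A quick check (illustrative only):

```python
import math

x = y = math.cos(math.radians(45))     # both components equal 1/sqrt(2) ≈ 0.707
print(x**2 + y**2)                     # ≈ 1.0, like |alpha|**2 + |beta|**2 for a qubit
print(x**2, y**2)                      # ≈ 0.5 each: the "equal superposition" probabilities
```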

In a system with two qubits, there are 2^2, or 4, basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
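
The exponential bookkeeping can be seen directly by building a multi-qubit state as a tensor (Kronecker) product of single-qubit states. A short sketch, assuming NumPy is available (illustrative, not from the article):

```python
import numpy as np

def random_qubit():
    """A normalized single-qubit state: two complex amplitudes."""
    v = np.random.randn(2) + 1j * np.random.randn(2)
    return v / np.linalg.norm(v)

def product_state(n):
    """Tensor n single-qubit states together: the result has 2**n amplitudes."""
    state = random_qubit()
    for _ in range(n - 1):
        state = np.kron(state, random_qubit())
    return state

for n in (2, 3, 10):
    psi = product_state(n)
    print(n, psi.size, np.isclose(np.sum(np.abs(psi) ** 2), 1.0))
    # -> 2 4 True, 3 8 True, 10 1024 True

# A general (entangled) N-qubit state is not a simple product like this,
# but it still takes all 2**N complex amplitudes to describe.
```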

While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.

How is information processed in such a machine? That's done by applying certain kinds of transformations—dubbed “quantum gates"—that change these parameters in a precise and controlled manner.
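
As an illustration of such a transformation (not taken from the article), the Hadamard gate is a 2 × 2 unitary matrix that rotates the amplitudes of a single qubit:

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)            # the Hadamard gate, a unitary matrix

up = np.array([1.0, 0.0])                       # start in the "up" basis state
psi = H @ up                                    # apply the gate to the amplitudes

print(psi)                                      # [0.707... 0.707...] -- an equal superposition
print(np.abs(psi) ** 2)                         # [0.5 0.5] -- the new measurement probabilities
assert np.allclose(H.conj().T @ H, np.eye(2))   # unitarity keeps the total probability at 1
```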

Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
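
The arithmetic behind that figure is a one-liner: log10(2^1,000) = 1,000 × log10 2 ≈ 301, so 2^1,000 is a number with just over 300 decimal digits, on the 10^300 scale quoted above:

```python
import math

print(1000 * math.log10(2))     # 301.03: the base-10 exponent of 2**1000
print(len(str(2 ** 1000)))      # 302 digits, i.e. 2**1000 ≈ 1.1e301
```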

To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.

At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let's continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
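
The simplest example of such hardware-level redundancy is a repetition code: store each bit three times and take a majority vote on readout. A toy sketch (illustrative only):

```python
from collections import Counter

def encode(bit):
    """Triple-redundancy: store three copies of one classical bit."""
    return [bit, bit, bit]

def decode(copies):
    """Majority vote: recovers the bit as long as at most one copy flipped."""
    return Counter(copies).most_common(1)[0][0]

word = encode(1)
word[0] ^= 1            # a single bit-flip error
print(decode(word))     # 1 -- the error is corrected by the redundancy
```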

In contrast, it's absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.
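
To put rough numbers on the threshold argument, one commonly quoted rule of thumb for surface-code-style error correction (an outside assumption for illustration, not a claim made in this article) is that the logical error rate scales roughly as 0.1 × (p/p_th)^((d+1)/2) for code distance d, with on the order of 2d² physical qubits per logical qubit:

```python
# Back-of-the-envelope sketch under assumed numbers (not from the article):
# p    -- physical error rate per gate
# p_th -- the threshold error rate
# d    -- code distance; physical qubits per logical qubit ~ 2 * d**2
p, p_th = 1e-3, 1e-2

for d in (5, 11, 25):
    p_logical = 0.1 * (p / p_th) ** ((d + 1) / 2)
    overhead = 2 * d ** 2
    print(f"d={d:2d}  logical error ≈ {p_logical:.0e}  physical/logical ≈ {overhead}")

# At d=25 that is ~1,250 physical qubits per logical qubit, so 1,000 logical
# qubits already imply over a million physical ones -- the figure given below.
```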

How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine—which was already more than astronomical with 1,000 qubits—now becomes even more ludicrous.

Even without considering these impossibly large numbers, it's sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it's not like this hasn't long been a key goal.

In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that “requires on the order of 50 physical qubits" and “exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm…." It's now the end of 2018, and that ability has still not been demonstrated.

The huge amount of scholarly literature that's been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.

The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
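
The two powers of 2 quoted here are easy to verify, and give a feel for the gap:

```python
print(2 ** 5)            # 32
print(2 ** 50)           # 1125899906842624, about 1.1e15

# For scale (an aside, not from the article): merely storing 2**50 complex
# amplitudes at 16 bytes each would take about 18 petabytes of memory.
print(2 ** 50 * 16)      # 18014398509481984 bytes
```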

By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by “local" noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.


About the Author
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.