
Richard Feynman and Quantum Computing

Forty Years On, an Idea is Becoming Reality

Richard Feynman, probably the most colourful physicist of the twentieth century, as well as one of the most important, was born a hundred years ago, on May 11th 1918. In a long career, there were some significant highlights. Before he was thirty, he was responsible for running the computing unit at Los Alamos, where rooms full of women working with mechanical calculators carried out the complex calculations behind the first atom bombs. He also teased the military security there by opening locked filing cabinets and strong boxes. Perhaps more importantly, though, he was regarded as a touchstone – a person who would analyse your ideas and respond to them, often clarifying them in the process. One person who sought him out was the great Niels Bohr because, according to legend, Feynman was the only person who wasn’t in awe of him and so would argue with him.

At the other end of his career, he was part of the Rogers Commission investigating the Challenger disaster – the explosion of a space shuttle that killed all seven crew. During the debate as to whether O-ring seals had failed, causing a fire which in turn caused the explosion, he dropped an O-ring into ice water and demonstrated that at freezing point it lost all resilience, famously concluding, “I believe that has some significance for our problem.” He also spoke to technicians and engineers rather than to the top management of NASA and its suppliers, and his conclusions on the overall reliability of the shuttle appeared as a separate appendix – after he threatened to withdraw his name from the report unless they did appear there. In particular, he was scathing that the reliability figures given by management were several orders of magnitude more optimistic than those of the engineers: management put the chance of a failure at 1 in 100,000, while Feynman’s research found that the engineering teams rated it at more like 1 in 100. As he put it, “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

The Feynman Lectures on Physics were an attempt to provide a coherent introduction to physics for undergraduates. Although the lecture course was discontinued as being too challenging, the books remain a legendarily valuable source; they are available in a number of on-line versions, and a set of recordings of the original lectures is available as audio books.

But his crowning achievement, for which he received the Nobel Prize in physics, was the work he did in the late 1940s and early 1950s on quantum electrodynamics (QED) – the theory of how light and matter, photons and electrons, behave and interact. Many years later, in a lecture, he said he didn’t expect the audience to understand QED, because he didn’t either – the rules are screwy, and what he found particularly galling was that the theory relied on probability rather than certainty. What Feynman did do was describe QED, and his description corresponded to observed behaviour with an accuracy of 15 decimal places – equivalent, he suggested, to measuring the distance from his head to the moon and having the answer depend on whether you started from the top of his head or from his chin. (I haven’t checked this number.) What he was describing is the behaviour of these particles and the interaction, in particular, of light and electricity. The topic built on work by Dirac, Heisenberg and Schrödinger in the late 1920s, but every attempt to derive a way of predicting this behaviour failed until Feynman managed it. Working elsewhere on the problem and producing similar results were Sin-Itiro Tomonaga and Julian Schwinger, with whom he shared the Nobel Prize in 1965. A major part of Feynman’s contribution was the creation of what are now called Feynman diagrams, which provide a graphical model of how particles such as electrons and photons behave and interact.

As electronic computers developed in power, theoretical physicists began to make use of them in calculations and modelling, but they often struggled to find ways of representing things that could exist in multiple states. For example, an electron might be observable in one of two states. Simulating that one electron is simple: it is in either state A or state B. With two electrons, you can have both in state A, both in B, one in A and the other in B, or vice versa – a total of four possible states. With ten electrons, this rises to 1,024 possible states, and with 20 it is 1,048,576. But the systems a physicist wants to investigate may have many millions of electrons, and the number of possible states becomes unimaginably large. In the late 1970s Feynman began considering this problem, and, in a paper published in 1982, “Simulating Physics with Computers”, he postulated that to simulate quantum systems you would need to build quantum computers. He did this by first looking at a range of systems you might want to simulate and showing how they cannot be adequately represented by what he calls a classical computer. As far as I can understand his arguments (there are a lot of equations), it is not an issue that can be solved simply by scaling the classical computer, or by applying massive parallelism.
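To get a feel for the scale, here is a quick back-of-the-envelope sketch in Python (assuming 16 bytes to store each complex amplitude in double precision) of how much memory a classical computer would need just to hold the general state of n two-state particles:

    # Why classical simulation of quantum systems blows up: a general state of
    # n two-state particles needs 2**n complex amplitudes just to write down.
    BYTES_PER_AMPLITUDE = 16  # assumption: double-precision complex numbers

    for n in (1, 2, 10, 20, 40, 60):
        states = 2 ** n
        memory = states * BYTES_PER_AMPLITUDE
        print(f"{n:3d} particles: {states:,} states, roughly {memory:.2e} bytes")

At 60 particles the amplitudes alone would occupy around 18 exabytes; at the millions of electrons Feynman had in mind, the number stops meaning anything physical at all.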

And we are now seeing work on actually producing quantum computers, partly to cope with QED-style problems and partly because we are approaching the physical limits on how small a transistor can be made before its behaviour is driven by quantum effects rather than being deterministic. But while a theoretical quantum computer is relatively easy to describe, it has proved much harder to implement. The problems are so great that, nearly 40 years after they were first postulated, quantum computers are still very rare beasts. And just as QED is difficult to understand, so quantum computing can also make your brain hurt. What follows is an attempt to provide a high-level view, and it may be accused of over-simplification. Grab your favourite search engine and start digging if you find it interesting.

We all know that in digital computing we work with bits (binary digits), which are always either 0 or 1. The equivalent in quantum computing is the quantum bit, or qubit. This exists as 0 or 1 – or, in a state of quantum superposition, as effectively both at the same time. (Some descriptions say that the qubit exists in two universes.) A quantum computer of n qubits can be in a superposition of 2^n different states at the same time, while a normal computer with the same number of bits can be in only one of its 2^n states at any one time.
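To make superposition a little more concrete, here is a small classical simulation (a sketch in plain Python with NumPy, not a real quantum program) that represents a three-qubit register as a vector of 2^3 = 8 complex amplitudes and applies a Hadamard gate to every qubit, leaving the register in an equal superposition of all eight basis states:

    import numpy as np

    def n_qubit_register(n):
        """State vector of n qubits, all initialised to the |00...0> state."""
        state = np.zeros(2 ** n, dtype=complex)
        state[0] = 1.0
        return state

    # The single-qubit Hadamard gate, which puts a qubit into an equal
    # superposition of 0 and 1.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def apply_to_all(gate, n):
        """Build the matrix that applies the same single-qubit gate to every one of n qubits."""
        full = np.array([[1.0]])
        for _ in range(n):
            full = np.kron(full, gate)
        return full

    n = 3
    state = apply_to_all(H, n) @ n_qubit_register(n)
    print(state)               # eight equal amplitudes of 1/sqrt(8)
    print(np.abs(state) ** 2)  # each basis state would be measured with probability 1/8

The catch is that this classical simulation has to store all 2^n amplitudes explicitly – exactly the blow-up described above – whereas the quantum hardware carries them implicitly in the physics.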

As well as Feynman’s vision of modelling systems at the quantum level, the huge number of possible states makes a quantum computer a strong candidate for attacking many large data problems, including, for example, analysing chemical interactions, speeding up searches of very large databases, solving otherwise intractable optimisation problems, and cracking encoded messages very quickly. This last is why governments around the world are investing in quantum computing.

Theoretically, there are multiple ways of making a qubit, but all of them are complex. The currently favoured technology is to use superconductors. This requires a temperature close to absolute zero, so you need a very large and expensive piece of kit surrounding a minute area of computing. There are a number of other difficulties inherent in the technology. A particular one is quantum decoherence, the tendency for a system to lose information through interaction with its environment. Associated with this is the difficulty of reading the state of the system (i.e. getting information out) without changing that state. You may remember those discussions in physics where the very act of observing something causes it to change.
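The read-out problem can be illustrated with the same kind of toy simulation. In the sketch below (ordinary Python and NumPy again, with a small measure() helper defined for the purpose), reading the register returns a single basis state, chosen with probability equal to the squared magnitude of its amplitude, and afterwards the rest of the superposition is simply gone:

    import numpy as np

    rng = np.random.default_rng()

    def measure(state):
        """Sample one basis state and collapse the register onto it."""
        probabilities = np.abs(state) ** 2
        probabilities /= probabilities.sum()       # guard against rounding error
        outcome = rng.choice(len(state), p=probabilities)
        collapsed = np.zeros_like(state)
        collapsed[outcome] = 1.0                   # only the observed state survives
        return outcome, collapsed

    # One qubit in an equal superposition of 0 and 1.
    state = np.array([1, 1], dtype=complex) / np.sqrt(2)
    outcome, state_after = measure(state)
    print(outcome)      # 0 or 1, each with probability 0.5
    print(state_after)  # the superposition has collapsed to a single basis state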

There are a small number of machines in existence – very few, given how long the work has continued – and recently an arms race has begun. Last year IBM announced a 50-qubit machine – but it is not clear when this will enter service – and then in March this year Google announced a 72-qubit machine.

While the physical implementation has been slow, there has been a lot of work on algorithms that will run on the machines once they are built. For example, in 1994, Peter Shor, a mathematician now at MIT, published what is known as Shor’s algorithm for factoring large numbers exponentially faster than the best known classical methods – a key tool in breaking public key encryption. Some of these algorithms are now under test, as IBM has opened free access to its 20-qubit machine to researchers, with a wide range of support and tools.
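Shor’s algorithm itself is well beyond the scope of this article, but its overall shape is easy to sketch. The quantum computer’s only job is to find the order r of a randomly chosen a modulo N, i.e. the period of a^x mod N; everything else is ordinary number theory. The toy Python sketch below brute-forces that order-finding step – the part a quantum machine would do exponentially faster – just to show how the factors then fall out, and works for toy-sized numbers only:

    import math
    import random

    def find_order(a, n):
        """Smallest r > 0 with a**r % n == 1 (brute-force stand-in for the quantum step)."""
        r, value = 1, a % n
        while value != 1:
            value = (value * a) % n
            r += 1
        return r

    def shor_factor(n, attempts=20):
        """Find a non-trivial factor of n via the reduction to order finding."""
        if n % 2 == 0:
            return 2
        for _ in range(attempts):
            a = random.randrange(2, n)
            g = math.gcd(a, n)
            if g > 1:
                return g                 # lucky guess: a already shares a factor with n
            r = find_order(a, n)
            if r % 2 == 1:
                continue                 # need an even order
            x = pow(a, r // 2, n)
            if x == n - 1:
                continue                 # a**(r/2) == -1 (mod n) tells us nothing
            return math.gcd(x - 1, n)    # otherwise this is a non-trivial factor
        return None

    print(shor_factor(15))  # 3 or 5
    print(shor_factor(21))  # 3 or 7

On a real quantum machine the find_order step is replaced by a quantum circuit built around the quantum Fourier transform, which is what turns an otherwise intractable calculation into a practical one.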

Everyone in the quantum computing world is keen to make it clear that these machines are not a replacement for the classical computer but are specialists, likely always to be an add-on to existing machines and data farms.

In the early 1970s, digital computers passed a tipping point when integrated circuits came into widespread use, both in the processor and in memory. This caused an explosion in the number of computers, with the first personal computers appearing at the end of the decade. Quantum computing has yet to reach that point. Until it does, quantum computers will be confined to the research lab, but, if a similar tipping point is reached, there will be a powerful new tool that will transform data processing.
