
Microsoft’s Take on Quantum Computing

Featuring Self-Annihilating Particles

This year’s ICCAD conference featured a keynote from Microsoft’s Dr. Krysta Svore on quantum computing. We’ve looked at the topic before (with me making limited sense of some of the more out-there notions), but Microsoft’s approach appears to be different. So it feels like it’s worth a walk-through.

I must confess, however, that in the time since that prior article, neither the notions of quantum computing nor my brain have evolved to make some of the odder concepts more comprehensible. Trying to dig into those troublesome ideas involves a recursive dive into topics within topics within topics. Not easy.

So, as before, we will proceed with humility (or humiliation; your choice). I will state various facts below as facts; keep in mind that they reflect my conclusions both from Microsoft’s presentation and from my own subsequent rummagings through the internet.

Three-Level Computing

Let’s start with a high-level architecture. Because there’s a problem: quantum computing happens at cryogenic temperatures – like 10-20 mK. So it’s not like we can each tote around a quantum-computing laptop; the refrigerators are large. Like, bigger than the one your milk is souring in as we speak.

In light of this, Microsoft envisions a layered system:

  • At the top will be the applications, running on regular room-temp machines.
  • Below this lies a cryogenic machine – but not one quite at the quantum level. Consider it a transitional system that bridges communications between the top and bottom layers.
  • At the bottom – likely residing in the cloud – will be the quantum computers themselves.

The trick will be for software to be partitioned among the three. The obvious user-interface stuff will happen at room temp. The remaining algorithms will run in one of the two lower levels. It will clearly take some work to figure out which pieces go on which layer; that will involve some novel partitioning compiler.
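As a toy illustration of that partitioning idea (entirely invented here – nothing about Microsoft's actual partitioning compiler is public in this detail, and all the task and layer names below are made up), the job amounts to routing each piece of a program to one of the three layers:

```python
# Toy sketch of three-layer partitioning: route each task to the layer
# meant to run it. Illustrative only; layer and task names are invented.

LAYERS = ("room_temp", "cryo_control", "quantum")

def partition(tasks):
    """Group (name, layer) task pairs by target layer."""
    plan = {layer: [] for layer in LAYERS}
    for name, layer in tasks:
        if layer not in plan:
            raise ValueError(f"unknown layer: {layer}")
        plan[layer].append(name)
    return plan

tasks = [
    ("user_interface", "room_temp"),
    ("classical_pre_processing", "room_temp"),
    ("pulse_scheduling", "cryo_control"),
    ("qubit_operations", "quantum"),
]
plan = partition(tasks)
print(plan["quantum"])  # -> ['qubit_operations']
```

The real compiler's hard problem, of course, is deciding those labels automatically rather than having them handed in.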

Q-Bits

Now that we’ve set up a computing environment, let’s rocket from high-level architecture to the lowest of the low levels: the qubit. In the prior piece, we looked at so-called “transmons” as qubits. Microsoft uses a very different approach.

And this takes us into some eerie territory. Surprise! Let’s dip first into particle physics, where we encounter something called Majorana fermions. These have some interesting characteristics, one of which is that they are self-annihilating. That said, it’s not clear that this particular characteristic plays into the quantum computing thing – it’s just to boggle your mind.

More relevant is the fact that some Majorana fermions can be localized as so-called zero-mode particles, and they can take on so-called bound states. It turns out that these fermions can implement what are more generically referred to as “anyons.” (Used in a sentence: “Bueller? Anyon?”)

Anyons are quasi-particles, and they can be Abelian or non-Abelian. As far as I can tell, the Abelianness refers, more or less, to whether or not the operations are commutative. Of particular interest are non-Abelian anyons, which are subject to braiding transformations. If you’ve been reading us for a while, this might stimulate some recognition: we’ve dealt with braiding before. More on that in a minute.
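One way to get a feel for that non-commutativity is to track only where each strand ends up after a crossing. The sketch below ignores the over/under data that makes a real braid a braid (so it is not braid-group arithmetic, just permutations), but it shows the essential point: the order of adjacent swaps matters.

```python
# Non-Abelian means order matters. Track strand positions through
# crossings; adjacent swaps composed in different orders give
# different results.

def swap(strands, i):
    """Cross strand i with strand i+1 (0-indexed)."""
    s = list(strands)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

start = ["a", "b", "c"]
first_then_second = swap(swap(start, 0), 1)  # sigma_1 then sigma_2
second_then_first = swap(swap(start, 1), 0)  # sigma_2 then sigma_1
print(first_then_second)   # -> ['b', 'c', 'a']
print(second_then_first)   # -> ['c', 'a', 'b']  -- not the same!
```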

The physical implementation of the qubit is based on electron spin (although most of the materials I read on Majorana fermions deal with magnetic states). Problem is, electron spin is noisy, and so it’s unreliable.

In order to make a more robust qubit, you can create a chain of electrons with the same spin. These are much less noisy; you’d have to flip all the electrons to change the state – a very unlikely scenario. They’re tolerant of some level of noise in the individual electrons making up the chain. To be clear, however, as with anything quantum, there is no such thing as certainty. “Unlikely” is about as good as you’re going to get.
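A back-of-the-envelope way to see why the chain helps, assuming (unrealistically – this is a toy model, not the actual physics) independent flips with probability p per electron: the encoded state only changes if every electron in the chain flips, which happens with probability p to the nth power.

```python
# Toy independent-noise model: the chain's state flips only if
# all n electrons flip, so the logical flip probability is p**n.

def logical_flip_probability(p, n):
    """p: assumed per-electron flip probability; n: chain length."""
    return p ** n

p = 0.01  # invented per-electron flip probability
for n in (1, 5, 10):
    print(n, logical_flip_probability(p, n))
# At n = 10 the chance is around 1e-20: "unlikely" indeed,
# though -- as the article says -- never zero.
```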

This whole thing about creating chains and subjecting them to braiding operations takes us into the realm of topology, or the study of shapes. This electron chain constitutes what’s referred to as a topological superconductor, and the whole branch of quantum computing we’re talking about is referred to as topological quantum computing (TQC). And its claim to fame is hoped-for higher reliability.

One benefit of this reliability is that you need less error correction. Much less. Dr. Svore made a distinction between logical qubits – the way we’d think about a problem – and physical qubits – which include error correction. She said that conventional approaches require about 10,000 physical qubits per single logical qubit – contrasted with 10:1 for topological computing.
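To put those overheads in concrete terms, here’s the arithmetic for a hypothetical algorithm needing 100 logical qubits (the overhead ratios are from the talk; the 100-qubit algorithm size is made up for illustration):

```python
# Error-correction overheads quoted in the talk, applied to a
# hypothetical 100-logical-qubit algorithm.

CONVENTIONAL_OVERHEAD = 10_000  # physical qubits per logical qubit
TOPOLOGICAL_OVERHEAD = 10       # for topological quantum computing

logical = 100
print(logical * CONVENTIONAL_OVERHEAD)  # -> 1000000 physical qubits
print(logical * TOPOLOGICAL_OVERHEAD)   # -> 1000 physical qubits
```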

Correspondingly, 30 qubits can represent 16 GB of state; 40 represent 16 TB; 50 give you 16 PB; and 250 can cover the entire visible universe. Given, however, that simulating even one step of such a computation conventionally would take the lifetime of the universe, simulation becomes impractical. So it’s hard to test out algorithms and approaches without an actual system to try them on (which, of course, we don’t have yet).
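Those state sizes fall out of simple arithmetic: an n-qubit state vector has 2^n complex amplitudes, and at 16 bytes per double-precision complex number, the classical memory needed to store one is 2^n × 16 bytes:

```python
# Classical memory needed to store an n-qubit state vector:
# 2**n complex amplitudes at 16 bytes (double-precision complex) each.

def state_bytes(n_qubits):
    return (2 ** n_qubits) * 16

print(state_bytes(30) // 2**30, "GiB")  # -> 16 GiB
print(state_bytes(40) // 2**40, "TiB")  # -> 16 TiB
print(state_bytes(50) // 2**50, "PiB")  # -> 16 PiB
```

Each extra qubit doubles the memory, which is why classical simulation hits a wall somewhere around 50 qubits.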

I’ll admit that it’s a little hard for me to wrap my head around what these figures mean; how could 250 measly qubits represent the universe?? Again, just leaving it here.

Upbraided

Let’s come back to that braiding thing. Consistent with our theme of reliability and the topological superconductor, braiding operations are relatively robust: they can survive some noise intact.

Of course, as soon as I heard the braid thing, my mind went back to the security system that SecureRF touts. They say that it succeeds RSA and elliptic curve cryptography (ECC) by turning to braid theory for a new encryption approach that’s not susceptible to future quantum cracking in the way that the other two are expected to be.

My immediate thought was, “Wait, can this be a coincidence???” So I asked Dr. Svore. She wasn’t familiar with the cryptographic use of braids, but she speculated that the robustness of braid operations was a link between the two applications. For encryption, one obvious benefit is the trap-door nature of the operations: given a final braid configuration, it’s a very hard problem to figure out how you got there.

That is, of course, not of value for quantum computing. Likewise, it’s not clear that the noise immunity is a benefit to encryption. So the link still seems somewhat tenuous to me; I’ll simply leave it here for folks with more knowledge than I have to expand and expound upon.

And This is Useful for…

An obvious remaining question, then, is what such a computer might help us to do. Dr. Svore listed a number of very specific problems that are very difficult to solve without this type of quantum computer. (Yes, apparently, different types of quantum computer are good for different types of problem.)

  • Modeling ferredoxin. This class of proteins involves iron and sulfur, and one of them is involved in photosynthesis. The problem has polynomial order of difficulty, but it’s still very slow. By making some algorithmic improvements, they changed the polynomial degree from 12 to 3-5, and the computation time went from – wait for it – 24 billion years to 1 hour. Seriously. That’s what she said. Dang…
  • Modeling nitrogen fixing.
  • Carbon capture.
  • New materials.
  • Richer machine-learning models.
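For the ferredoxin item above, the degree reduction alone accounts for an enormous chunk of the gain: dropping a runtime from c·n^12 to c·n^4 buys a speedup of n^8. (Here I’ve picked degree 4 from the quoted 3-5 range, and the problem size n below is invented purely for illustration.)

```python
# Speedup from reducing the polynomial degree of an algorithm's
# runtime: (n**old) / (n**new) == n**(old - new).
# Degrees 12 and 4 are from the talk; n is invented.

def speedup(n, old_degree=12, new_degree=4):
    return n ** (old_degree - new_degree)

print(speedup(10))  # -> 100000000, i.e. 10**8 at n = 10
```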

Then there’s the question of where this all stands in real life. Quantum computing is one of those things that’s always seemed way off in the future, so it’s hard to get out of that habit of thinking. That said, Microsoft claims lots of progress with the physics, materials, devices, and even computing. And they’ve got a development flow and runtime layer, along with a language called Q# (Q-sharp) for writing quantum programs.

As far as I can tell, what constitutes a quantum algorithm – and the concepts involved – remains a specialized domain. I can’t imagine that any random John or Jane will come out with a newly minted computer science degree and immediately start tackling quantum programs. Who knows… maybe it will always be so. After all, paraphrasing Feynman, if you think you understand quantum, then you clearly don’t.

More info:
Microsoft Quantum Computing
