
Microsoft’s Take on Quantum Computing

Featuring Self-Annihilating Particles

This year’s ICCAD conference featured a keynote from Microsoft’s Dr. Krysta Svore on quantum computing. We’ve looked at the topic before (with me making limited sense of some of the more out-there notions), but Microsoft’s approach appears to be different. So it feels like it’s worth a walk-through.

I must confess, however, that in the time since that prior article, neither the notions of quantum computing nor my brain have evolved to make some of the odder concepts more comprehensible. Trying to dig into those troublesome ideas involves a recursive dive into topics within topics within topics. Not easy.

So, as before, we will proceed with humility (or humiliation; your choice). I will state various facts below as facts; keep in mind that they reflect my conclusions both from Microsoft’s presentation and from my own subsequent rummagings through the internet.

Three-Level Computing

Let’s start with a high-level architecture. Because there’s a problem: quantum computing happens at cryogenic temperatures – like 10-20 mK. So it’s not like we can each tote around a quantum-computing laptop; the refrigerators are large. Like, bigger than the one your milk is souring in as we speak.

In light of this, Microsoft envisions a layered system:

  • At the top will be the applications, running on regular room-temp machines.
  • Below this lies a cryogenic machine – but not one quite at the quantum level. Consider it a transitional system that bridges communications between the top and bottom layers.
  • At the bottom – likely residing in the cloud – will be the quantum computers themselves.

The trick will be for software to be partitioned among the three. The obvious user-interface stuff will happen at room temp. The remaining algorithms will happen in one of the two lower levels. It will clearly take some work to figure out what goes to which layer; that will involve some novel partitioning compiler.
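Just to make that concrete, here is a purely hypothetical sketch of the kind of split such a compiler would have to produce. The layer names, the task tags, and the little partitioning helper are all invented for illustration; they aren't anything Microsoft described.

```python
# Purely hypothetical sketch of partitioning work across the three layers.
# Layer names and task tags are invented; a real partitioning compiler would
# analyze the program itself rather than rely on labels.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str  # "ui", "control", or "quantum"

LAYER_FOR_KIND = {
    "ui": "room-temperature host",
    "control": "cryogenic control layer",
    "quantum": "quantum processor",
}

def partition(tasks):
    """Group tasks by the layer they should run on."""
    plan = {layer: [] for layer in LAYER_FOR_KIND.values()}
    for t in tasks:
        plan[LAYER_FOR_KIND[t.kind]].append(t.name)
    return plan

jobs = [
    Task("render results for the user", "ui"),
    Task("sequence control pulses and readout", "control"),
    Task("execute the quantum kernel", "quantum"),
]

for layer, names in partition(jobs).items():
    print(f"{layer}: {names}")
```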

Q-Bits

Now that we’ve set up a computing environment, let’s rocket from high-level architecture to the lowest of the low levels: the qubit. In the prior piece, we looked at so-called “transmons” as qubits. Microsoft uses a very different approach.

And this takes us into some eerie territory. Surprise! Let’s dip first into particle physics, where we encounter something called Majorana fermions. These have some interesting characteristics, one of which is that they are self-annihilating. That said, it’s not clear that this particular characteristic plays into the quantum computing thing – it’s just to boggle your mind.

More relevant is the fact that some Majorana fermions can be localized as so-called zero-mode particles, and they can take on so-called bound states. It turns out that these fermions can implement what are more generically referred to as “anyons.” (Used in a sentence: “Bueller? Anyon?”)

Anyons are quasi-particles, and they can be Abelian or non-Abelian. As far as I can tell, the Abelianness refers, more or less, to whether or not the operations are commutative. Of particular interest are non-Abelian anyons, which are subject to braiding transformations. If you’ve been reading us for a while, this might stimulate some recognition: we’ve dealt with braiding before. More on that in a minute.
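As a toy illustration of what non-commutative means here (ordinary 2×2 matrices, not actual anyon braiding, and nothing specific to Microsoft's scheme): two operations whose combined effect depends on the order in which you apply them.

```python
# Toy illustration of "non-Abelian": applying two operations in different
# orders gives different results. These are plain matrices, not real anyons.
import numpy as np

A = np.array([[0, 1],
              [1, 0]], dtype=complex)   # bit flip (Pauli-X)
B = np.array([[1, 0],
              [0, 1j]], dtype=complex)  # phase gate

print(np.allclose(A @ B, B @ A))  # False: order matters, hence non-commutative
```

With non-Abelian anyons, the analogous statement is that braiding two of them in different orders produces genuinely different states, which is exactly what makes them usable for computation.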

The physical implementation of the qubit is based on electron spin (although most of the materials I read on Majorana fermions deal with magnetic states). Problem is, electron spin is noisy, and so it’s unreliable.

In order to make a more robust qubit, you can create a chain of electrons with the same spin. These are much less noisy; you’d have to flip all the electrons to change the state – a very unlikely scenario. They’re tolerant of some level of noise in the individual electrons making up the chain. To be clear, however, as with anything quantum, there is no such thing as certainty. “Unlikely” is about as good as you’re going to get.
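As a crude back-of-envelope only (the real protection argument is topological rather than a matter of independent coin flips, and the per-electron probability below is made up): if each electron flipped independently with probability p, the chance of the whole chain flipping collapses exponentially with its length.

```python
# Illustrative arithmetic, not a physical model: assume each electron flips
# independently with probability p; the whole chain flips with probability p**n.
p = 0.01  # assumed per-electron flip probability
for n in (1, 5, 10, 20):
    print(f"chain length {n:2d}: P(all flip) = {p**n:.1e}")
```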

This whole thing about creating chains and subjecting them to braiding operations takes us into the realm of topology, or the study of shapes. This electron chain constitutes what’s referred to as a topological superconductor, and the whole branch of quantum computing we’re talking about is referred to as topological quantum computing (TQC). And its claim to fame is hoped-for higher reliability.

One benefit of this reliability is that you need less error correction. Much less. Dr. Svore made a distinction between logical qubits – the way we’d think about a problem – and physical qubits – which include error correction. She said that conventional approaches require about 10,000 physical qubits per single logical qubit – contrasted with 10:1 for topological computing.
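Put in raw numbers (the 100-logical-qubit figure below is just an assumed example size, not anything from the talk), the difference in overhead is stark.

```python
# Arithmetic on the quoted overheads; 100 logical qubits is an assumed example.
logical_qubits = 100
conventional = logical_qubits * 10_000   # ~10,000 physical per logical (quoted)
topological = logical_qubits * 10        # ~10 physical per logical (quoted)
print(f"conventional: {conventional:,} physical qubits")
print(f"topological:  {topological:,} physical qubits")
```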

Correspondingly, 30 qubits can represent 16 GB of state; 40 represent 16 TB of state; 50 give you 16 PB of state; and 250 can cover the entire visible universe. Given, however, that one step of computation done conventionally would take the lifetime of the universe, simulation becomes impractical. So it’s hard to test out algorithms and approaches without an actual system to try them on (which, of course, we don’t have yet).
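Those byte counts appear to come from what it takes to store the full state vector on a classical machine: n qubits means 2^n complex amplitudes, and at 16 bytes per amplitude (double-precision complex, the usual simulation assumption) you land on exactly those figures.

```python
# Where the 16 GB / 16 TB / 16 PB figures appear to come from: an n-qubit state
# vector has 2**n complex amplitudes, at 16 bytes per double-precision complex.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n, unit, scale in ((30, "GB", 2**30), (40, "TB", 2**40), (50, "PB", 2**50)):
    print(f"{n} qubits -> {state_vector_bytes(n) / scale:.0f} {unit}")
```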

I’ll admit that it’s a little hard for me to wrap my head around what these figures mean; how could 250 measly qubits represent the universe?? Again, just leaving it here.

Upbraided

Let’s come back to that braiding thing. Consistent with our theme of reliability and the topological superconductor, braiding operations are relatively robust against noise. Operations can survive some noise intact.

Of course, as soon as I heard the braid thing, my mind went back to the security system that SecureRF touts. They position it as a successor to RSA and elliptic curve cryptography (ECC), turning to braid theory for a new encryption approach that’s not susceptible to future quantum cracking in the way that the other two are expected to be.

My immediate thought was, “Wait, can this be a coincidence???” So I asked Dr. Svore. She wasn’t familiar with the cryptographic use of braids, but she speculated that the robustness of braid operations was a link between the two applications. For encryption, one obvious benefit is the trap-door nature of the operations: given a final braid configuration, it’s a very hard problem to figure out how you got there.
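For what it's worth, here is a cartoon of that trap-door property. It is emphatically not SecureRF's actual scheme; two non-commuting matrices simply stand in for braid generators. Composing a secret sequence is cheap, but recovering the sequence from the final product means searching a space that doubles with every extra step.

```python
# Cartoon of the trap-door idea (not SecureRF's scheme): non-commuting matrices
# stand in for braid generators. Forward composition is cheap; inverting it is
# a brute-force search over 2**len(secret) candidate sequences.
from itertools import product
import numpy as np

G = {
    "a": np.array([[1, 1], [0, 1]]),
    "b": np.array([[1, 0], [1, 1]]),
}

def compose(word):
    m = np.eye(2, dtype=int)
    for g in word:
        m = m @ G[g]
    return m

secret = "abbaab"          # the easy direction: just multiply the sequence out
target = compose(secret)

# The hard direction: try every candidate sequence of the same length.
for word in product("ab", repeat=len(secret)):
    if np.array_equal(compose(word), target):
        print("recovered:", "".join(word))
        break
```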

That is, of course, not of value for quantum computing. Likewise, it’s not clear that the noise immunity is a benefit to encryption. So the link still seems somewhat tenuous to me; I’ll simply leave it here for folks with more knowledge than I have to expand and expound upon.

And This is Useful for…

An obvious remaining question, then, is what such a computer might help us to do. Dr. Svore listed a number of very specific problems that are very difficult to solve without this type of quantum computer. (Yes, apparently, different types of quantum computer are good for different types of problem.)

  • Modeling ferredoxin. This class of proteins involves iron and sulfur, and one of them is involved in photosynthesis. The problem has polynomial order of difficulty, but it’s still very slow. By making some algorithmic improvements, they changed the polynomial degree from 12 to 3-5, and the computation time went from – wait for it – 24 billion years to 1 hour. Seriously. That’s what she said. Dang… (A rough sanity check of those numbers follows this list.)
  • Modeling nitrogen fixing.
  • Carbon capture.
  • New materials.
  • Richer machine-learning models.
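A rough sanity check on that first bullet (back-of-envelope only; the problem size below is inferred from the quoted numbers, not something reported in the talk): if the runtime exponent really dropped from 12 to 5, the quoted jump from 24 billion years to one hour corresponds to a problem size of roughly a hundred.

```python
# Back-of-envelope check of the quoted ferredoxin figures; N is inferred here,
# not reported. Dropping the exponent from 12 to 5 means a speedup of N**7.
hours_before = 24e9 * 365.25 * 24     # 24 billion years, in hours
hours_after = 1.0
speedup = hours_before / hours_after  # ~2e14

implied_N = speedup ** (1 / 7)
print(f"speedup ~ {speedup:.1e}; implied problem size N ~ {implied_N:.0f}")
```

If the exponent instead landed at 3, the same arithmetic (a speedup of N to the 9th) implies a problem size closer to 40; either way, the quoted numbers are at least internally plausible.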
Then there’s the question of where this all stands in real life. Quantum computing is one of those things that always seems to be way out in the future, and it’s hard to break that habit of thinking. That said, Microsoft makes claims of lots of progress with the physics, materials, devices, and even computing. And they’ve got a development flow and runtime layer, along with a language called Q# (Q-sharp) for writing quantum programs.

As far as I can tell, what constitutes a quantum algorithm – and the concepts involved – remains a specialized domain. I can’t imagine that any random John or Jane will come out with a newly minted computer science degree and immediately start tackling quantum programs. Who knows… maybe it will always be so. After all, paraphrasing Feynman, if you think you understand quantum, then you clearly don’t.

More info:
Microsoft Quantum Computing
