interview by Ray Kurzweil | A conversation with Michele Reilly of Turing

On advances in quantum computing: architecture, functionality, and scalability.
March 11, 2021



image | above

Artist’s illustration of a control panel for a quantum computer component built by the company Turing.


— letter —

Dear readers,

This is an interview I conducted with Michele Reilly — she’s the forward-thinking CEO of Turing, and a specialist in building the quantum computers of tomorrow.

She’s working to improve quantum machine architecture, functionality, and scalability. Her company Turing is building portable quantum hard drives and long-distance communications equipment.

Ray Kurzweil


ABOUT

company: Turing | website

Founded in 2016 by Michele Reilly, Turing addresses the quantum computer hardware and software market. The renowned engineer and physicist Seth Lloyd PhD is now joining Turing.


INTRODUCTION

by Ray Kurzweil

Most of the time, when we talk about something growing exponentially — whether it’s a transistor count, a virus, or an investment — we mean that it can be described by a graph where the Y value doubles after each constant increment of an X value representing time. In fact, there is really only one thing that I know of which grows exponentially with an X value other than time.

It’s a thing I have mentioned before, but because its exponential growth isn’t on a schedule, the law of accelerating returns doesn’t have much to say about it. In light of that, although it attracts the attention of serious thinkers — from Richard Feynman PhD and David Deutsch PhD, to Bill Gates and Sergey Brin — I haven’t written very much about it until now.

For a limited range of applications, quantum computers become exponentially more powerful with a linear increase in their number of qubits. That implies the potential for a massive and discontinuous change in available computing power.
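That exponential relationship can be made concrete with a toy calculation (my illustration, not something from the interview): describing the state of n qubits classically requires 2^n complex amplitudes, which is the sense in which a linear increase in qubit count yields an exponential increase in the state space available to certain algorithms.

```python
# Toy sketch: the classical description of an n-qubit state needs 2**n
# complex amplitudes, so the representable state space doubles with each
# added qubit -- exponential growth in a linear qubit count.
def state_space_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

for n in (10, 20, 30, 40):
    print(f"{n} qubits -> {state_space_size(n):,} amplitudes")
```

At 40 qubits the description already exceeds a trillion amplitudes, which is why even modest qubit counts outstrip classical simulation for some problems.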

Moreover, the range of applications to which quantum computers can be applied has been growing rapidly since 2009 — when Seth Lloyd PhD of MIT and his students showed that quantum computing can exponentially accelerate the solution of linear systems of equations, one of the most common sorts of calculations done in science, including in artificial intelligence. Now Lloyd is joining Turing — founded by Michele Reilly, she’s interviewed below.


QUESTION • by Ray Kurzweil

What sets Turing apart that has attracted a renowned physicist like Seth Lloyd PhD?

ANSWER • by Michele Reilly

Turing is building the first quantum hard drives that are portable and actually programmable. In the 1940s, the first p-n junctions eventually became transistors, which later became computers. Back then, p-n junctions were less well developed than vacuum tubes and electro-mechanical switches.

We started out by looking at promising — but less well-developed — techniques. When we did this, we felt we were in an analogous historical position: vacuum tubes and switches were thought to be advanced, but when it came to manufacturing they were extraordinarily impractical.

I can see the same thing happening today in quantum computing. Today “quantum clouds” don’t do anything that your laptop can’t. And they never will be able to: because when you factor in overhead and heat loads at scale, the numbers just don’t work for either the stand-alone photonic or superconducting machines.

From an insight that enabled us to store entanglement while creating more of it, we then envisioned a manufacturing roadmap for scale and distribution, focused on low-qubit cost, portability, replicability — and most importantly scalability. For us, scalability isn’t an academic term, but actually how to concretize a system of distribution for qubits.

In order to achieve long-term human impact goals like nano-technology and radical life extension, we must start with a concrete and plausible vision that can get us to millions of qubits — and address bottlenecks in the computing stack today.

We have a roadmap which takes into account the overhead costs of quantum I/O — and various mission-critical higher parts of the computational stack. We chose materials and an approach to a chip-set with the purpose of achieving a “Quantum Moore’s Law”. The tech we’re developing will be improved upon — rather than superseded.


QUESTION • by Ray Kurzweil

You have a unique background which led you to developing quantum tech — can you tell us about how you came to work in this field?

ANSWER • by Michele Reilly

I started out in college at the Cooper Union studying art and architecture. There I began building robots using artificial intelligence, which led me into mathematics and then to finance. In 2008 I was on the securities floor of JP Morgan when the banks collapsed. It was a peek into the actual governance of the banking system, and it left an indelible impression on me — an experience that reinforced my preoccupation with macro-economics.

I heard about bitcoin when it was just getting started, and I learned more and more about cryptography and computer security so I could evaluate the viability of the tech before advising my employer — Victor Niederhoffer PhD — to buy, back in 2010.

A number of economists, such as Lawrence White PhD and Scott Sumner PhD, influenced my view of the unsoundness of our present practice of interest rate monetary targeting. Bitcoin seemed to present a compelling alternative, but they don’t seem to have felt it was their place to promote that alternative. The problem of an objective solution for macro-economic targets seemed unsolvable. To the best of my knowledge, they don’t have crypto positions even today.

Professionally, I tend to work from fundamentals. Free banking appealed to me because the responsibility is on the individual to do the diligence and check out the fundamentals of an institution — instead of relying on “trusted 3rd parties”. This seemed like a good idea after what I’d seen working in banks and securities firms.

When I started to evaluate quantum technologies in 2015, I had a similar experience to when I looked at macro-economic proposals. The approaches being pursued were fundamentally unsound, in that they couldn’t be programmed to run — even if they got to larger qubit numbers. This is still true today.


QUESTION • by Ray Kurzweil

Why did you start Turing?

ANSWER • by Michele Reilly

Mathematicians can often speed things up in physics by investigating where they’re bottlenecked. This doesn’t happen often enough or systematically enough right now — the disciplines are siloed from one another and there’s a lot of low-hanging fruit to pick.

I pinpointed the major bottlenecks to the quantum computing stack and started to work on them. In other companies’ proposed development plans, no one was able to put prices on qubits — or even seriously considered portability. I saw an opportunity to address critical problems early and systematically.

We knew we had to start from a system whereby, if you can get a handful of qubits, you have a recipe for scaling up. This is the system we’re prototyping. Since today’s proposals are not doing this, I believe those efforts will collapse in a few years — while we push ourselves on first principles to price in at under $10/qubit.


QUESTION • by Ray Kurzweil

What would you say is missing in the industry at-large that you’re accounting for?

ANSWER • by Michele Reilly

For a computer to be useful, you have to be able to forget the underlying hardware most of the time — and consider the program and the application. In the case of quantum devices, original equipment manufacturers (OEMs) are ignoring the top-down overhead when building what they call quantum chip-sets. They hope academics will make progress that they will eventually utilize in their development.

The industry postpones: error correction, data I/O, heat load accounting from the classical overhead required to run quantum machines — let alone the operating system, and other fault tolerance features — claiming it should be done in stages. Stages won’t work for quantum development.

When you build that way and later take into account error correction and other I/O, the thermodynamic and resource costs explode, and what’s more, the systems cannot work for large algorithms. Any one of the pieces required has massive and egregious overhead, and practical engineering limitations, so if you don’t do the all-in accounting from the outset, you can easily get stuck in practice.

Most development teams — even within large companies who look to the horizon — are approaching development in a piecemeal way. They start with 1 qubit, then re-develop for 2, 3 — and so on. They might work on some error correction, but only for the system sizes they already have. It doesn’t mean a thing to count qubits and increase one at a time if you always have to reboot the development in order to add one more.

The marketing of quantum supremacy — which refers to solving a problem that no classical computer can solve in a reasonable amount of time — alongside the current industry norm of one-by-one qubit count announcements, is something of an unproductive distraction, in my opinion. I believe this is popular because it’s convenient for marketing departments, but it’s not pushing the industry forward in a meaningful way.


QUESTION • by Ray Kurzweil

Can you tell us something about the quantum internet?

ANSWER • by Michele Reilly

The current state of the tech is not very close to a true quantum internet. For instance, China’s Micius satellite is an impressive but extremely expensive quantum communications prototype. If we take that as a starting point, it will be incredibly difficult to make even incremental progress towards a global quantum internet for several reasons — the greatest of which is that the bandwidth can’t be expanded enough to become usefully large.

It operates at around 1 Hz, while a usable internet will need rates in the megahertz for users — and gigahertz to even terahertz for servers and internet providers. And this is certainly not the only limitation. Most of what I have done professionally is run the numbers over limiting scenarios.
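Running those numbers is straightforward back-of-the-envelope arithmetic (the pair count below is my own illustrative assumption, not a figure from the interview): at a 1 Hz link rate, distributing even a modest number of entangled pairs takes days, while the rates Reilly describes for users and servers bring it down to seconds or less.

```python
# Back-of-the-envelope sketch: time to distribute a fixed number of
# entangled pairs at different link rates, from satellite-class (~1 Hz)
# up to the MHz/GHz rates a usable quantum internet would need.
def seconds_to_distribute(pairs: int, rate_hz: float) -> float:
    """Time in seconds to deliver `pairs` entangled pairs at `rate_hz`."""
    return pairs / rate_hz

pairs = 1_000_000  # illustrative workload: one million entangled pairs
for label, rate in [("satellite-class (~1 Hz)", 1.0),
                    ("user-grade (1 MHz)", 1e6),
                    ("server-grade (1 GHz)", 1e9)]:
    print(f"{label}: {seconds_to_distribute(pairs, rate):g} s")
```

The 1 Hz case works out to a million seconds — roughly eleven and a half days — which is the gap Reilly is pointing at between today’s prototypes and a usable network.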


QUESTION • by Ray Kurzweil

Outside of Turing what do you and Seth talk about?

ANSWER • by Michele Reilly

We consider what memory actually means inside quantum machines, and in the universe in full generality. We’ve been thinking through definitions of causality, and we study the science of how to store quantum memories: more concretely, how to lengthen these memory times using quantum protocols.

More abstractly, how concepts of quantum memory and heat can be combined to explain why we discover ourselves to be near the dawn of the universe — a research program inspired by the idea of a “speed prior” but aiming for a more rigorous and foundational solution.

The concept of memory is very closely related to that of entropy in information theory. These are much better pinned-down concepts than “time” in physics — not to mention concepts such as “consciousness”. If it turns out that memory and entropy are more fundamental concepts — if everything we wish to explain in terms of time, or in terms of consciousness is better explainable in terms of memory and entropy — that would certainly be interesting.


QUESTION • by Ray Kurzweil

You talk about foundations a lot. What kinds of foundational questions motivate you?

ANSWER • by Michele Reilly

It’s generally an interesting question to me, why we remember what we do. To presume that one can ask an AI inside a quantum computer whether it remembers so-called multiple histories of itself — I think it’s an error in parsing the meaning of what’s happening inside of these machines. David Deutsch PhD thinks it’s possible. He famously proposed a testable version of the “Wigner’s friend” thought experiment.

But to say multiple histories will be able to be recalled by the AI — or an AGI inside the quantum machine — while compelling, I suspect is wrong. It’s reifying a metaphor.

Einstein said everything is a miracle — or nothing is. For one, how is it that time causes the same thing to have happened from different perspectives? I think Deutsch is making a mistake — and thinking there’s some way (a way to arrange things in the quantum computer) such that time can have caused different outcomes to have happened from the same perspective.

This implies a confusion, and the replacing of quantum mechanics with a somewhat cruder metaphor. Everett didn’t believe in the multi-verse. He didn’t think it was accurate, but was willing to go along with it — as it was at least less wrong than the Copenhagen Interpretation. Deutsch seems to believe that you can have a quantum computer that contains AIs who can recall different past perspectives: an AI remembering the “sum over histories” so to speak.

BQP is the class of decision problems solvable by a quantum computer in polynomial time, with bounded error. NP is the set of decision problems for which the problem instances — where the answer is “yes” — have proofs verifiable in polynomial time by a deterministic Turing machine.

His belief would seem to imply that BQP = NP or possibly EXP (i.e. the set of all decision problems that are solvable by a deterministic Turing machine in exponential time) — which violates a strong expert consensus — and Deutsch doesn’t indicate that he’s actively critiquing that consensus.

There’s an exponential space of quantum memory registers inside of the quantum computer — but most of that information can’t be brought together by an “agent self” in a quantum space. Agency lives in orthogonal bases. You don’t use all bases; it’s only through interference that you bring bits together from a quantum mechanical world.



image | above

Artist’s illustration of the portable quantum computer hard drives built by the company Turing.



image | above

Portrait of Michele Reilly — founder + CEO of Turing company. Her quantum computer designs are leading-edge.

credit: Turing ~ visit


related viewing


1. |

group: Foresight Institute
tag line: Advancing beneficial technologies.
web: home | channel

featurette title: How to build a quantum internet — with Michele Reilly
watch | featurette

— summary —

How do you build a quantum internet that has the same latency as our current internet?

Today’s best proposals — quantum satellites + quantum repeaters, sending photons through fiber optic cables — don’t have this feature. That’s essential for the tech to be meaningful for future adoption.


— notes + abbreviations —

MIT = Massachusetts Institute of Technology

AI = artificial intelligence
AGI = artificial general intelligence
I/O = input + output
OEM = original equipment manufacturer


[ post file ]

post title: interview by Ray Kurzweil | A conversation with Michele Reilly of Turing
deck: On advances in quantum computing: architecture, functionality, and scalability.

collection: the Kurzweil library
tab: spotlight

[ end of file ]