Dream Machine

David Deutsch believes that quantum computers will give evidence for the existence of parallel universes. Photograph by Hans Gissinger

On the outskirts of Oxford lives a brilliant and distressingly thin physicist named David Deutsch, who believes in multiple universes and has conceived of an as yet unbuildable computer to test their existence. His books have titles of colossal confidence (“The Fabric of Reality,” “The Beginning of Infinity”). He rarely leaves his house. Many of his close colleagues haven’t seen him for years, except at occasional conferences via Skype.

Deutsch, who has never held a job, is essentially the founding father of quantum computing, a field that devises distinctly powerful computers based on the branch of physics known as quantum mechanics. With one millionth of the hardware of an ordinary laptop, a quantum computer could store as many bits of information as there are particles in the universe. It could break previously unbreakable codes. It could answer questions about quantum mechanics that are currently far too complicated for a regular computer to handle. None of which is to say that anyone yet knows what we would really do with one. Ask a physicist what, practically, a quantum computer would be “good for,” and he might tell the story of the nineteenth-century English scientist Michael Faraday, a seminal figure in the field of electromagnetism, who, when asked how an electromagnetic effect could be useful, answered that he didn’t know but that he was sure that one day it could be taxed by the Queen.
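By way of illustration of the scaling behind that storage claim, here is a rough, back-of-the-envelope sketch in Python. The figures are assumed round numbers, not the article's: a laptop with on the order of a hundred billion bits of hardware, and the commonly quoted estimate of roughly 10^80 particles in the observable universe.

```python
from math import log10

laptop_bits = 1e11          # assumed: an ordinary laptop's hardware, in bits
qubits = laptop_bits / 1e6  # "one millionth of the hardware" -> ~100,000 qubits

# n qubits are described by 2**n amplitudes; compare with the commonly
# quoted estimate of ~10**80 particles in the observable universe.
print(f"~10^{qubits * log10(2):.0f} amplitudes vs. ~10^80 particles")
```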

In a stairwell of Oxford’s Clarendon Physics Laboratory there is a photo poster from the late nineteen-nineties commemorating the Oxford Center for Quantum Computation. The photograph shows a well-groomed crowd of physicists gathered on the lawn. Photoshopped into a far corner, with the shadows all wrong, is the head of David Deutsch, looking like a time traveller teleported in for the day. It is tempting to interpret Deutsch’s representation in the photograph as a collegial joke, because of Deutsch’s belief that if a quantum computer were built it would constitute near-irrefutable evidence of what is known as the Many Worlds Interpretation of quantum mechanics, a theory that proposes pretty much what one would imagine it does. A number of respected thinkers in physics besides Deutsch support the Many Worlds Interpretation, though they are a minority, and primarily educated in England, where the intense interest in quantum computing has at times been termed the Oxford flu.

But the infection of Deutsch’s thinking has mutated and gone pandemic. Other scientists, although generally indifferent to the truth or falsehood of Many Worlds as a description of the universe, are now working to build these dreamed-up quantum-computing machines. Researchers at centers in Singapore, Canada, and New Haven, in collaboration with groups such as Google and NASA, may soon build machines that will make today’s computers look like pocket calculators. But Deutsch complements the indifference of his colleagues to Many Worlds with one of his own—a professional indifference to the actual building of a quantum computer.

Physics advances by accepting absurdities. Its history is one of unbelievable ideas proving to be true. Aristotle quite reasonably thought that an object in motion, left alone, would eventually come to rest; Newton discovered that this wasn’t true, and from there worked out the foundation of what we now call classical mechanics. Similarly, physics surprised us with the facts that the Earth revolves around the sun, time is curved, and the universe, if viewed from the outside, is beige.

“Our imagination is stretched to the utmost,” the Nobel Prize-winning physicist Richard Feynman noted, “not, as in fiction, to imagine things which are not really there, but just to comprehend those things which are there.” Physics is strange, and the people who spend their lives devoted to its study are more accustomed to its strangeness than the rest of us. But, even to physicists, quantum mechanics—the basis of a quantum computer—is almost intolerably odd.

Quantum mechanics describes the natural history of matter and energy making their way through space and time. Classical mechanics does much the same, but, while classical mechanics is very accurate when describing most of what we see (sand, baseballs, planets), its descriptions of matter at a smaller scale are simply wrong. At a fine enough resolution, all those reliable rules about balls on inclined planes start to fail.

Quantum mechanics states that particles can be in two places at once, a quality called superposition; that two particles can be related, or “entangled,” such that they can instantly coördinate their properties, regardless of their distance in space and time; and that when we look at particles we unavoidably alter them. Also, in quantum mechanics, the universe, at its most elemental level, is random, an idea that tends to upset people. Confess your confusion about quantum mechanics to a physicist and you will be told not to feel bad, because physicists find it confusing, too. If classical mechanics is George Eliot, quantum mechanics is Kafka.

All the oddness would be easier to tolerate if quantum mechanics merely described marginal bits of matter or energy. But it is the physics of everything. Even Einstein, who felt at ease with the idea of wormholes through time, was so bothered by the whole business that, in 1935, he co-authored a paper titled “Can quantum-mechanical description of physical reality be considered complete?” He pointed out some of quantum mechanics’s strange implications, and then answered his question, essentially, in the negative. Einstein found entanglement particularly troubling, denigrating it as “spooky action at a distance,” a telling phrase, which consciously echoed the seventeenth-century disparagement of gravity.

The Danish physicist Niels Bohr took issue with Einstein. He argued that, in quantum mechanics, physics had run up against the limit of what science could hope to know. What seemed like nonsense was nonsense, and we needed to realize that science, though wonderfully good at predicting the outcomes of individual experiments, could not tell us about reality itself, which would remain forever behind a veil. Science merely revealed what reality looked like to us.

Bohr’s stance prevailed over Einstein’s. “Of course, both sides of that dispute were wrong,” Deutsch observed, “but Bohr was trying to obfuscate, whereas Einstein was actually trying to solve the problem.” As Deutsch notes in “The Fabric of Reality,” “To say that prediction is the purpose of a scientific theory is to confuse means with ends. It is like saying that the purpose of a spaceship is to burn fuel.” After Bohr, a “shut up and calculate” philosophy took over physics for decades. To delve into quantum mechanics as if its equations told the story of reality itself was considered sadly misguided, like those earnest inquiries people mail to 221B Baker Street, addressed to Sherlock Holmes.

I met David Deutsch at his home, at four o’clock on a wintry Thursday afternoon. Deutsch grew up in the London area, took his undergraduate degree at Cambridge, stayed there for a master’s in math—which he claims he’s no good at—and went on to Oxford for a doctorate in physics. Though affiliated with the university, he is not on staff and has never taught a course. “I love to give talks,” he told me. “I just don’t like giving talks that people don’t want to hear. It’s wrong to set up the educational system that way. But that’s not why I don’t teach. I don’t teach for visceral reasons—I just dislike it. If I were a biologist, I would be a theoretical biologist, because I don’t like the idea of cutting up frogs. Not for moral reasons but because it’s disgusting. Similarly, talking to a group of people who don’t want to be there is disgusting.” Instead, Deutsch has made money from lectures, grants, prizes, and his books.

In the half-light of the winter sun, Deutsch’s house looked a little shabby. The yard was full of what appeared to be English ivy, and near the entrance was something twiggy and bushlike that was either dormant or dead. A handwritten sign on the door said that deliveries should “knock hard.” Deutsch answered the door. “I’m very much in a rush,” he told me, before I’d even stepped inside. “In a rush about so many things.” His thinness contributed to an oscillation of his apparent age between nineteen and a hundred and nineteen. (He’s fifty-seven.) His eyes, behind thick glasses, appeared outsized, like those of an appealing anime character. His vestibule was cluttered with old phone books, cardboard boxes, and piles of papers. “Which isn’t to say that I don’t have time to talk to you,” he continued. “It’s just that—that’s why the house is in such disarray, because I’m so rushed.”

More than one of Deutsch’s colleagues told me about a Japanese documentary film crew that had wanted to interview Deutsch at his house. The crew asked if they could clean up the house a bit. Deutsch didn’t like the idea, so the film crew promised that after filming they would reconstruct the mess as it was before. They took extensive photographs, like investigators at a crime scene, and then cleaned up. After the interview, the crew carefully reconstructed the former “disorder.” Deutsch said he could still find things, which was what he had been worried about.

Taped onto the walls of Deutsch’s living room were a map of the world, a periodic table, a hand-drawn cartoon of Karl Popper, a poster of the signing of the Declaration of Independence, a taxonomy of animals, a taxonomy of the characters in “The Simpsons,” color printouts of pictures of McCain and Obama, with handwritten labels reading “this one” and “that one,” and two color prints of an actor who looked to me a bit like Hugh Grant. There were also old VHS tapes, an unused fireplace, a stationary exercise bike, and a large flat-screen television whose newness had no visible companion. Deutsch offered me tea and biscuits. I asked him about the Hugh Grant look-alike.

“You obviously don’t watch much television,” he replied. The man in the photographs was Hugh Laurie, a British actor known for his role in the American medical show “House.” Deutsch described “House” to me as “a great program about epistemology, which, apart from fundamental physics, is really my core interest. It’s a program about the myriad ways that knowledge can grow or can fail to grow.” Dr. House is based on Sherlock Holmes, Deutsch informed me. “And House has a friend, Wilson, who is based on Watson. Like Holmes, House is an arch-rationalist. Everything’s got to have a reason, and if he doesn’t know the reason it’s because he doesn’t know it, not because there isn’t one. That’s an essential attitude in fundamental science.” One imagines the ghost of Bohr would disagree.

Deutsch’s reputation as a cloistered genius stems in large part from his foundational work in quantum computing. Since the nineteen-thirties, the field of computer science has held on to the idea of a universal computer, a notion first worked out by the field’s modern founder, the British polymath Alan Turing. A universal computer would be capable of comporting itself as any other computer, just as a synthesizer can make the sounds made by any other musical instrument. In a 1985 paper, Deutsch pointed out that, because Turing was working with classical physics, his universal computer could imitate only a subset of possible computers. Turing’s theory needed to account for quantum mechanics if its logic was to hold. Deutsch proposed a universal computer based on quantum physics, which would have calculating powers that Turing’s computer (even in theory) could not simulate.

According to Deutsch, the insight for that paper came from a conversation in the early eighties with the physicist Charles Bennett, of I.B.M., about computational-complexity theory, at the time a sexy new field that investigated the difficulty of a computational task. Deutsch questioned whether computational complexity was a fundamental or a relative property. Mass, for instance, is a fundamental property, because it remains the same in any setting; weight is a relative property, because an object’s weight depends on the strength of gravity acting on it. Identical baseballs on Earth and on the moon have equivalent masses, but different weights. If computational complexity was like mass—if it was a fundamental property—then complexity was quite profound; if not, then not.

“I was just sounding off,” Deutsch said. “I said they make too much of this”—meaning complexity theory—“because there’s no standard computer with respect to which you should be calculating the complexity of the task.” Just as an object’s weight depends on the force of gravity in which it’s measured, the degree of computational complexity depended on the computer on which it was measured. One could find out how complex a task was to perform on a particular computer, but that didn’t say how complex a task was fundamentally, in reference to the universe. Unless there really was such a thing as a universal computer, there was no way a description of complexity could be fundamental. Complexity theorists, Deutsch reasoned, were wasting their time.

Deutsch continued, “Then Charlie said, quietly, ‘Well, the thing is, there is a fundamental computer. The fundamental computer is physics itself.’ ” That impressed Deutsch. Computational complexity was a fundamental property; its value referenced how complicated a computation was on that most universal computer, that of the physics of the world. “I realized that Charlie was right about that,” Deutsch said. “Then I thought, But these guys are using the wrong physics. They realized that complexity theory was a statement about physics, but they didn’t realize that it mattered whether you used the true laws of physics, or some approximation, i.e., classical physics.” Deutsch began rewriting Turing’s universal-computer work using quantum physics. “Some of the differences are very large,” he said. Thus, at least in Deutsch’s mind, the quantum universal computer was born.

A number of physics journals rejected some of Deutsch’s early quantum-computing work, saying it was “too philosophical.” When it was finally published, he said, “a handful of people kind of got it.” One of them was the physicist Artur Ekert, who had come to Oxford as a graduate student, and who told me, “David was really the first one who formulated the concept of a quantum computer.”

Other important figures early in the field included the reclusive physicist Stephen J. Wiesner, who, with Bennett’s encouragement, developed ideas like quantum money (uncounterfeitable!) and quantum cryptography, and the philosopher of physics David Albert, whose imagining of introspective quantum automata (think robots in analysis) Deutsch describes in his 1985 paper as an example of “a true quantum computer.” Ekert says of the field, “We’re a bunch of odd ducks.”

Although Deutsch was not formally Ekert’s adviser, Ekert studied with him. “He kind of adopted me,” Ekert recalled, “and then, afterwards, I kind of adopted him. My tutorials at his place would start at around 8 P.M., when David would be having his lunch. We’d stay talking and working until the wee hours of the morning. He likes just talking things over. I would leave at 3 or 4 A.M., and then David would start properly working afterwards. If we came up with something, we would write the paper, but sometimes we wouldn’t write the paper, and if someone else also came up with the solution we’d say, ‘Good, now we don’t have to write it up.’ ” It was not yet clear, even in theory, what a quantum computer might be better at than a classical computer, and so Deutsch and Ekert tried to develop algorithms for problems that were intractable on a classical computer but that might be tractable on a quantum one.

One such problem is prime factorization. A holy grail of mathematics for centuries, it is the basis of much current cryptography. It’s easy to take two large prime numbers and multiply them, but it’s very difficult to take a large number that is the product of two primes and then deduce what the original prime factors are. To factor a number of two hundred digits or more would take a regular computer many lifetimes. Prime factorization is an example of a process that is easy one way (easy to scramble eggs) and very difficult the other (nearly impossible to unscramble them). In cryptography, two large prime numbers are multiplied to create a security key. Unlocking that key would be the equivalent of unscrambling an egg. Using prime factorization in this way is called RSA encryption (named for the scientists who proposed it, Rivest, Shamir, and Adleman), and it’s how most everything is kept secret on the Internet, from your credit-card information to I.R.S. records.
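A minimal sketch of that asymmetry, using toy primes picked here for illustration (this is not RSA itself, and real keys use primes hundreds of digits long):

```python
p, q = 104729, 1299709   # two small primes, chosen for illustration
n = p * q                # multiplying them is instantaneous

def factor(n):
    """Recover the primes by trial division: easy at this size, hopeless at two hundred digits."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(n, factor(n))      # the scrambled egg, and its recovered factors
```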

In 1992, the M.I.T. mathematician Peter Shor heard a talk about theoretical quantum computing, which brought to his attention the work of Deutsch and other foundational thinkers in what was then still an obscure field. Shor worked on the factorization problem in private. “I wasn’t sure anything would come of it,” Shor explained. But, about a year later, he emerged with an algorithm that (a) could only be run on a quantum computer, and (b) could quickly find the prime factors of a very large number—the grail! With Shor’s algorithm, calculations that would take a normal computer longer than the history of the universe would take a sufficiently powerful quantum computer an afternoon. “Shor’s work was the biggest jump,” the physicist David DiVincenzo, who is considered among the most knowledgeable about the history of quantum computing, says. “It was the moment when we were, like, Oh, now we see what it would be good for.”

Today, quantum computation has the sustained attention of experimentalists; it also has serious public and private funding. Venture-capital companies are already investing in quantum encryption devices, and university research groups around the world have large teams working both to build hardware and to develop quantum-computer applications—for example, to model proteins, or to better understand the properties of superconductors.

Artur Ekert became a key figure in the transition from pure theory to building machines. He founded the quantum computation center at Oxford, as well as a similar center a few years later at Cambridge. He now leads a center in Singapore, where the government has made quantum-computing research one of its top goals. “Today in the field there’s a lot of focus on lab implementation, on how and from what you could actually build a quantum computer,” DiVincenzo said. “From the perspective of just counting, you can say that the majority of the field now is involved in trying to build some hardware. That’s a result of the success of the field.” In 2009, Google announced that it had been working on quantum-computing algorithms for three years, with the aim of having a computer that could quickly identify particular things or people from among vast stores of video and images—David Deutsch, say, from among millions of untagged photographs.

In the early nineteenth century, a “computer” was any person who computed: someone who did the math for building a bridge, for example. Around 1830, the English mathematician and inventor Charles Babbage worked out his idea for an Analytical Engine, a machine that would remove the human from computing, and thus bypass human error. Nearly no one imagined an analytical engine would be of much use, and in Babbage’s time no such machine was ever built to completion. Though Babbage was prone to serious mental breakdowns, and though his bent of mind was so odd that he once wrote to Alfred Lord Tennyson correcting his math (Babbage suggested rewriting “Every minute dies a man / Every minute one is born” as “Every moment dies a man / Every moment one and a sixteenth is born,” further noting that although the exact figure was 1.167, “something must, of course, be conceded to the laws of meter”)—we can now say the guy was on to something.

A classical computer—any computer we know today—transforms an input into an output through nothing more than the manipulation of binary bits, units of information that can be either zero or one. A quantum computer is in many ways like a regular computer, but instead of bits it uses qubits. Each qubit (pronounced “Q-bit”) can be zero or one, like a bit, but a qubit can also be zero and one—the quantum-mechanical quirk known as superposition. It is the state that the cat in the classic example of Schrödinger’s closed box is stuck in: dead and alive at the same time. If one reads quantum-mechanical equations literally, superposition is ontological, not epistemological; it’s not that we don’t know which state the cat is in, but that the cat really is in both states at once. Superposition is like Freud’s description of true ambivalence: not feeling unsure, but feeling opposing extremes of conviction at once. And, just as ambivalence holds more information than any single emotion, a qubit holds more information than a bit.
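A toy classical simulation can illustrate the distinction. The sketch below uses the standard amplitude picture of a single qubit; it is not specific to any of the machines described later.

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a pair of complex amplitudes (a, b)
# with |a|^2 + |b|^2 = 1: |a|^2 is the chance of reading 0, |b|^2 of reading 1.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# An equal superposition: zero and one at once, not merely unknown.
plus = (zero + one) / np.sqrt(2)

print(np.abs(plus) ** 2)  # [0.5 0.5] -- what "looking" would yield, on average
```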


What quantum mechanics calls entanglement also contributes to the singular powers of qubits. Entangled particles have a kind of E.S.P.: regardless of distance, they can instantly share information that an observer cannot even perceive is there. Input into a quantum computer can thus be dispersed among entangled qubits, which lets the processing of that information be spread out as well: tell one particle something, and it can instantly spread the word among all the other particles with which it’s entangled.

There’s information that we can’t perceive when it’s held among entangled particles; that information is their collective secret. As quantum mechanics has taught us, things are inexorably changed by our trying to ascertain anything about them. Once observed, qubits are no longer in a state of entanglement, or of superposition: the cat commits irrevocably to life or death, and this ruins the quantum computer’s distinct calculating power. A quantum computer is the pot that, if watched, really won’t boil. Charles Bennett described quantum information as being “like the information of a dream—we can’t show it to others, and when we try to describe it we change the memory of it.”

But, once the work on the problem has been done among the entangled particles, then we can look. When one turns to a quantum computer for an “answer,” that answer, from having been held in that strange entangled way, among many particles, needs then to surface in just one, ordinary, unentangled place. That transition from entanglement to non-entanglement is sometimes termed “collapse.” Once the system has collapsed, the information it holds is no longer a dream or a secret or a strange cat at once alive and dead; the answer is then just an ordinary thing we can read off a screen.
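A classical simulation can at least mimic the bookkeeping. In the sketch below, which is illustrative only, two qubits sit in a maximally entangled state; "looking" picks one joint outcome at random and collapses the state to that single ordinary answer, and the two qubits are always found to agree.

```python
import numpy as np

# Amplitudes for the joint outcomes 00, 01, 10, 11 of two entangled qubits
# in the Bell state (|00> + |11>) / sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probs = np.abs(bell) ** 2               # [0.5, 0, 0, 0.5]
outcome = np.random.choice(4, p=probs)  # only 00 or 11 ever occurs

collapsed = np.zeros(4)                 # the "collapse": entanglement is gone,
collapsed[outcome] = 1.0                # leaving one ordinary, readable answer
print(format(outcome, "02b"), collapsed)
```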

Qubits are not merely theoretical. Early work in quantum-computer hardware built qubits by manipulating the magnetic nuclei of atoms in a liquid soup with electrical impulses. Later teams, such as the one at Oxford, developed qubits using single trapped ions, a method that confines charged atomic particles to a particular space. These qubits are very precise, though delicate; protecting them from interference is quite difficult. More easily manipulated, albeit less precise, qubits have been built from superconducting materials arranged to model an atom. Typically, the fabrication of a qubit is not all that different from that of a regular chip. At Oxford, I saw something that resembled an oversize air-hockey table chaotically populated with a specialty Lego set, with what looked like a salad-bar sneeze guard hovering over it; this extended apparatus comprised lasers and magnetic-field generators and optical cavities, all arranged at just the right angles to manipulate and protect from interference the eight tiny qubits housed in a steel tube at the table’s center.

Oxford’s eight-qubit quantum computer has significantly less computational power than an abacus, but fifty to a hundred qubits could make something as powerful as any laptop. A team in Bristol, England, has a small, four-qubit quantum computer that can factor the number 15. A Canadian company claims to have built one that can do Sudoku, though that has been questioned by some who say that the processing is effectively being done by normal bits, without any superposition or entanglement.

Increasing the number of qubits, and thus the computer’s power, is more than a simple matter of stacking. “One of the main problems with scaling up is a qubit’s fidelity,” Robert Schoelkopf, a physics professor at Yale who leads a quantum-computing team, explained. By fidelity, he refers to the fact that qubits “decohere”—fall out of their information-holding state—very easily. “Right now, qubits can be faithful for about a microsecond. And our calculations take about one hundred nanoseconds. Either calculations need to go faster or qubits need to be made more faithful.”
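The arithmetic behind that constraint, using Schoelkopf's round numbers:

```python
coherence_time = 1e-6  # seconds: roughly how long a qubit stays faithful
step_time = 100e-9     # seconds: roughly one calculation step
print(round(coherence_time / step_time))  # ~10 steps before decoherence sets in
```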

What qubits are doing as we avert our gaze is a matter of some dispute, and occasionally—“shut up and calculate”—of some determined indifference, especially for more pragmatically minded physicists. For Deutsch, to really understand the workings of a quantum computer necessitates subscribing to Hugh Everett’s Many Worlds Interpretation of quantum mechanics.

Everett’s theory was neglected upon its publication, in 1957, and is still a minority view. It entails the following counterintuitive reasoning: every time there is more than one possible outcome, all of them occur. So if a radioactive atom might or might not decay at any given second, it both does and doesn’t; in one universe it does, and in another it doesn’t. These small branchings of possibility then ripple out until everything that is possible in fact is. According to Many Worlds theory, instead of a single history there are innumerable branchings. In one universe your cat has died, in another he hasn’t, in a third you died in a sledding accident at age seven and never put your cat in the box in the first place, and so on.

Many Worlds is an ontologically extravagant proposition. But it also bears some comfortingly prosaic implications: in Many Worlds theory, science’s aspiration to explain the world fully remains intact. The strangeness of superposition is, as Deutsch explains it, simply “the phenomenon of physical variables having different values in different universes.” And entanglement, which so bothered Einstein and others, especially for its implication that particles could instantly communicate regardless of their distance in space or time, is also resolved. Information that seemed to travel faster than the speed of light and along no detectable pathway—spookily transmitted as if via E.S.P.—can, in Many Worlds theory, be understood to move differently. Information still spreads through direct contact—the “ordinary” way; it’s just that we need to adjust to that contact being via the tangencies of abutting universes. As a further bonus, in Many Worlds theory randomness goes away, too. A ten-per-cent chance of an atom decaying is not arbitrary at all, but rather refers to the certainty that the atom will decay in ten per cent of the universes branched from that point. (This being science, there’s the glory of nuanced dissent around the precise meaning of each descriptive term, from “chance” to “branching” to “universe.”)

In the nineteen-seventies, Everett’s theory received some of the serious attention it missed at its conception, but today the majority of physicists are not much compelled. “I’ve never myself subscribed to that view,” DiVincenzo says, “but it’s not a harmful view.” Another quantum-computing physicist called it “completely ridiculous,” but Ekert said, “Of all the weird theories out there, I would say Many Worlds is the least weird.” In Deutsch’s view, “Everett’s approach was to look at quantum theory and see what it actually said, rather than hope it said certain things. What we want is for a theory to conform to reality, and, in order to find out whether it does, you need to see what the theory actually says. Which with the deepest theories is actually quite difficult, because they violate our intuitions.”

I told Deutsch that I’d heard that even Everett thought his theory could never be tested.

“That was a catastrophic mistake,” Deutsch said. “Every innovator starts out with the world view of the subject as it was before his innovation. So he can’t be blamed for regarding his theory as an interpretation. But”—and here he paused for a moment—“I proposed a test of the Everett theory.”

Deutsch posited an artificial-intelligence program run on a computer which could be used in a quantum-mechanics experiment as an “observer”; the A.I. program, rather than a scientist, would be doing the problematic “looking,” and, by means of a clever idea that Deutsch came up with, a physicist looking at the A.I. observer would see one result if Everett’s theory was right, and another if the theory was wrong.

It was a thought experiment, though. No A.I. program existed that was anywhere near sophisticated enough to act as the observer. Deutsch argued that theoretically there could be such a program, though it could only be run on radically more advanced hardware—hardware that could model any other hardware, including that of the human brain. The computer on which the A.I. program would run “had to have the property of being universal . . . so I had to postulate this quantum-coherent universal computer, and that was really my first proposal for a quantum computer. Though I didn’t think of it as that. And I didn’t call it a quantum computer. But that’s what it was.” Deutsch had, it seems, come up with the idea for a quantum computer twice: once in devising a way to test the validity of the Many Worlds Interpretation, and a second time through the complexity-theory conversation, with an argument supporting Many Worlds emerging as a consequence.

To those who find the Many Worlds Interpretation needlessly baroque, Deutsch writes, “the quantum theory of parallel universes is not the problem—it is the solution. . . . It is the explanation—the only one that is tenable—of a remarkable and counterintuitive reality.” The theory also explains how quantum computers might work. Deutsch told me that a quantum computer would be “the first technology that allows useful tasks to be performed in collaboration between parallel universes.” The quantum computer’s processing power would come from a kind of outsourcing of work, in which calculations literally take place in other universes. Entangled particles would function as paths of communication among different universes, sharing information and gathering the results. So, for example, with the case of Shor’s algorithm, Deutsch said, “When we run such an algorithm, countless instances of us are also running it in other universes. The computer then differentiates some of those universes (by creating a superposition) and as a result they perform part of the computation on a huge variety of different inputs. Later, those values affect each other, and thereby all contribute to the final answer, in just such a way that the same answer appears in all the universes.”

Deutsch is mainly interested in the building of a quantum computer for its implications for fundamental physics, including the Many Worlds Interpretation, which would be a victory for the argument that science can explain the world and that, consequently, reality is knowable. (“House cures people,” Deutsch said to me when discussing Hugh Laurie, “because he’s interested in solving problems, not because he’s interested in people.”) Shor’s algorithm excites Deutsch, but here is how his excitement comes through in his book “The Fabric of Reality”:

To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

Deutsch believes that quantum computing and Many Worlds are inextricably bound. He is nearly alone in this conviction, though many (especially around Oxford) concede that the construction of a sizable and stable quantum computer might be evidence in favor of the Everett interpretation. “Once there are actual quantum computers,” Deutsch said to me, “and a journalist can go to the actual labs and ask how does that actual machine work, the physicists in question will then either talk some obfuscatory nonsense, or will explain it in terms of parallel universes. Which will be newsworthy. Many Worlds will then become part of our culture. Really, it has nothing to do with making the computers. But psychologically it has everything to do with making them.”

It’s tempting to view Deutsch as a visionary in his devotion to the Many Worlds Interpretation, for the simple reason that he has been a visionary before. “Quantum computers should have been invented in the nineteen-thirties,” he observed near the end of our conversation. “The stuff that I did in the late nineteen-seventies and early nineteen-eighties didn’t use any innovation that hadn’t been known in the thirties.” That is straightforwardly true. Deutsch went on, “The question is why.”

DiVincenzo offered a possible explanation. “Your average physicists will say, ‘I’m not strong in philosophy and I don’t really know what to think, and it doesn’t matter.’ ” He does not subscribe to Many Worlds, but is reluctant to dismiss Deutsch’s belief in it, partly because it has led Deutsch to come up with his important theories, but also because “quantum mechanics does have a unique place in physics, in that it does have a subcurrent of philosophy you don’t find even in Newton’s laws or gravity. But the majority of physicists say it’s a quagmire they don’t want to get into—they’d rather work out the implications of ideas; they’d rather calculate something.”

At Yale, a team led by Robert Schoelkopf has built a two-qubit quantum computer. “Deutsch is an original thinker and those early papers remain very important,” Schoelkopf told me. “But what we’re doing here is trying to develop hardware, to see if these descriptions that theorists have come up with work.” They have configured their computer to run what is known as a Grover’s algorithm, one that deals with a four-card-monte type of question: Which hidden card is the queen? It’s a sort of Shor’s algorithm for beginners, something that a small quantum computer can take on.

The Yale team fabricates their qubit processor chips in house. “The chip is basically made of a very thin wafer of sapphire or silicon—something that’s a good insulator—that we then lay a patterned film of superconducting metal on to form the wiring and qubits,” Schoelkopf said. What they showed me was smaller than a pinkie nail and looked like a map of a subway system.

Schoelkopf and his colleague Michel Devoret, who leads a separate team, took me to a large room of black lab benches, inscrutable equipment, and not particularly fancy monitors. The aesthetic was inadvertent steampunk. The dust in the room made me sneeze. “We don’t like the janitors to come sweep for fear they’ll disturb something,” Schoelkopf said.

The qubit chip is small, but its supporting apparatus is imposing. The largest piece of equipment is the plumbing of the very high-end refrigerator, which reduces the temperature around the two qubits to ten millidegrees above absolute zero. The cold improves the computer’s fidelity. Another apparatus produces the microwave signals that manipulate the qubits and set them into any degree of superposition that an experimenter chooses.

Running this Grover’s algorithm takes a regular computer three or fewer steps—if after checking the third card you still haven’t found the queen, you know she is under the fourth card—and on average it takes 2.25 steps. A quantum computer can run it in just one step. This is because the qubits can represent different values at the same time. In the four-card-monte example, each of the cards is represented by one of four states: 0,0; 0,1; 1,0; 1,1. Schoelkopf designates one of these states as the queen, and the quantum computer must determine which one. “The magic comes from the initial state of the computer,” he explained. Both of the qubits are set up, via pulses of microwave radiation, in a superposition of zero and one, so that each qubit represents two states at once, and together the two qubits represent all four states.
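A classical simulation of that single step, using the textbook oracle-plus-diffusion form of Grover's algorithm on the four card positions (the queen's position below is an arbitrary choice, for illustration):

```python
import numpy as np

state = np.ones(4) / 2.0            # the "magic" initial state: all four cards at once
queen = 2                           # suppose the queen is hidden at position 1,0

state[queen] *= -1                  # oracle: mark the queen by flipping its sign
state = 2 * state.mean() - state    # diffusion: inversion about the mean

print(np.round(state ** 2, 3))      # [0. 0. 1. 0.] -- only the queen remains
```

After that single pass, all of the probability sits on the marked card, which is why one step suffices for four cards; for larger searches, the same step is repeated roughly the square root of N times.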

“Information can, in a way, be holographically represented across the whole computer; that’s what we exploit,” Devoret explained. “This is a property you don’t find in a classical information processor. A bit has to be in one state—it has to be here or there. It’s useful to have the bit be everywhere.”

Through superposition and entanglement, the computer simultaneously investigates each of the four possible queen locations. “Right now we only get the right answer eighty per cent of the time, and we find even that pretty exciting,” Schoelkopf said.

With Grover’s algorithm, or theoretically with Shor’s, calculations are performed in parallel, though not necessarily in parallel worlds. “It’s as if I had a gazillion classical computers that were all testing different prime factors at the same time,” Schoelkopf summarized. “You start with a well-defined state, and you end with a well-defined state. In between, it’s a crazy entangled state, but that’s fine.”

Schoelkopf emphasized that quantum mechanics is a funny system but that it really is correct. “These oddnesses, like superposition and entanglement—they seemed like limitations, but in fact they are exploitable resources. Quantum mechanics is no longer a new or surprising theory that should strike us as odd.”

Schoelkopf seemed to suggest that existential questions like those which Many Worlds poses might be, finally, simply impracticable. “If you have to describe a result in my lab in terms of the computing chip,” he continued, “plus the measuring apparatus, plus the computer doing data collection, plus the experimenter at the bench . . . at some point you just have to give up and say, Now quantum mechanics doesn’t matter anymore, now I just need a classical result. At some point you have to simplify, you have to throw out some of the quantum information.” When I asked him what he thought of Many Worlds and of “collapse” interpretations—in which “looking” provokes a shift from an entangled to an unentangled state—he said, “I have an alternate language which I prefer in describing quantum mechanics, which is that it should really be called Collapse of the Physicist.” He knows it’s a charming formulation, but he does mean something substantive in saying it. “In reality it’s about where to collapse the discussion of the problem.”

I thought Deutsch might be excited by the Yale team’s research, and I e-mailed him about the progress in building quantum computers. “Oh, I’m sure they’ll be useful in all sorts of ways,” he replied. “I’m really just a spectator, though, in experimental physics.”

Sir Arthur Conan Doyle never liked detective stories that built their drama by deploying clues over time. Conan Doyle wanted to write stories in which all the ingredients for solving the crime were there from the beginning, and in which the drama would be, as in the Poe stories that he cited as precedents, in the mental workings of his ideal ratiocinator. The story of quantum computing follows a Holmesian arc, since all the clues for devising a quantum computer have been there essentially since the discovery of quantum mechanics, waiting for a mind to properly decode them.

But writers of detective stories have not always been able to hew to the rationality of their idealized creations. Conan Doyle believed in “spiritualism” and in fairies, even as the most famed spiritualists and fairy photographers kept revealing themselves to be fakes. Conan Doyle was also convinced that his friend Harry Houdini had supernatural powers; Houdini could do nothing to persuade him otherwise. Conan Doyle just knew that there was a spirit world out there, and he spent the last decades of his life corralling evidence ex post facto to support his unshakable belief.

Physicists are ontological detectives. We think of scientists as wholly rational, open to all possible arguments. But to begin with a conviction and then to use one’s intellectual prowess to establish support for that conviction is a methodology that really has worked for scientists, including Deutsch. One could argue that he dreamed up quantum computing because he was devoted to the idea that science can explain the world. Deutsch would disagree.

In “The Fabric of Reality,” Deutsch writes, “I remember being told, when I was a small child, that in ancient times it was still possible to know everything that was known. I was also told that nowadays so much is known that no one could conceivably learn more than a tiny fraction of it, even in a long lifetime. The latter proposition surprised and disappointed me. In fact, I refused to believe it.” Deutsch’s life’s work has been an attempt to support that intuitive disbelief—a gathering of argument for a conviction he held because he just knew.

Deutsch is adept at dodging questions about where he gets his ideas. He joked to me that they came from going to parties, though I had the sense that it had been years since he’d been to one. He said, “I don’t like the style of science reporting that goes over that kind of thing. It’s misleading. So Brahms lived on black coffee and forced himself to write a certain number of lines of music a day. Look,” he went on, “I can’t stop you from writing an article about a weird English guy who thinks there are parallel universes. But I think that style of thinking is kind of a put-down to the reader. It’s almost like saying, If you’re not weird in these ways, you’ve got no hope as a creative thinker. That’s not true. The weirdness is only superficial.”

Talking to Deutsch can feel like a case study of reason following desire; the desire is to be a creature of pure reason. As he said in praise of Freud, “He did a good service to the world. He made it O.K. to speak about the mechanisms of the mind, some of which we may not be aware of. His actual theory was all false, there’s hardly a single true thing he said, but that’s not so bad. He was a pioneer, one of the first who tried to think about things rationally.” ♦