How Calculated Risk-Taking Can Lead to Scientific Innovation ‹ Literary Hub


In the late 1990s, I caught a nasty case of the so-called Oxford flu. I was a grad student at the University of Cambridge at the time, working toward a PhD in condensed matter physics at the hallowed Cavendish Laboratory. It’s an inspiring but daunting place to be a young physicist. You’re always conscious of the shadow of immortals like Isaac Newton, whose apple tree (or at least a descendant grafted from the original) you walk by every day on Trinity Street. Thirty members of the Cavendish—and counting—have won Nobel Prizes, and along the corridors the muttonchopped visages of textbook names like James Clerk Maxwell, Lord Rayleigh, J. J. Thomson, and Ernest Rutherford gaze down at you expectantly.


The most daunting part of all, though, is the fundamental requirement for attaining a doctorate at the university: “the creation and interpretation of new knowledge.” I’d earned my place at Cambridge by excelling at solving the riddles posed by my undergraduate professors. A single question in the weekly problem sets might require three or four pages of dense mathematical manipulations. But we always knew that the answer existed, and that the route to the answer could be found somewhere in that week’s lecture notes.

We were traversing terrain that had already been exhaustively mapped. To create new knowledge, on the other hand, would require a plunge into the unknown—“Voyaging through strange seas of Thought, alone,” as William Wordsworth imagined Isaac Newton doing. How, I wondered, would I even determine which direction to sail in, let alone whether something interesting awaited on the other side?


By 1999, I was two years into my PhD and painfully aware that I had yet to create any knowledge. That was when I started to notice a new topic popping up repeatedly in talks and journal clubs and, perhaps most important, in the Cavendish’s famous teatime conversations. Every day at 10:30 a.m. and again at 3:00 p.m., virtually everyone in the building would converge on the tearoom, where staff served tea and coffee starting at ten pence a cup.

It was a tradition started by J. J. Thomson, who discovered the electron in 1897, and the semi-random seating meant that you’d find yourself chatting not just with your fellow students but also with grizzled lab technicians, visiting foreign academics, and eminent professors. The topic was usually either soccer or physics, and in the latter case I began to hear more and more about a mysterious entity called a quantum computer.


A computer, at its core, is a device that manipulates data stored in the form of “bits,” each of which can take on a value of either zero or one. A quantum computer is one that obeys the peculiar laws of quantum mechanics. Most notably: instead of bits, it has qubits (pronounced “kew bits”) that can exist in a superposition of zero and one simultaneously. Richard Feynman, a Nobel Prize-winning physicist, mused about the concept in a famous 1981 lecture at MIT.

Four years later, the Oxford University physicist David Deutsch drew up a formal definition of how such a machine might work, and a few researchers around the world began to explore what it would be good for. In 1994, a mathematician at Bell Labs in New Jersey named Peter Shor published an algorithm showing that, if you could actually build a quantum computer, it would be able to solve certain problems like factoring large numbers—the core of modern encryption methods—exponentially faster than any classical computer.

It was Rolf Landauer, a physicist at IBM’s research center in New York, who coined the term “Oxford flu” to describe the sudden surge of interest that followed Shor’s breakthrough. What had once been an esoteric brain game played mostly by theoretical physicists and mathematicians affiliated with the University of Oxford became the talk of teatime in physics departments around the world. Grant money began to flow, including from organizations concerned with cryptography like the U.S. National Security Agency. My own research at that point was focused on the curious properties of certain exotic semiconductors, but I began to reconsider what these structures might be good for. I added a slide to my talks reframing my experiments as part of the grand pursuit of a quantum computer.

I still hadn’t created any knowledge, though. I was following a trail blazed by Feynman and Deutsch and others. I didn’t know how they had managed to find this potentially fruitful new territory to explore. Neither, it turned out, did Deutsch. “The stuff that I did in the late nineteen-seventies and early nineteen-eighties didn’t use any innovation that hadn’t been known in the thirties,” he later told a reporter. “The question is why.”

*


Coming up with new ideas is a more abstract concept than discovering a new continent, but the underlying processes of physical and conceptual exploration have more in common than you might think. “There’s a lot of evidence that the same neural machinery we use for exploring the physical world around us is also leveraged to explore more abstract concepts,” says Charley Wu, a computational cognitive scientist.

Some of the parallels are obvious: the explore-exploit dilemma, for example, recurs across domains. Just as European spice merchants once had to choose between navigating the well-known but arduous Silk Road and sailing west to seek a new route to Asia, companies have to choose whether to devote their research dollars to incremental improvements of their existing offerings or blue-sky attempts to come up with completely new products.

Artists have to decide whether to create new work within the parameters of existing genres, or break the rules. Movie studios have to weigh the benefits of a fresh but untried story versus yet another sequel. Like pulling the lever in a multi-armed bandit experiment, choosing the uncertain option gives you a chance of dramatic success but also raises the likelihood of abject failure.
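The multi-armed bandit analogy above can be made concrete in a few lines of code. This is a minimal sketch of the classic epsilon-greedy strategy, in which a gambler usually pulls the lever with the best payoff so far (exploit) but occasionally tries a random one (explore); the arm payoffs, epsilon value, and pull count here are illustrative assumptions, not figures from the text:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, pulls=10_000, seed=0):
    """Play a multi-armed bandit: usually exploit the arm with the best
    observed average, occasionally explore a random arm instead."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)    # how often each arm was pulled
    totals = [0.0] * len(true_means)  # summed rewards per arm
    for _ in range(pulls):
        if rng.random() < epsilon:    # explore: pick any arm at random
            arm = rng.randrange(len(true_means))
        else:                         # exploit: best average so far
            arm = max(range(len(true_means)),
                      key=lambda a: totals[a] / counts[a]
                      if counts[a] else float("inf"))
        reward = rng.gauss(true_means[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Three hypothetical "options" with different expected payoffs;
# the strategy should end up pulling the best one most often.
counts = epsilon_greedy([0.2, 0.5, 0.9])
```

Even this toy version exhibits the trade-off the passage describes: the exploratory pulls are usually wasted on inferior arms, but without them the gambler can lock onto a mediocre option and never discover the better one.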

But the parallels run deeper than risk-reward calculations. Since the discovery of cognitive maps in our brains in the 1970s, scientists have been debating whether this neural circuitry is merely a sort of internal GPS system, or whether it has broader uses. The latter view—the intellectual offspring of Edward Tolman’s idea that broad conceptual cognitive maps help us navigate “that great God-given maze which is our human world”—has gained the upper hand in recent years. We don’t just map places in our hippocampus; we also map ideas. We keep track of our social networks, for example, by charting how near or far people are from us in a two-dimensional space defined by how powerful the other person is and what sort of experiences we have with them. That map too is plotted in the hippocampus.

One telltale sign of how we map ideas is the language we use to describe them. Spatial metaphors “structure some of our most fundamental concepts, including our concepts of time, quantity, similarity, good, and evil,” psychologists Benjamin Pitt and Daniel Casasanto point out in a 2022 paper. “For example, in English, we use vertical space to talk about high and low numbers, lateral space to talk about the left and right poles of the political spectrum, and sagittal space to talk about moving meetings forward or back in time. Quantities can be big or small; vacations can be long or short; acquaintances can be close or distant.” Even the internet is encoded as a physical space in our brains: we point our thumbs upward to signify approval, and click a leftward-pointing arrow to go “back” to a previous page.


The spatial organization of ideas isn’t just a quirk of language. Similar patterns show up in unexpected and unspoken ways. For example, we tend to have more positive associations with words that contain more letters typed with the right hand than the left hand—an effect that has been demonstrated not just in English but also in Dutch, Spanish, Portuguese, and German, and occurs in both right-handers and left-handers. Clearly there’s no evolutionary benefit to this “QWERTY effect.” It’s just a byproduct of how we map concepts like “good” and “bad” in our minds: if something is getting better, we tend to imagine it moving to the right on some imaginary scale. Ten out of ten is on the right (or at the top), while zero out of ten is on the left (or at the bottom). We can even find shortcuts between ideas.

In a 1996 study, the neuroscientist Howard Eichenbaum trained rats to associate certain odors with other odors, or with rewards buried under a mix of sand and ground-up rat chow. He found that if rats learned to associate odor A with odor B, and also learned to associate odor B with the presence of a buried reward, then they could make the cognitive leap to assume that odor A also implied a reward. This is exactly analogous to the role of a cognitive map in physical wayfinding: if you know how to get from home to school, and from school to the library, your hippocampus enables you to plot a route directly from home to the library. Tellingly, Eichenbaum found that rats with a damaged hippocampus couldn’t figure out the conceptual shortcut.

A 2020 study from Charley Wu and his colleagues tested the parallels between exploring in space and exploring ideas by having subjects complete two different treasure hunts. In the spatial treasure hunt, subjects used the arrow keys to navigate around a two-dimensional map searching for pockets of high reward, choosing whether to exploit whatever rewards they’d found in one region or explore other regions in search of richer yields. The conceptual treasure hunt involved using the arrow keys to change the shape and orientation of a striped geometrical pattern called a Gabor patch.

Subjects had to explore the various possible combinations of features to figure out which ones had been assigned the highest reward value. In both cases, the subjects used a combination of random and uncertainty-directed exploration to zero in on the treasure. We search for new ideas in much the same way as we wander through the streets of an unfamiliar city, integrating clues from what we’ve seen before to predict what we’ll find around the next corner, and tracking it all in our hippocampus.
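The “uncertainty-directed” exploration described above — deliberately favoring options whose value is still poorly known — is often modeled with an upper-confidence-bound rule. This is a minimal sketch of the standard UCB1 algorithm, not the actual model used in the study; the payoffs and pull count are illustrative:

```python
import math
import random

def ucb1(true_means, pulls=5_000, seed=1):
    """UCB1: pick the arm with the highest observed mean PLUS an
    uncertainty bonus, so rarely tried arms get deliberately explored."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    means = [0.0] * n
    for t in range(1, pulls + 1):
        if t <= n:            # try every arm once before trusting estimates
            arm = t - 1
        else:                 # bonus shrinks as an arm is pulled more often
            arm = max(range(n), key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    return counts

counts = ucb1([0.1, 0.4, 0.8])
```

Unlike purely random exploration, the bonus term steers the search toward whatever is most uncertain, which is a reasonable formal stand-in for the mix of strategies the subjects used.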

*


Quantum computing’s origin story is a perfect illustration of a common source of new ideas: the intersection of two old ones. The familiar idea, for David Deutsch, was quantum mechanics. Deutsch is a physicist’s physicist, a famously eccentric Englishman who, despite his affiliation with Oxford, has never held an actual academic job there—largely because he hates teaching, but also because he prefers not to leave his cluttered but comfortable house.

When a Japanese film crew once wanted to clean up the mess before filming an interview, they had to promise to take extensive photographs and reconstruct the chaos after they’d finished. His PhD, which he completed in 1978, dealt with quantum field theory in curved space-time. But it was a brush with ideas from a completely separate field—computer science—that pointed him in a new direction.

At the time, computer scientists were excited about the emerging subfield of computational complexity, which seeks to understand and classify the difficulty of computational problems. Deutsch was skeptical of the whole endeavor. How could you measure the difficulty of a problem, he asked a colleague, without a universal standard computer to run the calculations on? “Well, the thing is, there is a fundamental computer,” the colleague replied. “The fundamental computer is physics itself.”

Deutsch was struck by this insight—but it also occurred to him that computer scientists were still using the wrong computer. They were making their calculations based on classical Newtonian physics, which is a simplified approximation of the more complex rules of quantum mechanics fleshed out by physicists like Erwin Schrödinger and Werner Heisenberg in the 1920s.


Deutsch began to wonder what a universal computer—a notional construct devised by Alan Turing in the 1930s—would look like if it followed the rules of quantum mechanics. That question ultimately led to the 1985 paper in which Deutsch introduced the concept of a qubit and effectively created the field of quantum computing. Half a century had passed since Turing, Schrödinger, and Heisenberg had presented their respective ideas, but no one had connected them before.

Similarly, Richard Feynman’s early insights about quantum computing were prompted in part by his son, a student at MIT, switching focus from philosophy to computer science. Feynman already had the physics; now he was curious about computers, too. As with Deutsch, it was the relationship between two distinct bodies of knowledge—a newly charted pathway in his cognitive map of ideas—that generated Feynman’s fresh insights. This is no coincidence, it turns out.

In 2017, a research team from Belgium and the United States published an analysis of all 785,000 articles that appeared in a widely used scientific database in 2001. They were interested in whether studies that connect previously separate bodies of knowledge are more likely to produce breakthroughs. They assessed “combinatorial novelty” by looking at whether a paper cited previous research from two or more journals that had never before been cited together in a single paper. The vast majority of studies in the database were exploitative rather than exploratory: just 11 percent made at least one new combination, and these novel studies initially got a cold reception. They tended to be published in lower-impact journals, and were less likely to be cited by subsequent papers in the first few years after publication.

But after about three years, they caught up. Ultimately, the most novel one percent of papers were 40 percent more likely to end up as a “big hit,” racking up large numbers of citations and influencing multiple fields. There are similar findings in the patent literature: filings that combine fields of expertise are more likely to produce breakthroughs. On the other hand, the novel papers were also more likely to end up as duds, rarely cited by subsequent researchers. Generating genuinely new ideas is a high-risk, high-reward enterprise.
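The “combinatorial novelty” measure used in the 2017 analysis — flagging a paper when it cites a pair of journals never before cited together — can be sketched in a few lines. This is a toy illustration under the simplifying assumption that each paper is just a list of cited journal names (the names below are made up), not the study’s actual pipeline:

```python
from itertools import combinations

def novel_combinations(papers):
    """For each paper (a list of journals it cites, in publication order),
    flag it if it cites at least one journal pair never co-cited before."""
    seen_pairs = set()
    flags = []
    for journals in papers:
        # All unordered pairs of distinct journals this paper cites.
        pairs = {tuple(sorted(p)) for p in combinations(set(journals), 2)}
        flags.append(bool(pairs - seen_pairs))  # any pair never seen before?
        seen_pairs |= pairs
    return flags

papers = [
    ["Phys Rev", "Nature"],        # first paper: every pair is new
    ["Phys Rev", "Nature"],        # repeats a known pair: not novel
    ["Phys Rev", "J Comput Sci"],  # bridges two fields: novel
]
flags = novel_combinations(papers)  # [True, False, True]
```

The measure is deliberately coarse — it says nothing about whether the bridge between fields is a good idea, only that nobody had built it before — which is exactly why the novel papers in the study split into big hits and duds.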

In a sense, bridging the gap between two distant fields is the conceptual equivalent of looking for a new route between Europe and India, or crossing the uncharted interior of a continent. The start and end points are already known, but mapping the route between them creates new knowledge. Henri Poincaré, the brilliant nineteenth-century French polymath who is sometimes said to be the last person to know all of mathematics, argued that the greatest insights come from linking “domains which are far apart.” That’s easier said than done, of course.

Big-data analyses of how we discover new music in online catalogs and how new pages get added to Wikipedia fit with a model of “expanding the adjacent possible.” We tend to discover things on the border of what we already know, the researchers write, “mostly retracing well-worn paths, but every so often stepping somewhere new, and in the process, breaking through to a new piece of the space.” We’re like John McDouall Stuart, crossing Australia one waterhole at a time and never venturing too far from our last camp.

The other approach—call it the Burke and Wills strategy—involves greater leaps and greater risks. But the payoff is sometimes worthwhile. In 2015, a University of Chicago team analyzed millions of scientific articles and patents over a thirty-five-year period in order to map the network of relationships between pairs of chemicals and quantify the “cognitive distance” between them. The overwhelming majority of biomedical scientists, they found, stuck to “exploring the local neighborhood”: if they made a new connection between two chemicals at all, it was only a slight variation on previous combinations.

The biggest breakthroughs—those that won Nobel and other prizes and racked up the most citations—tended to come from bridging bigger cognitive distances, but relatively few scientists followed this approach. “When the concepts under study are more distant, more effort is required to imagine and coordinate their combinations,” the researchers conclude. “More risk is involved in testing distant claims, because no similar claims have been successful.”

__________________________________


Excerpted from The Explorer’s Gene: Why We Seek Big Challenges, New Flavors, and the Blank Spots on the Map by Alex Hutchinson. Copyright © 2025 by Alex Hutchinson. From Mariner Books, an imprint of HarperCollins Publishers. Reprinted by permission.


