
How We Got to Now

Six Innovations That Made the Modern World

Johnson explores the history of innovation over centuries, tracing facets of modern life from their creation by hobbyists, amateurs, and entrepreneurs to their unintended historical consequences. "You're apt to find yourself exhilarated....Johnson is not composing an etiology of particular inventions, but doing something broader and more imaginative....I particularly like the cultural observations Johnson draws along the way....[He] has a deft and persuasive touch....[A] graceful and compelling book."—The New York Times Book Review.

Introduction

A little more than two decades ago, the Mexican-American artist and philosopher Manuel De Landa published a strange and wonderful book called War in the Age of Intelligent Machines. The book was, technically speaking, a history of military technology, but it had nothing in common with what you might naturally expect from the genre. Instead of heroic accounts of submarine engineering written by some Naval Academy professor, De Landa’s book wove chaos theory, evolutionary biology, and French post-structuralist philosophy into histories of the conoidal bullet, radar, and other military innovations. I remember reading it as a grad student in my early twenties and thinking that it was one of those books that seemed completely sui generis, as though De Landa had arrived on Earth from some other intellectual planet. It seemed mesmerizing and deeply confusing at the same time.

De Landa began the book with a brilliant interpretative twist. Imagine, he suggested, a work of history written sometime in the future by some form of artificial intelligence, mapping out the history of the preceding millennium. “We could imagine,” De Landa argued, “that such a robot historian would write a different kind of history than would its human counterpart.” Events that loom large in human accounts—the European conquest of the Americas, the fall of the Roman Empire, the Magna Carta—would be footnotes from the robot’s perspective. Other events that seem marginal to traditional history—the toy automatons that pretended to play chess in the eighteenth century, the Jacquard loom that inspired the punch cards of early computing—would be watershed moments to the robot historian, turning points that trace a direct line to the present. “While a human historian might try to understand the way people assembled clockworks, motors and other physical contraptions,” De Landa explained, “a robot historian would likely place a stronger emphasis on the way these machines affected human evolution. The robot would stress the fact that when clockworks once represented the dominant technology on the planet, people imagined the world around them as a similar system of cogs and wheels.”

There are no intelligent robots in this book, alas. The innovations here belong to everyday life, not science fiction: lightbulbs, sound recordings, air-conditioning, a glass of clean tap water, a wristwatch, a glass lens. But I have tried to tell the story of these innovations from something like the perspective of De Landa’s robot historian. If the lightbulb could write a history of the past three hundred years, it too would look very different. We would see how much of our past was bound up in the pursuit of artificial light, how much ingenuity and struggle went into the battle against darkness, and how the inventions we came up with triggered changes that, at first glance, would seem to have nothing to do with lightbulbs.

This is a history worth telling, in part, because it allows us to see a world we generally take for granted with fresh eyes. Most of us in the developed world don’t pause to think how amazing it is that we drink water from a tap and never once worry about dying forty-eight hours later from cholera. Thanks to air-conditioning, many of us live comfortably in climates that would have been intolerable just fifty years ago. Our lives are surrounded and supported by a whole class of objects that are enchanted with the ideas and creativity of thousands of people who came before us: inventors and hobbyists and reformers who steadily hacked away at the problem of making artificial light or clean drinking water so that we can enjoy those luxuries today without a second thought, without even thinking of them as luxuries in the first place. As the robot historians would no doubt remind us, we are indebted to those people every bit as much as, if not more than, we are to the kings and conquerors and magnates of traditional history.

But the other reason to write this kind of history is that these innovations have set in motion a much wider array of changes in society than you might reasonably expect. Innovations usually begin life with an attempt to solve a specific problem, but once they get into circulation, they end up triggering other changes that would have been extremely difficult to predict. This is a pattern of change that appears constantly in evolutionary history. Think of the act of pollination: sometime during the Cretaceous period, flowers began to evolve colors and scents that signaled the presence of pollen to insects, who simultaneously evolved complex equipment to extract the pollen and, inadvertently, fertilize other flowers with it. Over time, the flowers supplemented the pollen with even more energy-rich nectar to lure the insects into the rituals of pollination. Bees and other insects evolved the sensory tools to see and be drawn to flowers, just as the flowers evolved the properties that attract bees. This is a different kind of survival of the fittest, not the usual zero-sum competitive story that we often hear in watered-down versions of Darwinism, but something more symbiotic: the insects and flowers succeed because they, physically, fit well with each other. (The technical term for this is coevolution.) The importance of this relationship was not lost on Charles Darwin, who followed up the publication of On the Origin of Species with an entire book on orchid pollination.

These coevolutionary interactions often lead to transformations in organisms that would seem to have no immediate connection to the original species. The symbiosis between flowering plants and insects that led to the production of nectar ultimately created an opportunity for much larger organisms—the hummingbirds—to extract nectar from plants, though to do that they evolved an extremely unusual form of flight mechanics that enables them to hover alongside the flower in a way that few birds can even come close to doing. Insects can stabilize themselves midflight because their anatomy has a fundamental flexibility that vertebrates lack. Yet despite the restrictions placed on them by their skeletal structure, hummingbirds evolved a novel way of rotating their wings, giving power to the upstroke as well as the downstroke, enabling them to float midair while extracting nectar from a flower. These are the strange leaps that evolution makes constantly: the sexual reproduction strategies of plants end up shaping the design of a hummingbird’s wings. Had there been naturalists around to observe the insects first evolving pollination behavior alongside the flowering plants, they would have logically assumed that this strange new ritual had nothing to do with avian life. And yet it ended up precipitating one of the most astonishing physical transformations in the evolutionary history of birds.

The history of ideas and innovation unfolds the same way. Johannes Gutenberg’s printing press created a surge in demand for spectacles, as the new practice of reading made Europeans across the continent suddenly realize that they were farsighted; the market demand for spectacles encouraged a growing number of people to produce and experiment with lenses, which led to the invention of the microscope, which shortly thereafter enabled us to perceive that our bodies were made up of microscopic cells. You wouldn’t think that printing technology would have anything to do with the expansion of our vision down to the cellular scale, just as you wouldn’t have thought that the evolution of pollen would alter the design of a hummingbird’s wing. But that is the way change happens.

This may sound, at first blush, like a variation on the famous “butterfly effect” from chaos theory, where the flap of a butterfly’s wing in California ends up triggering a hurricane in the mid-Atlantic. But in fact, the two are fundamentally different. The extraordinary (and unsettling) property of the butterfly effect is that it involves a virtually unknowable chain of causality; you can’t map the link between the air molecules bouncing around the butterfly and the storm system brewing in the Atlantic. They may be connected, because everything is connected on some level, but it is beyond our capacity to parse those connections or, even harder, to predict them in advance. But something very different is at work with the flower and the hummingbird: while they are very different organisms, with very different needs and aptitudes, not to mention basic biological systems, the flower clearly influences the hummingbird’s anatomy in direct, intelligible ways.

This book is then partially about these strange chains of influence, the “hummingbird effect.” An innovation, or cluster of innovations, in one field ends up triggering changes that seem to belong to a different domain altogether. Hummingbird effects come in a variety of forms. Some are intuitive enough: orders-of-magnitude increases in the sharing of energy or information tend to set in motion a chaotic wave of change that easily surges over intellectual and social boundaries. (Just look at the story of the Internet over the past thirty years.) But other hummingbird effects are more subtle; they leave behind less conspicuous causal fingerprints. Breakthroughs in our ability to measure a phenomenon—time, temperature, mass—often open up new opportunities that seem at first blush to be unrelated. (The pendulum clock helped enable the factory towns of the industrial revolution.) Sometimes, as in the story of Gutenberg and the lens, a new innovation creates a liability or weakness in our natural toolkit that sets us out in a new direction, generating new tools to fix a “problem” that was itself a kind of invention. Sometimes new tools reduce natural barriers and limits to human growth, the way the invention of air-conditioning enabled humans to colonize the hotspots of the planet at a scale that would have startled our ancestors just three generations ago. Sometimes the new tools influence us metaphorically, as in the robot historian’s connection between the clock and the mechanistic view of early physics, the universe imagined as a system of “cogs and wheels.”

Observing hummingbird effects in history makes it clear that social transformations are not always the direct result of human agency and decision-making. Sometimes change comes about through the actions of political leaders or inventors or protest movements, who deliberately bring about some kind of new reality through their conscious planning. (We have an integrated national highway system in the United States in large part because our political leaders decided to pass the Federal-Aid Highway Act of 1956.) But in other cases, the ideas and innovations seem to have a life of their own, engendering changes in society that were not part of their creators’ vision. The inventors of air-conditioning were not trying to redraw the political map of America when they set about to cool down living rooms and office buildings, but, as we will see, the technology they unleashed on the world enabled dramatic changes in American settlement patterns, which in turn transformed the occupants of Congress and the White House.

I have resisted the understandable temptation to assess these changes with some kind of value judgment. Certainly this book is a celebration of our ingenuity, but just because an innovation happens, that doesn’t mean there aren’t, in the end, mixed consequences as it ripples through society. Most ideas that get “selected” by culture are demonstrably improvements in terms of local objectives: the cases where we have chosen an inferior technology or scientific principle over a more productive or accurate one are the exceptions that prove the rule. And even when we do briefly choose the inferior VHS over Betamax, before long we have DVDs that outperform either option. So when you look at the arc of history from that perspective, it does trend toward better tools, better energy sources, better ways to transmit information.

The problem lies with the externalities and unintended consequences. When Google launched its original search tool in 1998, it was a momentous improvement over any previous technique for exploring the Web’s vast archive. That was cause for celebration on almost every level: Google made the entire Web more useful, for free. But then Google started selling advertisements tied into the search requests it received, and within a few years, the efficiency of the searches (along with a few other online services like Craigslist) had hollowed out the advertising base of local newspapers around the United States. Almost no one saw that coming, not even the Google founders. You can make the argument—as it happens, I would probably make the argument—that the trade-off was worth it, and that the challenge from Google will ultimately unleash better forms of journalism, built around the unique opportunities of the Web instead of the printing press. But certainly there is a case to be made that the rise of Web advertising has been, all told, a negative development for the essential public resource of newspaper journalism. The same debate rages over just about every technological advance: Cars moved us more efficiently through space than did horses, but were they worth the cost to the environment or the walkable city? Air-conditioning allowed us to live in deserts, but at what cost to our water supplies?

This book is resolutely agnostic on these questions of value. Figuring out whether we think the change is better for us in the long run is not the same as figuring out how the change came about in the first place. Both kinds of figuring are essential if we are to make sense of history and to map our path into the future. We need to be able to understand how innovation happens in society; we need to be able to predict and understand, as best as we can, the hummingbird effects that will transform other fields after each innovation takes root. And at the same time we need a value system to decide which strains to encourage and which benefits aren’t worth the tangential costs. I have tried to spell out the full range of consequences of the innovations surveyed in this book, the good and the bad. The vacuum tube helped bring jazz to a mass audience, and it also helped amplify the Nuremberg rallies. How you ultimately feel about these transformations—Are we ultimately better off thanks to the invention of the vacuum tube?—will depend on your own belief systems about politics and social change.

I should mention one additional element of the book’s focus: The “we” in this book, and in its title, is largely the “we” of North Americans and Europeans. The story of how China or Brazil got to now would be a different one, and every bit as interesting. But the European/North American story, while finite in its scope, is nonetheless of wider relevance because certain critical experiences—the rise of the scientific method, industrialization—happened in Europe first, and have now spread across the world. (Why they happened in Europe first is of course one of the most interesting questions of all, but it’s not one this book tries to answer.) Those enchanted objects of everyday life—those lightbulbs and lenses and audio recordings—are now a part of life just about everywhere on the planet; telling the story of the past thousand years from their perspective should be of interest no matter where you happen to live. New innovations are shaped by geopolitical history; they cluster in cities and trading hubs. But in the long run, they don’t have a lot of patience for borders and national identities, never more so than now in our connected world.

I have tried to adhere to this focus because, within these boundaries, the history I’ve written here is in other respects as expansive as possible. Telling the story of our ability to capture and transmit the human voice, for instance, is not just a story about a few brilliant inventors, the Edisons and Bells whose names every schoolchild has already memorized. It’s also a story about eighteenth-century anatomical drawings of the human ear, the sinking of the Titanic, the civil rights movement, and the strange acoustic properties of a broken vacuum tube. This is an approach I have elsewhere called “long zoom” history: the attempt to explain historical change by simultaneously examining multiple scales of experience—from the vibrations of sound waves on the eardrum all the way out to mass political movements. It may be more intuitive to keep historical narratives on the scale of individuals or nations, but on some fundamental level, it is not accurate to remain within those boundaries. History happens on the level of atoms, the level of planetary climate change, and all the levels in between. If we are trying to get the story right, we need an interpretative approach that can do justice to all those different levels.

The physicist Richard Feynman once described the relationship between aesthetics and science in a similar vein:

I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say “Look how beautiful it is,” and I’ll agree. Then he says “I as an artist can see how beautiful this is but you as a scientist take this all apart and it becomes a dull thing,” and I think that he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is . . . I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which shows that a science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don’t understand how it subtracts.

There is something undeniably appealing about the story of a great inventor or scientist—Galileo and his telescope, for instance—working his or her way toward a transformative idea. But there is another, deeper story that can be told as well: how the ability to make lenses also depended on the unique quantum mechanical properties of silicon dioxide and on the fall of Constantinople. Telling the story from that long-zoom perspective doesn’t subtract from the traditional account focused on Galileo’s genius. It only adds.

Marin County, California

February 2014

1. Glass

Roughly 26 million years ago, something happened over the sands of the Libyan Desert, the bleak, impossibly dry landscape that marks the eastern edge of the Sahara. We don’t know exactly what it was, but we do know that it was hot. Grains of silica melted and fused under an intense heat that must have been at least a thousand degrees. Silicon dioxide, the compound those grains are made of, has a number of curious chemical traits. Like H2O, it forms crystals in its solid state and melts into a liquid when heated. But silicon dioxide has a much higher melting point than water: in its pure, crystalline form it stays solid until roughly 3,100 degrees Fahrenheit, not 32. The truly peculiar thing about silicon dioxide, though, is what happens when it cools. Liquid water will happily re-form the crystals of ice if the temperature drops back down again. But molten silicon dioxide is so viscous that its molecules cannot rearrange themselves back into the orderly structure of crystal before they lock in place. Instead, it forms a new substance that exists in a strange limbo between solid and liquid, a substance human beings have been obsessed with since the dawn of civilization. When those superheated grains of sand cooled down below their melting point, a vast stretch of the Libyan Desert was coated with a layer of what we now call glass.

About ten thousand years ago, give or take a few millennia, someone traveling through the desert stumbled across a large fragment of this glass. We don’t know anything more about that fragment, only that it must have impressed just about everyone who came into contact with it, because it circulated through the markets and social networks of early civilization, until it ended up as a centerpiece of a brooch, carved into the shape of a scarab beetle. It sat there undisturbed for more than three thousand years, until archeologists unearthed it in 1922 while exploring the tomb of an Egyptian ruler. Against all odds, that small sliver of silicon dioxide had found its way from the Libyan Desert into the burial chamber of Tutankhamun.

Glass first made the transition from ornament to advanced technology during the height of the Roman Empire, when glassmakers figured out ways to make the material sturdier and less cloudy than naturally forming glass like that of King Tut’s scarab. Glass windows were built during this period for the first time, laying the groundwork for the shimmering glass towers that now populate city skylines around the world. The visual aesthetics of drinking wine emerged as people consumed it in semitransparent glass vessels and stored it in glass bottles. But, in a way, the early history of glass is relatively predictable: craftsmen figured out how to melt the silica into drinking vessels or windowpanes, exactly the sort of typical uses we instinctively associate with glass today. It wasn’t until the next millennium, and the fall of another great empire, that glass became what it is today: one of the most versatile and transformative materials in all of human culture.

Pectoral in gold cloisonné with semiprecious stones and glass paste, with winged scarab, symbol of resurrection, in center, from the tomb of Pharaoh Tutankhamun

THE SACKING of Constantinople in 1204 was one of those historical quakes that send tremors of influence rippling across the globe. Dynasties fall, armies surge and retreat, the map of the world is redrawn. But the fall of Constantinople also triggered a seemingly minor event, lost in the midst of that vast reorganization of religious and geopolitical dominance and ignored by most historians of the time. A small community of glassmakers from Turkey sailed westward across the Mediterranean and settled in Venice, where they began practicing their trade in the prosperous new city growing out of the marshes on the shores of the Adriatic Sea.

Roman glass containers for ointments, first–second century AD

It was one of a thousand migrations set in motion by Constantinople’s fall, but looking back over the centuries, it turned out to be one of the most significant. As the glassmakers settled into the canals and crooked streets of Venice, at that point arguably the most important hub of commercial trade in the world, their skills at blowing glass quickly created a new luxury good for the merchants of the city to sell around the globe. But lucrative as it was, glassmaking was not without its liabilities. Glassmaking furnaces had to burn at temperatures near 1,000 degrees Celsius, and Venice was a city built almost entirely out of wooden structures. (The classic stone Venetian palaces would not be built for another few centuries.) The glassmakers had brought a new source of wealth to Venice, but they had also brought the less appealing habit of burning down the neighborhood.

In 1291, in an effort to both retain the skills of the glassmakers and protect public safety, the city government sent the glassmakers into exile once again, only this time their journey was a short one—a mile across the Venetian Lagoon to the island of Murano. Unwittingly, the Venetian doges had created an innovation hub: by concentrating the glassmakers on a single island the size of a small city neighborhood, they triggered a surge of creativity, giving birth to an environment that possessed what economists call “information spillover.” The density of Murano meant that new ideas were quick to flow through the entire population. The glassmakers were in part competitors, but their family lineages were heavily intertwined. There were individual masters in the group that had more talent or expertise than the others, but in general the genius of Murano was a collective affair: something created by sharing as much as by competitive pressures.

A section of a fifteenth-century map of Venice, showing the island of Murano

By the first years of the next century, Murano had become known as the Isle of Glass, and its ornate vases and other exquisite glassware became status symbols throughout Western Europe. (The glassmakers continue to work their trade today, many of them direct descendants of the original families that emigrated from Turkey.) It was not exactly a model that could be directly replicated in modern times: mayors looking to bring the creative class to their cities probably shouldn’t consider forced exile and borders armed with the death penalty. But somehow it worked. After years of trial and error, experimenting with different chemical compositions, the Murano glassmaker Angelo Barovier took seaweed rich in potassium oxide and manganese, burned it to create ash, and then added these ingredients to molten glass. When the mixture cooled, it created an extraordinarily clear type of glass. Struck by its resemblance to the clearest rock crystals of quartz, Barovier called it cristallo. This was the birth of modern glass.

WHILE GLASSMAKERS such as Barovier were brilliant at making glass transparent, we didn’t understand scientifically why glass is transparent until the twentieth century. Most materials absorb the energy of light. On a subatomic level, electrons orbiting the atoms that make up the material effectively “swallow” the energy of an incoming photon of light, causing those electrons to gain energy. But electrons can gain or lose energy only in discrete steps, known as “quanta,” and the size of those steps varies from material to material. Silicon dioxide happens to have very large steps, which means that the energy of a single photon of visible light is not sufficient to bump the electrons up to the higher level of energy. Instead, the light passes through the material. (Most ultraviolet light, however, does have enough energy to be absorbed, which is why you can’t get a suntan through a glass window.) But light doesn’t simply pass through glass; it can also be bent and distorted, or even broken up into its component wavelengths. Glass could be used to change the look of the world, by bending light in precise ways. This turned out to be even more revolutionary than simple transparency.
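To make those “steps” concrete, here is a minimal back-of-the-envelope sketch (mine, not the book’s): it computes photon energy as E = hc/λ and compares it with a band gap of roughly 9 eV, a commonly cited figure for pure silica. The gap value and the sample wavelengths are illustrative assumptions.

```python
# Back-of-the-envelope sketch: why visible light passes through silica.
# Assumes a ~9 eV energy "step" (band gap) for pure SiO2 -- an illustrative
# figure, not a value taken from the book.

PLANCK_EV_S = 4.135667e-15   # Planck's constant, in eV*s
LIGHT_M_S = 2.998e8          # speed of light, in m/s
SILICA_GAP_EV = 9.0          # assumed band gap of pure silica, in eV

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h * c / wavelength, expressed in electron volts."""
    return PLANCK_EV_S * LIGHT_M_S / (wavelength_nm * 1e-9)

for label, nm in [("red", 700), ("violet", 400), ("near UV", 300), ("deep UV", 120)]:
    energy = photon_energy_ev(nm)
    verdict = "absorbed" if energy >= SILICA_GAP_EV else "passes through"
    print(f"{label:8s} {nm:4d} nm -> {energy:5.2f} eV: {verdict}")
```

On these numbers, visible photons carry only about 1.8 to 3.1 eV, far short of the gap, so they sail through; only deep-ultraviolet photons clear it. (Everyday window glass blocks suntan-range UV at much lower energies than pure silica would, because of absorption by its soda-lime additives.)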

In the monasteries of the twelfth and thirteenth centuries, monks laboring over religious manuscripts in candlelit rooms used curved chunks of glass as a reading aid. They would run what were effectively bulky magnifiers over the page, enlarging the Latin inscriptions. No one is sure exactly when or where it happened, but somewhere around this time in Northern Italy, glassmakers came up with an innovation that would change the way we see the world, or at least clarify it: shaping glass into small disks that bulge in the center, placing each one in a frame, and joining the frames together at the top, creating the world’s first spectacles.
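For a rough sense of the optics, here is a minimal sketch (mine, not the book’s; the numbers are illustrative, not historical prescriptions). A disk that bulges in the center is a converging lens, and its added power, measured in diopters (1 / focal length in meters), is what pulls a farsighted reader’s focus back to the page:

```python
# Why a center-bulging (convex) lens helps a farsighted reader.
# Thin-lens approximation; numbers are illustrative, not historical.

def required_power_diopters(near_point_m: float, reading_distance_m: float) -> float:
    """Extra converging power (in diopters, 1/m) needed to pull a receded
    near point back to a comfortable reading distance."""
    return 1.0 / reading_distance_m - 1.0 / near_point_m

# An eye that can focus no closer than 1 m, aiming to read at 25 cm:
print(required_power_diopters(1.0, 0.25))  # -> 3.0 diopters
```

In modern terms, off-the-shelf reading glasses in the one-to-three-diopter range do exactly this job.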

Those early spectacles were called roidi da ogli, meaning “disks for the eyes.” Thanks to their resemblance to lentil beans—lentes in Latin—the disks themselves came to be called “lenses.” For several generations, these ingenious new devices were almost exclusively the province of monastic scholars. The condition of “hyperopia”—farsightedness—was widely distributed through the population, but most people didn’t notice that they suffered from it, because they didn’t read. For a monk, straining to translate Lucretius by the flickering light of a candle, the need for spectacles was all too apparent. But the general population—the vast majority of them illiterate—had almost no occasion to discern tiny shapes like letterforms as part of their daily routine. People were farsighted; they just didn’t have any real reason to notice that they were farsighted. And so spectacles remained rare and expensive objects.

The earliest image of a monk with glasses, 1342

What changed all of that, of course, was Gutenberg’s invention of the printing press in the 1440s. You could fill a small library with the amount of historical scholarship that has been published documenting the impact of the printing press, the creation of what Marshall McLuhan famously called “the Gutenberg galaxy.” Literacy rates rose dramatically; subversive scientific and religious theories routed around the official channels of orthodox belief; popular amusements like the novel and printed pornography became commonplace. But Gutenberg’s great breakthrough had another, less celebrated effect: it made a massive number of people aware for the first time that they were farsighted. And that revelation created a surge in demand for spectacles.

What followed was one of the most extraordinary cases of the hummingbird effect in modern history. Gutenberg made printed books relatively cheap and portable, which triggered a rise in literacy, which exposed a flaw in the visual acuity of a sizable part of the population, which then created a new market for the manufacture of spectacles. Within a hundred years of Gutenberg’s invention, thousands of spectacle makers around Europe were thriving, and glasses became the first piece of advanced technology—since the invention of clothing in prehistoric times—that ordinary people would regularly wear on their bodies.

But the coevolutionary dance did not stop there. Just as the nectar of flowering plants encouraged a new kind of flight in the hummingbird, the economic incentive created by the surging market for spectacles engendered a new pool of expertise. Europe was not just awash in lenses, but also in ideas about lenses. Thanks to the printing press, the Continent was suddenly populated by people who were experts at manipulating light through slightly convex pieces of glass. These were the hackers of the first optical revolution. Their experiments would inaugurate a whole new chapter in the history of vision.

Fifteenth-century glasses

In 1590, in the small town of Middelburg in the Netherlands, father and son spectacle makers Hans and Zacharias Janssen experimented with lining up two lenses, not side by side like spectacles, but in line with each other, magnifying the objects they observed, thereby inventing the microscope. Within seventy-five years, the British scientist Robert Hooke had published his groundbreaking illustrated volume Micrographia, with gorgeous hand-drawn images re-creating what Hooke had seen through his microscope. Hooke analyzed fleas, wood, leaves, even his own frozen urine. But his most influential discovery came by carving off a thin sheet of cork and viewing it through the microscope lens. “I could exceeding plainly perceive it to be all perforated and porous, much like a Honey-comb,” Hooke wrote, “but that the pores of it were not regular; yet it was not unlike a Honey-comb in these particulars . . . these pores, or cells, were not very deep, but consisted of a great many little Boxes.” With that sentence, Hooke gave a name to one of life’s fundamental building blocks—the cell—leading the way to a revolution in science and medicine. Before long the microscope would reveal the invisible colonies of bacteria and viruses that both sustain and threaten human life, which in turn led to modern vaccines and antibiotics.

The Flea (engraving from Robert Hooke’s Micrographia, London)

The microscope took nearly three generations to produce truly transformative science, but for some reason the telescope generated its revolutions more quickly. Twenty years after the invention of the microscope, a cluster of Dutch lensmakers, including Zacharias Janssen, more or less simultaneously invented the telescope. (Legend has it that one of them, Hans Lippershey, stumbled upon the idea while watching his children playing with his lenses.) Lippershey was the first to apply for a patent, describing a device “for seeing things far away as if they were nearby.” Within a year, Galileo got word of this miraculous new device, and modified the Lippershey design to reach a magnification of ten times normal vision. In January of 1610, just two years after Lippershey had filed for his patent, Galileo used the telescope to observe that moons were orbiting Jupiter, the first real challenge to the Aristotelian paradigm that assumed all heavenly bodies circled the Earth.
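For the arithmetic behind “ten times normal vision”: in a simple two-lens telescope of this kind, the angular magnification is the ratio of the objective’s focal length to the eyepiece’s. A minimal sketch (mine, not the book’s), with made-up focal lengths chosen only to produce the ratio:

```python
# Angular magnification of a simple two-lens telescope: M = f_objective / f_eyepiece.
# The focal lengths below are illustrative stand-ins, not Galileo's actual optics.

def magnification(f_objective_mm: float, f_eyepiece_mm: float) -> float:
    """Angular magnification of a two-lens (Galilean-style) telescope."""
    return f_objective_mm / f_eyepiece_mm

# A long, weak objective paired with a short, strong eyepiece:
print(magnification(980.0, 98.0))  # -> 10.0, roughly the power Galileo reached
```

A longer objective or a shorter eyepiece raises the power, which is one reason later telescopes grew so dramatically in length.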

This is the strange parallel history of Gutenberg’s invention. It has long been associated with the scientific revolution, for several reasons. Pamphlets and treatises from alleged heretics like Galileo could circulate ideas outside the censorious limits of the Church, ultimately undermining its authority; at the same time, the system of citation and reference that evolved in the decades after Gutenberg’s Bible became an essential tool in applying the scientific method. But Gutenberg’s creation advanced the march of science in another, less familiar way: it expanded possibilities of lens design, of glass itself. For the first time, the peculiar physical properties of silicon dioxide were not just being harnessed to let us see things that we could already see with our own eyes; we could now see things that transcended the natural limits of human vision.

The lens would go on to play a pivotal role in nineteenth- and twentieth-century media. It was first utilized by photographers to focus beams of light on specially treated paper that captured images, then by filmmakers to both record and subsequently project moving images for the first time. Starting in the 1940s, we began coating glass with phosphor and firing electrons at it, creating the hypnotic images of television. Within a few years, sociologists and media theorists were declaring that we had become a “society of the image,” the literate Gutenberg galaxy giving way to the blue glow of the TV screen and the Hollywood glamour shot. Those transformations emerged out of a wide range of innovations and materials, but all of them, in one way or another, depended on the unique ability of glass to transmit and manipulate light.

An early microscope designed by Robert Hooke, 1665

To be sure, the story of the modern lens and its impact on media is not terribly surprising. There’s an intuitive line that you can follow from the lenses of the first spectacles, to the lens of a microscope, to the lens of a camera. Yet glass would turn out to have another bizarre physical property, one that even the master glassblowers of Murano had failed to exploit.

AS PROFESSORS GO, the physicist Charles Vernon Boys was apparently a lousy one. H. G. Wells, who was briefly one of Boys’s students at London’s Royal College of Science, later described him as “one of the worst teachers who has ever turned his back on a restive audience. . . . [He] messed about with the blackboard, galloped through an hour of talk, and bolted back to the apparatus in his private room.”

But what Boys lacked in teaching ability he made up for in experimental physics, with a particular gift for designing and building scientific instruments. In 1887, as part of his physics experiments, Boys wanted to create a very fine thread of glass to measure the effects of delicate physical forces on objects. He had an idea that he could use a thin fiber of glass as a balance arm. But first he had to make one.

Hummingbird effects sometimes happen when an innovation in one field exposes a flaw in some other technology (or, in the case of the printed book, in our own anatomy) that can be corrected only by another discipline altogether. But sometimes the effect arrives thanks to a different kind of breakthrough: a dramatic increase in our ability to measure something, and an improvement in the tools we build for measuring. New ways of measuring almost always imply new ways of making. Such was the case with Boys’s balance arm. But what made Boys such an unusual figure in the annals of innovation is the decidedly unorthodox tool he used in pursuit of this new measuring device. To create his thin fiber of glass, Boys built a special crossbow in his laboratory, and created lightweight arrows (or bolts) for it. To one bolt he attached the end of a glass rod with sealing wax. Then he heated the glass until it softened, and fired the bolt. As the bolt hurtled toward its target, it pulled a tail of fiber from the molten glass clinging to the crossbow. In one of his shots, Boys produced a thread of glass that stretched almost ninety feet long.

Charles Vernon Boys standing in a laboratory, 1917

• Shortlist, 2014 PEN/E. O. Wilson Literary Science Writing Award
Praise for Steven Johnson

“A great science writer.” — Bill Clinton, speaking at the Health Matters conference

“Mr. Johnson, who knows a thing or two about the history of science, is a first-rate storyteller.” — The New York Times

“You’re apt to find yourself exhilarated…Johnson is not composing an etiology of particular inventions, but doing something broader and more imaginative…I particularly like the cultural observations Johnson draws along the way…[he] has a deft and persuasive touch…[a] graceful and compelling book.” — The New York Times Book Review

“Johnson is a polymath. . . .  [It’s] exhilarating to follow his unpredictable trains of thought. To explain why some ideas upend the world, he draws upon many disciplines: chemistry, social history, geography, even ecosystem science.” — Los Angeles Times

“Steven Johnson is a maven of the history of ideas... How We Got to Now is readable, entertaining, and a challenge to any jaded sensibility that has become inured to the everyday miracles all around us.” — The Guardian

“[Johnson's] point is simple, important and well-timed: During periods of rapid innovation, there is always tumult as citizens try to make sense of it....Johnson is an engaging writer, and he takes very complicated and disparate subjects and makes their evolution understandable.” — The Washington Post

“Through a series of elegant books about the history of technological innovation, Steven Johnson has become one of the most persuasive advocates for the role of collaboration in innovation….Mr. Johnson's erudition can be quite gobsmacking.” — The Wall Street Journal

“An unbelievable book…it’s an innovative way to talk about history.” — Jon Stewart

"What makes this book such a mind-expanding read is Johnson’s ability to appreciate human advancement as a vast network of influence, rather than a simple chain of one invention leading to another, and result is nothing less than a celebration of the human mind." — The Daily Beast

“Fascinating…it’s an amazing book!” — CBS This Morning

“A full three cheers for Steven Johnson. He is, by no means, the only writer we currently have in our era of technological revolution who devotes himself to innovation, invention and creativity but he is, far and away, the most readable.” — The Buffalo News 

"The reader of How We Got to Now cannot fail to be impressed by human ingenuity, including Johnson’s, in determining these often labyrinthine but staggeringly powerful developments of one thing to the next." — San Francisco Chronicle

"A rapid but interesting tour of the history behind many of the comforts and technologies that comprise our world." — Christian Science Monitor

"How We Got to Now... offers a fascinating glimpse at how a handful of basic inventions--such as the measurement of time, reliable methods of sanitation, the benefits of competent refrigeration, glassmaking and the faithful reproduction of sound--have evolved, often in surprising ways." — Shelf Awareness 

"[Johnson] writes about science and technology elegantly and accessibly, he evinces an infectious delight in his subject matter...Each chapter is full of strange and fascinating connections." — Barnes and Noble Review

"From the sanitation engineering that literally raised nineteenth-century Chicago to the 23 men who partially invented the light bulb before Thomas Edison, [How We Got to Now] is a many-layered delight."— Nature Review

“A highly readable and fascinating account of science, invention, accident and genius that gave us the world we live in today.” —Minneapolis Star Tribune

 

Steven Johnson is the bestselling author of thirteen books, including Where Good Ideas Come From, How We Got to Now, The Ghost Map, and Extra Life. He’s the host and cocreator of the Emmy-winning PBS/BBC series How We Got to Now, the host of the podcast The TED Interview, and the author of the newsletter Adjacent Possible. He lives in Brooklyn, New York, and Marin County, California, with his wife and three sons.

About

Johnson explores the history of innovation over centuries, tracing facets of modern life from their creation by hobbyists, amateurs, and entrepreneurs to their unintended historical consequences. "You're apt to find yourself exhilarated....Johnson is not composing an etiology of particular inventions, but doing something broader and more imaginative....I particularly like the cultural observations Johnson draws along the way....[He] has a deft and persuasive touch....[A] graceful and compelling book."—The New York Times Book Review.

Excerpt

Introduction

A little more than two decades ago, the Mexican-American artist and philosopher Manuel De Landa published a strange and wonderful book called War in the Age of Intelligent Machines. The book was, technically speaking, a history of military technology, but it had nothing in common with what you might naturally expect from the genre. Instead of heroic accounts of submarine engineering written by some Naval Academy professor, De Landa’s book wove chaos theory, evolutionary biology, and French post-structuralist philosophy into histories of the conoidal bullet, radar, and other military innovations. I remember reading it as a grad student in my early twenties and thinking that it was one of those books that seemed completely sui generis, as though De Landa had arrived on Earth from some other intellectual planet. It seemed mesmerizing and deeply confusing at the same time.

De Landa began the book with a brilliant interpretative twist. Imagine, he suggested, a work of history written sometime in the future by some form of artificial intelligence, mapping out the history of the preceding millennium. “We could imagine,” De Landa argued, “that such a robot historian would write a different kind of history than would its human counterpart.” Events that loom large in human accounts—the European conquest of the Americas, the fall of the Roman Empire, the Magna Carta—would be footnotes from the robot’s perspective. Other events that seem marginal to traditional history—the toy automatons that pretended to play chess in the eighteenth century, the Jacquard loom that inspired the punch cards of early computing—would be watershed moments to the robot historian, turning points that trace a direct line to the present. “While a human historian might try to understand the way people assembled clockworks, motors and other physical contraptions,” De Landa explained, “a robot historian would likely place a stronger emphasis on the way these machines affected human evolution. The robot would stress the fact that when clockworks once represented the dominant technology on the planet, people imagined the world around them as a similar system of cogs and wheels.”

There are no intelligent robots in this book, alas. The innovations here belong to everyday life, not science fiction: lightbulbs, sound recordings, air-conditioning, a glass of clean tap water, a wristwatch, a glass lens. But I have tried to tell the story of these innovations from something like the perspective of De Landa’s robot historian. If the lightbulb could write a history of the past three hundred years, it too would look very different. We would see how much of our past was bound up in the pursuit of artificial light, how much ingenuity and struggle went into the battle against darkness, and how the inventions we came up with triggered changes that, at first glance, would seem to have nothing to do with lightbulbs.

This is a history worth telling, in part, because it allows us to see a world we generally take for granted with fresh eyes. Most of us in the developed world don’t pause to think how amazing it is that we drink water from a tap and never once worry about dying forty-eight hours later from cholera. Thanks to air-conditioning, many of us live comfortably in climates that would have been intolerable just fifty years ago. Our lives are surrounded and supported by a whole class of objects that are enchanted with the ideas and creativity of thousands of people who came before us: inventors and hobbyists and reformers who steadily hacked away at the problem of making artificial light or clean drinking water so that we can enjoy those luxuries today without a second thought, without even thinking of them as luxuries in the first place. As the robot historians would no doubt remind us, we are indebted to those people every bit as much as, if not more than, we are to the kings and conquerors and magnates of traditional history.

But the other reason to write this kind of history is that these innovations have set in motion a much wider array of changes in society than you might reasonably expect. Innovations usually begin life with an attempt to solve a specific problem, but once they get into circulation, they end up triggering other changes that would have been extremely difficult to predict. This is a pattern of change that appears constantly in evolutionary history. Think of the act of pollination: sometime during the Cretaceous age, flowers began to evolve colors and scents that signaled the presence of pollen to insects, who simultaneously evolved complex equipment to extract the pollen and, inadvertently, fertilize other flowers with pollen. Over time, the flowers supplemented the pollen with even more energy-rich nectar to lure the insects into the rituals of pollination. Bees and other insects evolved the sensory tools to see and be drawn to flowers, just as the flowers evolved the properties that attract bees. This is a different kind of survival of the fittest, not the usual zero-sum competitive story that we often hear in watered-down versions of Darwinism, but something more symbiotic: the insects and flowers succeed because they, physically, fit well with each other. (The technical term for this is coevolution.) The importance of this relationship was not lost on Charles Darwin, who followed up the publication of On the Origin of Species with an entire book on orchid pollination.

These coevolutionary interactions often lead to transformations in organisms that would seem to have no immediate connection to the original species. The symbiosis between flowering plants and insects that led to the production of nectar ultimately created an opportunity for much larger organisms—the hummingbirds—to extract nectar from plants, though to do that they evolved an extremely unusual form of flight mechanics that enables them to hover alongside the flower in a way that few birds can even come close to doing. Insects can stabilize themselves midflight because they have fundamental flexibility to their anatomy that vertebrates lack. Yet despite the restrictions placed on them by their skeletal structure, hummingbirds evolved a novel way of rotating their wings, giving power to the upstroke as well as the downstroke, enabling them to float midair while extracting nectar from a flower. These are the strange leaps that evolution makes constantly: the sexual reproduction strategies of plants end up shaping the design of a hummingbird’s wings. Had there been naturalists around to observe the insects first evolving pollination behavior alongside the flowering plants, they would have logically assumed that this strange new ritual had nothing to do with avian life. And yet it ended up precipitating one of the most astonishing physical transformations in the evolutionary history of birds.

The history of ideas and innovation unfolds the same way. Johannes Gutenberg’s printing press created a surge in demand for spectacles, as the new practice of reading made Europeans across the continent suddenly realize that they were farsighted; the market demand for spectacles encouraged a growing number of people to produce and experiment with lenses, which led to the invention of the microscope, which shortly thereafter enabled us to perceive that our bodies were made up of microscopic cells. You wouldn’t think that printing technology would have anything to do with the expansion of our vision down to the cellular scale, just as you wouldn’t have thought that the evolution of pollen would alter the design of a hummingbird’s wing. But that is the way change happens.

This may sound, at first blush, like a variation on the famous “butterfly effect” from chaos theory, where the flap of a butterfly’s wing in California ends up triggering a hurricane in the mid-Atlantic. But in fact, the two are fundamentally different. The extraordinary (and unsettling) property of the butterfly effect is that it involves a virtually unknowable chain of causality; you can’t map the link between the air molecules bouncing around the butterfly and the storm system brewing in the Atlantic. They may be connected, because everything is connected on some level, but it is beyond our capacity to parse those connections or, even harder, to predict them in advance. But something very different is at work with the flower and the hummingbird: while they are very different organisms, with very different needs and aptitudes, not to mention basic biological systems, the flower clearly influences the hummingbird’s physiognomy in direct, intelligible ways.

This book is then partially about these strange chains of influence, the “hummingbird effect.” An innovation, or cluster of innovations, in one field ends up triggering changes that seem to belong to a different domain altogether. Hummingbird effects come in a variety of forms. Some are intuitive enough: orders-of-magnitude increases in the sharing of energy or information tend to set in motion a chaotic wave of change that easily surges over intellectual and social boundaries. (Just look at the story of the Internet over the past thirty years.) But other hummingbird effects are more subtle; they leave behind less conspicuous causal fingerprints. Breakthroughs in our ability to measure a phenomenon—time, temperature, mass—often open up new opportunities that seem at first blush to be unrelated. (The pendulum clock helped enable the factory towns of the industrial revolution.) Sometimes, as in the story of Gutenberg and the lens, a new innovation creates a liability or weakness in our natural toolkit, that sets us out in a new direction, generating new tools to fix a “problem” that was itself a kind of invention. Sometimes new tools reduce natural barriers and limits to human growth, the way the invention of air-conditioning enabled humans to colonize the hotspots of the planet at a scale that would have startled our ancestors just three generations ago. Sometimes the new tools influence us metaphorically, as in the robot historian’s connection between the clock and the mechanistic view of early physics, the universe imagined as a system of “cogs and wheels.”

Observing hummingbird effects in history makes it clear that social transformations are not always the direct result of human agency and decision-making. Sometimes change comes about through the actions of political leaders or inventors or protest movements, who deliberately bring about some kind of new reality through their conscious planning. (We have an integrated national highway system in the United States in large part because our political leaders decided to pass the Federal-Aid Highway Act of 1956.) But in other cases, the ideas and innovations seem to have a life of their own, engendering changes in society that were not part of their creators’ vision. The inventors of air-conditioning were not trying to redraw the political map of America when they set about to cool down living rooms and office buildings, but, as we will see, the technology they unleashed on the world enabled dramatic changes in American settlement patterns, which in turn transformed the occupants of Congress and the White House.

I have resisted the understandable temptation to assess these changes with some kind of value judgment. Certainly this book is a celebration of our ingenuity, but just because an innovation happens, that doesn’t mean there aren’t, in the end, mixed consequences as it ripples through society. Most ideas that get “selected” by culture are demonstrably improvements in terms of local objectives: the cases where we have chosen an inferior technology or scientific principle over a more productive or accurate one are the exceptions that prove the rule. And even when we do briefly choose the inferior VHS over Betamax, before long we have DVDs that outperform either option. So when you look at the arc of history from that perspective, it does trend toward better tools, better energy sources, better ways to transmit information.

The problem lies with the externalities and unintended consequences. When Google launched its original search tool in 1999, it was a momentous improvement over any previous technique for exploring the Web’s vast archive. That was cause for celebration on almost every level: Google made the entire Web more useful, for free. But then Google started selling advertisements tied into the search requests it received, and within a few years, the efficiency of the searches (along with a few other online services like Craigslist) had hollowed out the advertising base of local newspapers around the United States. Almost no one saw that coming, not even the Google founders. You can make the argument—as it happens, I would probably make the argument—that the trade-off was worth it, and that the challenge from Google will ultimately unleash better forms of journalism, built around the unique opportunities of the Web instead of the printing press. But certainly there is a case to be made that the rise of Web advertising has been, all told, a negative development for the essential public resource of newspaper journalism. The same debate rages over just about every technological advance: Cars moved us more efficiently through space than did horses, but were they worth the cost to the environment or the walkable city? Air-conditioning allowed us to live in deserts, but at what cost to our water supplies?

This book is resolutely agnostic on these questions of value. Figuring out whether we think the change is better for us in the long run is not the same as figuring out how the change came about in the first place. Both kinds of figuring are essential if we are to make sense of history and to map our path into the future. We need to be able to understand how innovation happens in society; we need to be able to predict and understand, as best as we can, the hummingbird effects that will transform other fields after each innovation takes root. And at the same time we need a value system to decide which strains to encourage and which benefits aren’t worth the tangential costs. I have tried to spell out the full range of consequences with the innovations surveyed in this book, the good and the bad. The vacuum tube helped bring jazz to a mass audience, and it also helped amplify the Nuremberg rallies. How you ultimately feel about these transformations—Are we ultimately better off thanks to the invention of the vacuum tube?—will depend on your own belief systems about politics and social change.

I should mention one additional element of the book’s focus: The “we” in this book, and in its title, is largely the “we” of North Americans and Europeans. The story of how China or Brazil got to now would be a different one, and every bit as interesting. But the European/North American story, while finite in its scope, is nonetheless of wider relevance because certain critical experiences—the rise of the scientific method, industrialization—happened in Europe first, and have now spread across the world. (Why they happened in Europe first is of course one of the most interesting questions of all, but it’s not one this book tries to answer.) Those enchanted objects of everyday life—those lightbulbs and lenses and audio recordings—are now a part of life just about everywhere on the planet; telling the story of the past thousand years from their perspective should be of interest no matter where you happen to live. New innovations are shaped by geopolitical history; they cluster in cities and trading hubs. But in the long run, they don’t have a lot of patience for borders and national identities, never more so than now in our connected world.

I have tried to adhere to this focus because, within these boundaries, the history I’ve written here is in other respects as expansive as possible. Telling the story of our ability to capture and transmit the human voice, for instance, is not just a story about a few brilliant inventors, the Edisons and Bells whose names every schoolchild has already memorized. It’s also a story about eighteenth-century anatomical drawings of the human ear, the sinking of the Titanic, the civil rights movement, and the strange acoustic properties of a broken vacuum tube. This is an approach I have elsewhere called “long zoom” history: the attempt to explain historical change by simultaneously examining multiple scales of experience—from the vibrations of sound waves on the eardrum all the way out to mass political movements. It may be more intuitive to keep historical narratives on the scale of individuals or nations, but on some fundamental level, it is not accurate to remain within those boundaries. History happens on the level of atoms, the level of planetary climate change, and all the levels in between. If we are trying to get the story right, we need an interpretative approach that can do justice to all those different levels.

The physicist Richard Feynman once described the relationship between aesthetics and science in a similar vein:

I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say “Look how beautiful it is,” and I’ll agree. Then he says “I as an artist can see how beautiful this is but you as a scientist take this all apart and it becomes a dull thing,” and I think that he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is . . . I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which shows that a science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don’t understand how it subtracts.

There is something undeniably appealing about the story of a great inventor or scientist—Galileo and his telescope, for instance—working his or her way toward a transformative idea. But there is another, deeper story that can be told as well: how the ability to make lenses also depended on the unique quantum mechanical properties of silicon dioxide and on the fall of Constantinople. Telling the story from that long-zoom perspective doesn’t subtract from the traditional account focused on Galileo’s genius. It only adds.

Marin County, California

February 2014

1. Glass

Roughly 26 million years ago, something happened over the sands of the Libyan Desert, the bleak, impossibly dry landscape that marks the eastern edge of the Sahara. We don’t know exactly what it was, but we do know that it was hot. Grains of silica melted and fused under an intense heat that must have been at least a thousand degrees. The compounds of silicon dioxide they formed have a number of curious chemical traits. Like H2O, they form crystals in their solid state, and melt into a liquid when heated. But silicon dioxide has a much higher melting point than water: instead of 32 degrees Fahrenheit, you need temperatures north of 3,000. The truly peculiar thing about silicon dioxide, though, is what happens when it cools. Liquid water will happily re-form the crystals of ice if the temperature drops back down again. But silicon dioxide for some reason is incapable of rearranging itself back into the orderly structure of crystal. Instead, it forms a new substance that exists in a strange limbo between solid and liquid, a substance human beings have been obsessed with since the dawn of civilization. When those superheated grains of sand cooled down below their melting point, a vast stretch of the Libyan Desert was coated with a layer of what we now call glass.

About ten thousand years ago, give or take a few millennia, someone traveling through the desert stumbled across a large fragment of this glass. We don’t know anything more about that fragment, only that it must have impressed just about everyone who came into contact with it, because it circulated through the markets and social networks of early civilization until it ended up as the centerpiece of a brooch, carved into the shape of a scarab beetle. It sat there undisturbed for more than three thousand years, until archeologists unearthed it in 1922 while exploring the tomb of an Egyptian ruler. Against all odds, that small sliver of silicon dioxide had found its way from the Libyan Desert into the burial chamber of Tutankhamun.

Glass first made the transition from ornament to advanced technology during the height of the Roman Empire, when glassmakers figured out ways to make the material sturdier and less cloudy than naturally forming glass like that of King Tut’s scarab. The first glass windows were built during this period, laying the groundwork for the shimmering glass towers that now populate city skylines around the world. The visual aesthetics of drinking wine emerged as people consumed it in semitransparent glass vessels and stored it in glass bottles. But, in a way, the early history of glass is relatively predictable: craftsmen figured out how to melt the silica into drinking vessels or windowpanes, exactly the sort of uses we instinctively associate with glass today. It wasn’t until the next millennium, and the fall of another great empire, that glass became what it is today: one of the most versatile and transformative materials in all of human culture.

Pectoral in gold cloisonné with semiprecious stones and glass paste, with winged scarab, symbol of resurrection, in center, from the tomb of Pharaoh Tutankhamun

THE SACKING of Constantinople in 1204 was one of those historical quakes that send tremors of influence rippling across the globe. Dynasties fall, armies surge and retreat, the map of the world is redrawn. But the fall of Constantinople also triggered a seemingly minor event, lost in the midst of that vast reorganization of religious and geopolitical dominance and ignored by most historians of the time. A small community of glassmakers from Turkey sailed westward across the Mediterranean and settled in Venice, where they began practicing their trade in the prosperous new city growing out of the marshes on the shores of the Adriatic Sea.

Roman glass containers for ointments, first to second century AD (photographed circa 1900)

It was one of a thousand migrations set in motion by Constantinople’s fall, but looking back over the centuries, it turned out to be one of the most significant. As the glassmakers settled into the canals and crooked streets of Venice, at that point arguably the most important hub of commercial trade in the world, their skills at blowing glass quickly created a new luxury good for the merchants of the city to sell around the globe. But lucrative as it was, glassmaking was not without its liabilities. Melting silicon dioxide required furnaces burning at temperatures near 1,000 degrees Celsius, and Venice was a city built almost entirely out of wooden structures. (The classic stone Venetian palaces would not be built for another few centuries.) The glassmakers had brought a new source of wealth to Venice, but they had also brought the less appealing habit of burning down the neighborhood.

In 1291, in an effort both to retain the skills of the glassmakers and to protect public safety, the city government sent the glassmakers into exile once again, only this time their journey was a short one—a mile across the Venetian Lagoon to the island of Murano. Unwittingly, the Venetian doges had created an innovation hub: by concentrating the glassmakers on a single island the size of a small city neighborhood, they triggered a surge of creativity, giving birth to an environment that possessed what economists call “information spillover.” The density of Murano meant that new ideas were quick to flow through the entire population. The glassmakers were in part competitors, but their family lineages were heavily intertwined. There were individual masters in the group who had more talent or expertise than the others, but in general the genius of Murano was a collective affair: something created by sharing as much as by competitive pressures.

A section of a fifteenth-century map of Venice, showing the island of Murano

By the first years of the next century, Murano had become known as the Isle of Glass, and its ornate vases and other exquisite glassware became status symbols throughout Western Europe. (The glassmakers continue to work their trade today, many of them direct descendants of the original families that emigrated from Turkey.) It was not exactly a model that could be directly replicated in modern times: mayors looking to bring the creative class to their cities probably shouldn’t consider forced exile and borders armed with the death penalty. But somehow it worked. After years of trial and error, experimenting with different chemical compositions, the Murano glassmaker Angelo Barovier took seaweed rich in potassium oxide and manganese, burned it to create ash, and then added these ingredients to molten glass. When the mixture cooled, it created an extraordinarily clear type of glass. Struck by its resemblance to the clearest rock crystals of quartz, Barovier called it cristallo. This was the birth of modern glass.

WHILE GLASSMAKERS such as Barovier were brilliant at making glass transparent, we didn’t understand scientifically why glass is transparent until the twentieth century. Most materials absorb the energy of light. On a subatomic level, electrons orbiting the atoms that make up the material effectively “swallow” the energy of the incoming photon of light, causing those electrons to gain energy. But electrons can gain or lose energy only in discrete steps, known as “quanta,” and the size of the steps varies from material to material. Silicon dioxide happens to have very large steps, which means that the energy of a single photon of visible light is not sufficient to bump its electrons up to the next level. Instead, the light passes through the material. (Most ultraviolet light, however, does have enough energy to be absorbed, which is why you can’t get a suntan through a glass window.) But light doesn’t simply pass through glass; it can also be bent and distorted, or even broken up into its component wavelengths. Glass could be used to change the look of the world, by bending light in precise ways. This turned out to be even more revolutionary than simple transparency.
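For readers who want to see the arithmetic behind that explanation, here is a minimal sketch, not from the book, comparing the energy of individual photons with the energy step of silicon dioxide. The step size of roughly 9 electron volts is an assumed textbook figure for pure silica, used here purely for illustration:

```python
# A minimal sketch of the quantum arithmetic behind glass's transparency.
# The ~9 eV energy "step" for pure silica is an assumed textbook figure,
# not a number taken from the book.

PLANCK_EV_S = 4.135667e-15   # Planck's constant, in eV * seconds
LIGHT_SPEED_M_S = 2.998e8    # speed of light, in meters per second
SILICA_STEP_EV = 9.0         # assumed minimum energy an electron can absorb

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon, E = h * c / wavelength, in electron volts."""
    return PLANCK_EV_S * LIGHT_SPEED_M_S / (wavelength_nm * 1e-9)

for label, nm in [("red light", 700), ("violet light", 400),
                  ("deep ultraviolet", 120)]:
    energy = photon_energy_ev(nm)
    verdict = "absorbed" if energy >= SILICA_STEP_EV else "passes through"
    print(f"{label} ({nm} nm): {energy:.1f} eV -> {verdict}")
```

Under these assumptions, visible photons carry only about 1.8 to 3.1 electron volts, far short of the step, so they sail through; only deep-ultraviolet photons clear the threshold. (Ordinary window glass is not pure silica, and its additives absorb at much lower energies, which is why the nearer ultraviolet that causes sunburn is blocked in practice.)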

In the monasteries of the twelfth and thirteenth centuries, monks laboring over religious manuscripts in candlelit rooms used curved chunks of glass as a reading aid. They would run what were effectively bulky magnifiers over the page, enlarging the Latin inscriptions. No one is sure exactly when or where it happened, but somewhere around this time in Northern Italy, glassmakers came up with an innovation that would change the way we see the world, or at least clarify it: shaping glass into small disks that bulge in the center, placing each one in a frame, and joining the frames together at the top, creating the world’s first spectacles.

Those early spectacles were called roidi da ogli, meaning “disks for the eyes.” Thanks to their resemblance to lentil beans—lentes in Latin—the disks themselves came to be called “lenses.” For several generations, these ingenious new devices were almost exclusively the province of monastic scholars. The condition of “hyperopia”—farsightedness—was widely distributed through the population, but most people didn’t notice that they suffered from it, because they didn’t read. For a monk, straining to translate Lucretius by the flickering light of a candle, the need for spectacles was all too apparent. But the general population—the vast majority of them illiterate—had almost no occasion to discern tiny shapes like letterforms as part of their daily routine. People were farsighted; they just didn’t have any real reason to notice that they were farsighted. And so spectacles remained rare and expensive objects.
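To put rough numbers on the monk’s predicament, here is a small sketch of the thin-lens arithmetic behind a reading aid. The distances are illustrative assumptions, not historical measurements: imagine an eye that cannot focus on anything closer than one meter, and a manuscript held at a reading distance of twenty-five centimeters.

```python
# A hedged sketch of the converging-lens arithmetic behind a reading aid.
# The near-point and reading distances below are illustrative assumptions.

def reading_lens_power(near_point_m: float, reading_dist_m: float = 0.25) -> float:
    """Lens power in diopters (1/meters) that lets an eye whose closest
    point of sharp focus is near_point_m see a page held at reading_dist_m."""
    return 1.0 / reading_dist_m - 1.0 / near_point_m

power = reading_lens_power(near_point_m=1.0)
print(f"required power: {power:+.1f} diopters "
      f"(focal length {1.0 / power:.2f} m)")
# -> required power: +3.0 diopters (focal length 0.33 m)
```

The positive sign is the whole story: a farsighted reader needs a converging lens, which is exactly what the bulging, lentil-shaped disks of the first spectacles provided.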

The earliest image of a monk with glasses, 1342

What changed all of that, of course, was Gutenberg’s invention of the printing press in the 1440s. You could fill a small library with the amount of historical scholarship that has been published documenting the impact of the printing press, the creation of what Marshall McLuhan famously called “the Gutenberg galaxy.” Literacy rates rose dramatically; subversive scientific and religious theories routed around the official channels of orthodox belief; popular amusements like the novel and printed pornography became commonplace. But Gutenberg’s great breakthrough had another, less celebrated effect: it made a massive number of people aware for the first time that they were farsighted. And that revelation created a surge in demand for spectacles.

What followed was one of the most extraordinary cases of the hummingbird effect in modern history. Gutenberg made printed books relatively cheap and portable, which triggered a rise in literacy, which exposed a flaw in the visual acuity of a sizable part of the population, which then created a new market for the manufacture of spectacles. Within a hundred years of Gutenberg’s invention, thousands of spectacle makers around Europe were thriving, and glasses became the first piece of advanced technology—since the invention of clothing in Neolithic times—that ordinary people would regularly wear on their bodies.

But the coevolutionary dance did not stop there. Just as the nectar of flowering plants encouraged a new kind of flight in the hummingbird, the economic incentive created by the surging market for spectacles engendered a new pool of expertise. Europe was not just awash in lenses, but also in ideas about lenses. Thanks to the printing press, the Continent was suddenly populated by people who were experts at manipulating light through slightly convex pieces of glass. These were the hackers of the first optical revolution. Their experiments would inaugurate a whole new chapter in the history of vision.

Fifteenth-century glasses

In 1590 in the small town of Middelburg in the Netherlands, father-and-son spectacle makers Hans and Zacharias Janssen experimented with lining up two lenses, not side by side like spectacles but in line with each other, magnifying the objects they observed, and thereby invented the microscope. Within seventy years, the British scientist Robert Hooke had published his groundbreaking illustrated volume Micrographia, with gorgeous hand-drawn images re-creating what Hooke had seen through his microscope. Hooke analyzed fleas, wood, leaves, even his own frozen urine. But his most influential discovery came from carving off a thin sheet of cork and viewing it through the microscope lens. “I could exceeding plainly perceive it to be all perforated and porous, much like a Honey-comb,” Hooke wrote, “but that the pores of it were not regular; yet it was not unlike a Honey-comb in these particulars . . . these pores, or cells, were not very deep, but consisted of a great many little Boxes.” With that sentence, Hooke gave a name to one of life’s fundamental building blocks—the cell—leading the way to a revolution in science and medicine. Before long the microscope would reveal the invisible colonies of bacteria and viruses that both sustain and threaten human life, which in turn led to modern vaccines and antibiotics.

The Flea (engraving from Robert Hooke’s Micrographia, London)

The microscope took nearly three generations to produce truly transformative science, but for some reason the telescope generated its revolutions more quickly. Twenty years after the invention of the microscope, a cluster of Dutch lensmakers, including Zacharias Janssen, more or less simultaneously invented the telescope. (Legend has it that one of them, Hans Lippershey, stumbled upon the idea while watching his children playing with his lenses.) Lippershey was the first to apply for a patent, describing a device “for seeing things far away as if they were nearby.” Within a year, Galileo got word of this miraculous new device, and modified the Lippershey design to reach a magnification of ten times normal vision. In January of 1610, just two years after Lippershey had filed for his patent, Galileo used the telescope to observe that moons were orbiting Jupiter, the first real challenge to the Aristotelian paradigm that assumed all heavenly bodies circled the Earth.
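The optics of Galileo’s improvement reduce to a single ratio. In a Galilean telescope, a convex objective lens is paired with a concave eyepiece, and the angular magnification is the objective’s focal length divided by the magnitude of the eyepiece’s. A back-of-the-envelope sketch, with assumed focal lengths rather than Galileo’s actual figures:

```python
# A hedged sketch of the magnification arithmetic for a Galilean telescope
# (convex objective + concave eyepiece). The focal lengths are assumed
# illustrative values, not Galileo's actual measurements.

def galilean_magnification(objective_focal_mm: float,
                           eyepiece_focal_mm: float) -> float:
    """Angular magnification = f_objective / |f_eyepiece|."""
    return objective_focal_mm / abs(eyepiece_focal_mm)

# e.g. a 980 mm objective paired with a -98 mm (concave) eyepiece:
print(galilean_magnification(980.0, -98.0))  # -> 10.0, the power Galileo reached
```

Stretching that same ratio, by grinding a longer-focus objective or a shorter-focus eyepiece, is how telescope makers pushed past ten power in the decades that followed.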

This is the strange parallel history of Gutenberg’s invention. It has long been associated with the scientific revolution, for several reasons. Pamphlets and treatises from alleged heretics like Galileo could circulate ideas outside the censorious limits of the Church, ultimately undermining its authority; at the same time, the system of citation and reference that evolved in the decades after Gutenberg’s Bible became an essential tool in applying the scientific method. But Gutenberg’s creation advanced the march of science in another, less familiar way: it expanded the possibilities of lens design, and of glass itself. For the first time, the peculiar physical properties of silicon dioxide were not just being harnessed to let us see things that we could already see with our own eyes; we could now see things that transcended the natural limits of human vision.

The lens would go on to play a pivotal role in nineteenth- and twentieth-century media. It was used first by photographers to focus beams of light on specially treated paper that captured images, then by filmmakers to record, and subsequently project, moving images for the first time. Starting in the 1940s, we began coating glass with phosphor and firing electrons at it, creating the hypnotic images of television. Within a few years, sociologists and media theorists were declaring that we had become a “society of the image,” the literate Gutenberg galaxy giving way to the blue glow of the TV screen and the Hollywood glamour shot. Those transformations emerged out of a wide range of innovations and materials, but all of them, in one way or another, depended on the unique ability of glass to transmit and manipulate light.

An early microscope designed by Robert Hooke, 1665

To be sure, the story of the modern lens and its impact on media is not terribly surprising. There’s an intuitive line that you can follow from the lenses of the first spectacles, to the lens of a microscope, to the lens of a camera. Yet glass would turn out to have another bizarre physical property, one that even the master glassblowers of Murano had failed to exploit.

AS PROFESSORS GO, the physicist Charles Vernon Boys was apparently a lousy one. H. G. Wells, who was briefly one of Boys’s students at London’s Royal College of Science, later described him as “one of the worst teachers who has ever turned his back on a restive audience. . . . [He] messed about with the blackboard, galloped through an hour of talk, and bolted back to the apparatus in his private room.”

But what Boys lacked in teaching ability he made up for in his gift for experimental physics, designing and building scientific instruments. In 1887, as part of his physics experiments, Boys wanted to create a very fine shard of glass to measure the effects of delicate physical forces on objects. He had an idea that he could use a thin fiber of glass as a balance arm. But first he had to make one.

Hummingbird effects sometimes happen when an innovation in one field exposes a flaw in some other technology (or, in the case of the printed book, in our own anatomy) that can be corrected only by another discipline altogether. But sometimes the effect arrives thanks to a different kind of breakthrough: a dramatic increase in our ability to measure something, and an improvement in the tools we build for measuring. New ways of measuring almost always imply new ways of making. Such was the case with Boys’s balance arm. But what made Boys such an unusual figure in the annals of innovation is the decidedly unorthodox tool he used in pursuit of this new measuring device. To create his thin string of glass, Boys built a special crossbow in his laboratory and created lightweight arrows, or bolts, for it. To one bolt he attached the end of a glass rod with sealing wax. Then he heated the glass until it softened, and he fired the bolt. As the bolt hurtled toward its target, it pulled a tail of fiber from the molten glass clinging to the crossbow. In one of his shots, Boys produced a thread of glass that stretched almost ninety feet long.

Charles Vernon Boys standing in a laboratory, 1917

Awards

  • Shortlisted for the 2014 PEN/E. O. Wilson Literary Science Writing Award

Praise

Praise for Steven Johnson

“A great science writer.” — Bill Clinton, speaking at the Health Matters conference

“Mr. Johnson, who knows a thing or two about the history of science, is a first-rate storyteller.” — The New York Times

“Johnson is a polymath. . . .  [It’s] exhilarating to follow his unpredictable trains of thought. To explain why some ideas upend the world, he draws upon many disciplines: chemistry, social history, geography, even ecosystem science.” — Los Angeles Times

“Steven Johnson is a maven of the history of ideas... How We Got to Now is readable, entertaining, and a challenge to any jaded sensibility that has become inured to the everyday miracles all around us.” — The Guardian

“[Johnson’s] point is simple, important and well-timed: During periods of rapid innovation, there is always tumult as citizens try to make sense of it....Johnson is an engaging writer, and he takes very complicated and disparate subjects and makes their evolution understandable.” — The Washington Post

“Through a series of elegant books about the history of technological innovation, Steven Johnson has become one of the most persuasive advocates for the role of collaboration in innovation….Mr. Johnson’s erudition can be quite gobsmacking.” — The Wall Street Journal

“An unbelievable book…it’s an innovative way to talk about history.” — Jon Stewart

"What makes this book such a mind-expanding read is Johnson’s ability to appreciate human advancement as a vast network of influence, rather than a simple chain of one invention leading to another, and result is nothing less than a celebration of the human mind." — The Daily Beast

“Fascinating…it’s an amazing book!” — CBS This Morning

“A full three cheers for Steven Johnson. He is, by no means, the only writer we currently have in our era of technological revolution who devotes himself to innovation, invention and creativity but he is, far and away, the most readable.” — The Buffalo News 

"The reader of How We Got to Now cannot fail to be impressed by human ingenuity, including Johnson’s, in determining these often labyrinthine but staggeringly powerful developments of one thing to the next." — San Francisco Chronicle

"A rapid but interesting tour of the history behind many of the comforts and technologies that comprise our world." — Christian Science Monitor

"How We Got to Now... offers a fascinating glimpse at how a handful of basic inventions--such as the measurement of time, reliable methods of sanitation, the benefits of competent refrigeration, glassmaking and the faithful reproduction of sound--have evolved, often in surprising ways." — Shelf Awareness 

"[Johnson] writes about science and technology elegantly and accessibly, he evinces an infectious delight in his subject matter...Each chapter is full of strange and fascinating connections." — Barnes and Noble Review

"From the sanitation engineering that literally raised nineteenth-century Chicago to the 23 men who partially invented the light bulb before Thomas Edison, [How We Got to Now] is a many-layered delight."— Nature Review

“A highly readable and fascinating account of science, invention, accident and genius that gave us the world we live in today.” — Minneapolis Star Tribune

Author

Steven Johnson is the bestselling author of thirteen books, including Where Good Ideas Come From, How We Got to Now, The Ghost Map, and Extra Life. He’s the host and cocreator of the Emmy-winning PBS/BBC series How We Got to Now, the host of the podcast The TED Interview, and the author of the newsletter Adjacent Possible. He lives in Brooklyn, New York, and Marin County, California, with his wife and three sons.
