
The Invisible Century: Einstein, Freud and the Search for Hidden Universes
Richard Panek


A book that offers fresh perspectives on the scientific developments of the past hundred years through the complementary work of two of the century’s greatest thinkers, Einstein and Freud. At the turn of the century there was a widespread assumption in scientific circles that the pursuit of knowledge was nearing its end and that all available evidence had been exhausted. By 1916, however, both Einstein and Freud had exploded the myth by leading exploration into the science of the invisible and the unconscious. These men were more than just contemporaries – their separate pursuits were in fact complementary. Freud’s science of psychoanalysis found its cosmological counterpart in the astronomy of invisible light pioneered by Einstein. Together they questioned the little inconsistencies of Newton’s ordered cosmos to reveal a different reality, a natural order that was anything but ordered, a cosmos that was volatile and vast – an organism alive in time. These men inspired a fundamental shift in the history of human thought. They began a revolution that is still in progress and provided one of the past century’s greatest contributions to the history of science.













THE INVISIBLE CENTURY


EINSTEIN, FREUD

AND

THE SEARCH FOR

HIDDEN UNIVERSES




RICHARD PANEK










DEDICATION


Once again, for Meg Wolitzer, with love




Epigraph


Q: “Is the invisible visible?”

A: “Not to the eye.”



—from an 1896 interview with

Wilhelm Conrad Röntgen,

the discoverer of the X-ray




CONTENTS






Cover

Title Page

Dedication

Epigraph

Prologue

I. MIND OVER MATTER

One: More Things in Heaven

Two: More Things on Earth

Three: Going to Extremes

II. MATTER OVER MIND

Four: A Leap of Faith

Five: The Descent of a Man

III. THE TREMBLING OF THE DEWDROP

Six: A Discourse Concerning Two New Sciences

Notes

Bibliography

Index

Acknowledgments

About the Author

Other Books by Richard Panek

Copyright

About the Publisher




PROLOGUE


They met only once. During the New Year’s holiday season of 1927, Albert Einstein called on Sigmund Freud, who was staying at the home of one of his sons in Berlin. Einstein, at forty-seven, was the foremost living symbol of the physical sciences, while Freud, at seventy, was his equal in the social sciences, but the evening was hardly a meeting of the minds. When a friend wrote Einstein just a few weeks later suggesting that he allow himself to undergo psychoanalysis, Einstein answered, “I regret that I cannot accede to your request, because I should like very much to remain in the darkness of not having been analyzed.” Or, as Freud wrote to a friend regarding Einstein immediately after their meeting in Berlin, “He understands as much about psychology as I do about physics, so we had a very pleasant talk.”

Freud and Einstein shared a native language, German, but their respective professional vocabularies had long since diverged, to the point that they now seemed virtually irreconcilable. Even so, Freud and Einstein had more in common than they might have imagined. Many years earlier, at the beginning of their respective scientific investigations, they both had reached what would prove to be the same pivotal juncture. Each had been exploring one of the foremost problems in his field. Each had found himself confronting an obstacle that had defeated everyone else exploring the problem. In both their cases, this obstacle was the same: a lack of more evidence. Yet rather than retreat from this absence and look elsewhere or concede defeat and stop looking, Einstein and Freud had kept looking anyway.

Looking, after all, was what scientists did. It was what defined the scientific method. It was what had precipitated the Scientific Revolution, some three centuries earlier. In 1610, Galileo Galilei reported that upon looking through a new instrument into the celestial realm he saw forty stars in the Pleiades cluster where previously everyone else had seen only six, five hundred new stars in the constellation of Orion, “a congeries of innumerable stars” in another stretch of the night sky, and then, around Jupiter, moons. Beginning in 1674, Antoni van Leeuwenhoek reported that upon looking at terrestrial objects through another new instrument he saw “upwards of one million living creatures” in a drop of water, “animals” numbering more than “there were human beings in the United Netherlands” in the white matter on his gums, and then, in the plaque from the mouth of an old man who’d never cleaned his teeth, “an unbelievably great number of living animalcules, a-swimming more nimbly than any I had ever seen up to this time.”

Such discoveries were not without precedent. They came, in fact, at the end of the Age of Discovery. If an explorer of the seas could discover a New World, then why should an explorer of the heavens not discover new worlds? And if those same sea voyages proved that the Earth could house innumerable creatures previously unknown, then why not earth itself or water or flesh?

What was without precedent in the discoveries of Galileo and Leeuwenhoek, however, was the means by which they reached them. Between 1595 and 1609, spectacle makers in the Netherlands had fit combinations of lenses together in two new instruments that performed similar, though distinct, optical tricks. The combination of lenses in one instrument made distant objects appear nearer, the combination in the other made small objects appear larger; and for the first time in history investigators of nature had at their disposal tools that served as an extension of one of the five human senses. As much as the discoveries themselves, what revolutionized science over the course of the seventeenth century was a new means of discovery and what it signified: There is more to the universe than meets the naked eye.

Who knew? After all, these instruments might easily have revealed nothing beyond what we already knew to be there, and what we already knew to be there might easily have been all there was to know. The naked eye alone didn’t have to be inadequate as a means of investigating nature; the invention of these instruments didn’t have to open two new frontiers. But it was; and they did.

For thousands of years, the number of objects in the heavens had been fixed at six thousand or so. Now, there were … more. Since the Creation, or at least since the Flood, the number of kinds of creatures on Earth, however incalculable as a practical matter, had nonetheless been fixed. Now, there were … more. “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy”: When Shakespeare wrote these words in 1598 or 1599, at the very cusp of the turn of the seventeenth century, he was referring to the understandable assumption among practitioners of what would soon become the old philosophy that much of what was as yet unknown must remain unknown forever, and for the next three hundred years the practitioners of what they themselves came to call the New Philosophy frequently cited it as the last time in history that someone could have written so confidently about civilization’s continuing ignorance of, and estrangement from, the universe.

Because now all you had to do was look. Through the telescope you could see farther than with the naked eye alone and, by seeing farther, discover new worlds without. Through the microscope you could see deeper than with the naked eye alone and, by seeing deeper, discover new worlds within. By seeing more than meets the naked eye and then seeing yet more, you could discover more.

How much more? It was a logical question for natural philosophers to ask themselves, and the search for an answer that ensued over the next three centuries was nothing if not logical: a systematic pursuit of the truths of nature to the outermost and innermost realms of the universe, until by the turn of the twentieth century the search had reached the very limits of human perception even with the aid of optical instruments, and investigators of nature had begun to wonder: What now? What if there was no more more?

Specifically: Was the great scientific program that had begun three centuries earlier coming to a close? Or would an increasingly fractional examination of the existing evidence continue to reward investigators with further truths?

Some researchers, however, unexpectedly found themselves confronting a third option. Pushing the twin frontiers of scientific research—the inner universe and the outer—they had arrived at an impasse. Then, they’d spanned it. They’d kept looking until they discovered an entirely new kind of scientific evidence: evidence that no manner of mere looking was going to reveal; evidence that lay beyond the realm of the visible; evidence that was, to all appearances, invisible.

The invisible had always been part of humanity’s interactions with nature. Attempting to explain otherwise inexplicable phenomena, the ancients had invented spirits, forms and gods. In the Western world during the medieval era, those various causes of mysterious effects had coalesced into the idea of one God. Even after the inception of the modern era and the inauguration of the scientific method, investigators working at the two extremes of the universe had resorted to two new forms of the invisible. When Isaac Newton reached the limits of his understanding of the outer universe, he had invoked the concept of gravity. When René Descartes reached the limits of his understanding of the inner universe, he had invoked the concept of consciousness.

But by the turn of the twentieth century the kind of invisibility that certain investigators were beginning to invoke was new. These were scientists for whom any appeal to the supernatural, superstitious, or metaphysical would have been anathema. But now, here it was: evidence that was invisible yet scientifically incontrovertible, to their minds, anyway.

Although Einstein and Freud didn’t initiate this second scientific revolution all by themselves, they did come to represent it and in large measure embody it. This is the story of how their respective investigations reached unprecedented realms, relativity and the unconscious; how their further pursuits led to the somewhat inadvertent creation of two new sciences, cosmology and psychoanalysis; and how in Einstein’s case, a new way of doing science has become the dominant methodology throughout the sciences, while in Freud’s case, an alternative way of doing science has become the dominant exception, the key to the very question of what qualifies an intellectual endeavor as a science. This is also the story of what cosmology and psychoanalysis have allowed us to explore: universes, without and within, as vast in comparison to the ones they replaced as those had been to the ones they replaced.

And in that regard Einstein and Freud’s is a story, just as Galileo and Leeuwenhoek’s was, of a revolution in thought. The difference between our vision of the universe and its nineteenth-century counterpart has turned out to be not a question of what had distinguished each previous era from the preceding one for nearly three hundred years: of seeing farther or deeper, of seeing more—of perspective, of how much we see. Instead, it is a question of seeing itself—of perception, of how we see. It is also, then, a question of thinking about seeing—of conception, of how we think about how we see. As much as any discovery, this is what has changed the way we try to make sense of our existence in the twenty-first century—the way we struggle to investigate our circumstances as sentient creatures in a particular setting: Who are these creatures? What is this setting? It is a new means of discovery—the significance of which, a hundred years later, we are still only beginning to comprehend: that there is more to the universe than we would ever find, if all we ever did was look.



I MIND OVER MATTER




ONE MORE THINGS IN HEAVEN


Look.

And so the boy looked. His father had something to show him. It was small and round like a miniature clock, the boy saw, but instead of two hands pointing outward from the center of the face it had one iron needle. As the boy continued to look, his father rotated the object. He turned it first one way, then the other, and as he did so the most amazing thing happened. No matter how the boy’s father moved the object, the needle continued pointing in the same direction—not the same direction relative to the rest of the device, as the boy might have expected, but the same direction relative to … something else. Something out there, outside the device, that the boy couldn’t see. The needle was shaking now. It trembled with the effort. Some six decades later, when Albert Einstein recalled this scene, he couldn’t remember whether he had been four or five at the time, but the lesson he’d learned that day he could still summon and summarize crisply: “Something deeply hidden had to be behind things.”

Some things deeply hidden, actually. As the boy grew older, he learned what a few of those deeply hidden somethings were: magnetism, the subject of his father’s demonstration on that memorable day; electricity; and the relationship between the two. He learned that the existence of a relationship between magnetism and electricity was still so recent a discovery that nobody yet understood how it worked, and then he learned that within his lifetime physicists had demonstrated that this relationship manifested itself to our eyes as light. And he learned that even though nobody yet understood how light worked, what everybody did know was that it traveled along the biggest deeply hidden something of them all, one that had so far eluded the greatest minds of the age but one that was now, as a prominent physicist of the era proclaimed, “all but in our grasp.”

That something was the ether. Einstein himself sought it, in a paper he wrote in 1895, “Über die Untersuchung des Ätherzustandes im magnetischen Felde” (“On the Investigation of the State of the Ether in a Magnetic Field”). This contribution to the literature, however, wasn’t so much original scholarship as a five-finger exercise that wound up pretty much reiterating current thinking, since Einstein was only sixteen at the time and, as he cautioned prospective readers (such as the doting uncle to whom he sent the paper), “I was completely lacking in materials that would have enabled me to delve into the subject more deeply than by merely meditating on it.”

Still, it was a start. Over the following decade, Einstein would graduate from self-consciously precocious adolescent, speculating beyond his abilities, to willfully arrogant student at the Swiss Polytechnic in Zurich, to humble (if not quite humbled) clerk at the Swiss Office for Intellectual Property in Bern, where he wound up in part because his professors had refused to write letters of recommendation for someone so dismissive of their authority. As Einstein reported in a letter in May 1901, “From what I have been told, I am not in the good books of any of my former teachers.” Yet even as a patent clerk, Einstein continued to seek the ether, for the same reason that physicists everywhere were seeking the ether. When electromagnetic waves of light departed from a star that was there and hadn’t yet arrived here, they had to be traveling along something. So: What was it? Find that something, as physicists understood, and maybe electricity and magnetism and the relationship between the two wouldn’t seem so deeply hidden after all.

Among the seekers of the ether, one was without equal: the Belfast-born physicist William Thomson, eventually Baron Kelvin of Largs. As one of the most prominent and illustrious physicists of the century, Lord Kelvin had made the pursuit of the ether the primary focus of his scientific investigations for literally the entire length of his long career. He’d first thought he found it on November 28, 1846, during his initial term as a professor of natural philosophy at the University of Glasgow. He was mistaken. As he wrote a friend in 1896, on the occasion of the golden jubilee of his service to the university, a three-day celebration that attracted two thousand representatives of scientific societies and academies of higher learning from around the world, “I have not had a moment’s peace or happiness in respect to electromagnetic theory since Nov. 28, 1846.”

Part of the problem with the ether was how to picture it. “I never satisfy myself unless I can make a mechanical model of a thing,” Kelvin once told a group of students. “If I can make a mechanical model, I can understand it.” In one such demonstration that was a perennial favorite of his students, he would draw geometrical shapes on a piece of india rubber, stretch the rubber across the ten-inch mouth of a long brass funnel, and, having hung the funnel upside down over a tub, direct water from a supply pipe into the thin tube at the top. As the water collected in the mouth of the funnel, the india rubber bulged, and it drooped, and it gradually assumed the shape of a globule. Soon the blob had expanded to a width nearly double the diameter of the mouth from which it appeared to be emerging, and just when the rubber seemed unable to stretch any thinner, it did anyway. All the while Kelvin continued to lecture, calmly commenting on the subject of surface tension as well as on the transformations the simple Euclidean shapes on the rubber were now undergoing. Then, at precisely the moment Kelvin calculated that neither india rubber nor the ten benches of physics students could endure any greater tension, he would raise his pointer, poke the gelatinous mass hanging before him, and, turning to the class, announce, “The trembling of the dewdrop, gentlemen!”

The trembling of the dewdrop, the angling of the gas molecule, the orbiting of a planet: the least matter in the universe to the greatest, and all operating according to the same unifying laws. Here was the whole of modern science, in one easy lesson. More than two hundred years earlier, René Descartes had expressed the philosophical hope that a full description of the material universe would require nothing but matter and motion, and several decades after that Isaac Newton had expressed the physical principles that described the motion of matter. The rest, in a way, had been a process of simply filling in the blanks—plugging measurements of matter into equations for motions, and watching the universe tumble out piecemeal yet unmistakably all of a single great mechanistic piece. The lecture hall where for half a century Kelvin demonstrated his models was a monument of sorts to this vision: the triple-spiral spring vibrator he’d hung from one end of the blackboard; the thirty-foot pendulum, consisting of a steel wire and a twelve-pound cannonball, that he’d suspended from the apex of the dome roof; two clocks, those universal symbols of the workings of the universe. Matter and motion, motion and matter, one acting upon the other; causes leading inexorably to effects that, by dint of more and more rigorous and precise examination, were equally predictable and verifiable to whatever degree of accuracy anyone might care to name: Here was a cosmos complete, almost.

The exception was the ether. When numerous experiments in the early nineteenth century began showing that light travels in waves, physicists naturally tried to describe a substance capable of carrying those waves. The consensus: an absolutely elastic solid. For Cambridge physicist George Gabriel Stokes, that description suggested a combination of glue and water that would act as a conduit for rapid vibrations of waves and also allow the passage of slowly moving bodies. For British physicist Charles Wheatstone, it meant white beads, which he used in his Wheatstone wave machine of the early 1840s—a visual aid that vividly demonstrated how ether particles might move at right angles to a wave coursing through their midst and an inspiration for numerous similar teaching aids of the era.

And for Kelvin, “the nearest analogy I can give you,” as he once said during a lecture, “is this jelly which you see.” On other occasions, he might begin his demonstration with Scotch shoemakers’ wax. If he shaped the wax into a tuning fork or bell and struck it, a sound emanated. Then he would take that same sound-wave-conveying wax and suspend it in a glass jar filled with water. If he first placed corks under the substance, then laid bullets across the top of it, in time the positions of the objects would reverse themselves. The bullets would sink through the wax to the bottom while the corks would pop out the top. “The application of this to the luminiferous ether is immediate,” he concluded: a substance rigid enough to conduct waves traveling at fixed speeds in straight lines from one end of the universe to the other, if need be, yet porous enough not to block the passage of bullets, corks, or even—by the same application of scale that rendered minuscule dewdrops and giant rubber globules analogous—planets.

Not to block—but surely to impede? Surely at least to slow the passage of a planet? An elastic solid occupying all of space would have to present a degree of resistance to a (in the parlance of the day) “ponderable body” such as Earth. But to what degree precisely? In an effort to determine the exact extent of the luminiferous (or light-bearing) ether’s drag on Earth, the American physicist Albert A. Michelson devised an experiment that he first conducted in Berlin in 1881. His idea was to send two beams of light along paths at 90-degree angles to each other. Presumably the beam following one path would be fighting against the current as Earth plowed through the ether, while the beam on the other path would be swimming with the current. Michelson designed an ingenious instrument, which he called an interferometer, that he hoped would allow him to make measurements that, through a series of calculations, would determine the velocity of the Earth through the ether. The Berlin reading, however, suffered from the vibrations of the horse cabs passing outside the Physical Institute. So he moved his apparatus to the relative isolation of the Astrophysical Observatory in Potsdam, where he repeated the experiment. The reading, to his surprise, indicated nothing.

Which was impossible. An interaction between a massive planet and even the most elastic of solids surely couldn’t pass undetected or remain undetectable. “One thing we are sure of,” Kelvin told an audience in Philadelphia three years later, while on his way to lecture at Johns Hopkins University in Baltimore, “and that is the reality and substantiality of the luminiferous ether.” And if experiments of unprecedented refinement and sophistication failed to detect it, there was only one reasonable alternative course of action. As Kelvin wrote in his preface to the published volume of those Baltimore Lectures, “It is to be hoped that farther experiments will be made.”

They were. In 1887 Michelson tried again, this time with the help of the chemist Edward W. Morley. Together they constructed an interferometer far more elaborate and sensitive than the ones Michelson had used in Germany, secured it in an essentially tremor-free basement at the Case School of Applied Science in Cleveland, and set it floating on a bed of mercury for, literally, good measure. Michelson had in mind a specific number for the wavelength displacement he expected the ether would produce, and he further decided that a reading 10 percent of that number would conclusively indicate a null result. What he got was a reading of 5 percent of the displacement he thought the ether might produce—a blip attributable to observational error, if anything. Michelson found himself forced to reach the same conclusion he’d previously reported: “that the luminiferous ether is entirely unaffected by the motion of the matter which it permeates.”

“I cannot see any flaw,” said Kelvin of this experiment, in a lecture he delivered in the summer of 1900. “But a possibility of escaping from the conclusion which it seemed to prove may be found in a brilliant suggestion made independently by FitzGerald, and by Lorentz of Leiden.” Kelvin was referring to the physicists George Francis FitzGerald of Dublin, who had submitted a brief conjecture regarding the ether to the American journal Science in 1889, and Hendrik Antoon Lorentz, who in an 1892 paper and then in an 1895 book-length treatise had elaborated an entire argument along nearly identical lines: The ether compresses the molecules of the interferometer—as well as those of the Earth, for that matter—to the exact degree necessary to render a null result. In which case, the two beams of light in Cleveland actually did travel at two separate speeds, as the measurements of their multiple-mirror-deflected journeys would have shown, if only the machinery hadn’t contracted just enough to make up the difference. “Thus,” Lorentz concluded, “one would have to imagine that the motion of a solid body (such as a brass rod or the stone disc employed in the later experiments) through the resting ether exerts upon the dimensions of that body an influence which varies according to the orientation of the body with respect to the direction of motion.”

“An explanation was necessary, and was forthcoming; they always are,” the French mathematician and philosopher Henri Poincaré wrote of Lorentz in 1902 in his Science and Hypothesis; “hypotheses are what we lack the least.” Lorentz himself conceded as much. Two years later he proposed a mathematical basis for his argument while virtually sighing at the futility of the whole enterprise: “Surely this course of inventing special hypotheses for each new experimental result is somewhat artificial.”

Like other physicists at the time, Einstein thought about ways to describe the ether, as in the precocious paper he had sent to his uncle in 1895. Also like other physicists, Einstein thought about ways to detect the ether. During his second year at college, 1897–98, he proposed an experiment: “I predicted that if light from a source is reflected by a mirror,” he later recalled, “it should have different energies depending on whether it is propagated parallel or antiparallel to the direction of motion of the Earth.” In other words: the Michelson-Morley experiment, more or less—though news of that effort, a decade earlier, had reached Einstein only indirectly if at all, and then only as a passing reference in a paper he read. In any case, the particular professor he’d approached with this proposal treated it in “a stepmotherly fashion,” as Einstein reported bitterly in a letter. Then, during a brief but busy job-hunting period in 1901, after he’d left school but hadn’t yet secured a position at the patent office, Einstein proposed to a more receptive professor at the University of Zurich, “a very much simpler method of investigating the relative motion of matter against the luminiferous ether.” On this occasion it was Einstein who didn’t deliver. As he wrote to a friend, “If only relentless fate would give me the necessary time and peace!”

Like a few other physicists at the time, Einstein was even beginning to wonder just what purpose the ether served. What purpose it was supposed to serve was clear enough. Physicists had inferred the ether’s existence in order to make the discovery of light waves conform to the laws of mechanics. If the universe operated only through matter moving immediately adjacent matter in an endless succession of cause-and-effect ricochet shots—like balls on a billiard table, in the popular analogy of the day—then the ether would serve as the necessary matter facilitating the motion of waves of light across the vast and otherwise empty reaches of space. But to say that the ether is the substance along which electromagnetic waves must be moving because electromagnetic waves must be moving along something was as unsatisfactory a definition as it was circular. As Einstein concluded during this period in a letter to the fellow physics student who later became his first wife, Mileva Maric, “The introduction of the term ‘ether’ into the theories of electricity led to the notion of a medium of whose motion one can speak without being able, I believe, to associate a physical meaning with this statement.”

The problem of the ether was starting to seem more than a little familiar. It was, in a way, the same problem that had been haunting physics since the inception of the modern era three centuries earlier: space. To be precise, it was absolute space—a frame of reference against which, in theory, you could measure the motion of any matter in the universe.

For most of human history, such a concept would have been more or less meaningless, or at least superfluous. As long as Earth was standing still at the center of the universe, the center of the Earth was the rightful place toward which terrestrial objects must fall. After all, as Aristotle pointed out in establishing a comprehensive physics, that’s precisely what terrestrial objects did. An Earth in motion, however, presented another set of circumstances altogether, one that—as Galileo appreciated—required a whole other set of explanations.

Nicolaus Copernicus wasn’t the first to suggest that the Earth goes around the sun, not vice versa, but the mathematics in his 1543 treatise De revolutionibus orbium coelestium (On the Revolutions of Celestial Orbs) had the advantage of being comprehensive and even useful—for instance, in instituting the calendar reform of 1582. Still, for many natural philosophers its heliocentric thesis remained difficult, or at least politically unwise, to believe. Galileo, however, not only found it easy to believe but, in time, learned it had to be true because he had seen the evidence for himself, through a new instrument that made distant objects appear near. His evidence was not the mountains on the moon that he first observed in the autumn of 1609, though they did challenge one ancient belief, the physical perfection of heavenly bodies; nor the sight of far more stars than were visible with the naked eye, though they did hint that the two-dimensional celestial vault of old might possess a third dimension; not even his January 1610 discovery around Jupiter of “four wandering stars, known or observed by no one before us,” because all they proved was that Earth wasn’t unique as a host of moons or, therefore, as a center of rotation. Instead, what finally decided the matter for Galileo was the phases of Venus. From October to December 1610, Galileo mounted a nightly vigil to observe Venus as it mutated from “a round shape, and very small,” to “a semicircle” and much larger, to “sickle-shaped” and very large—exactly the set of appearances the planet would manifest if it were circling around, from behind the sun to in front of the sun, while also drawing nearer to Earth.

Galileo’s discovery of the phases of Venus didn’t definitively prove the existence of a sun-centered universe. It didn’t even necessarily disprove an Earth-centered universe. After all, just because Venus happens to revolve around the sun doesn’t mean that the sun itself can’t still revolve around Earth. But such a contortionistic interpretation of the cosmos—a Venus-encircled sun in turn circling Earth—had nothing to recommend it other than an undying allegiance to Earth’s central position in it. And so “Venus revolves around the Sun,” Galileo finally declared with virtual certainty, in a letter he wrote in January 1612 and published the following year, “just as do all the other planets”—a category of celestial object that, he could now state with a confidence verging on nonchalance, included the heretofore terrestrial-by-definition Earth.

An Earth spinning and speeding through space, however, required not only a rethinking of religious beliefs. It also required new interpretations of old physical data—a new physics. Galileo himself got to work on one, and in 1632 he published it: Dialogue Concerning the Two Chief World Systems. In arguing on behalf of a Copernican view of the universe, Galileo knew he was going to have to explain certain phenomena that in the Aristotelian view of the universe needed no further explanation. Actually, he was going to have to explain the absence of certain phenomena: If Earth were turning and if this turning Earth were orbiting the sun, as Copernicus contended, then wouldn’t birds be rent asunder, cannonballs be sent off course, and even simple stones, dropped from a modest height, be flung far from their points of departure, all according to the several motions of the planet?

No, Galileo said. And here’s why. He asked you, his reader, to imagine yourself on a dock, observing a ship anchored in a harbor. If someone at the top of the ship’s mast were to drop a stone, where would it land? Simple: at the base of the mast. Now imagine instead that the ship is moving in the water at a steady rate across your field of vision as you observe from the dock. If the person at the top of the ship’s mast were to drop another stone, where would it land now? At the base of the mast? Or some small distance back, behind the mast—a measurement corresponding to the distance on the water that the ship would have covered in the time between the release of the stone at the top of the mast and its arrival on the deck of the ship?

The intuitive, Aristotelian answer: some small distance back. The correct—and, Galileo argued, Copernican—answer: the base of the mast, because the movement of the ship and the movement of the stone together constitute a single motion. From the point of view of the person at the top of the mast, the motion of the stone alone might indeed seem a perpendicular drop—the kind that Aristotle argued a stone would make in seeking to return to its natural state in the universe. Fair enough. That’s what it would have to seem to someone standing on the steadily moving ship whose only knowledge of the motion of the Earth was that it stood still. That person would feel neither the motion of the Earth nor the motion of the ship and so would take into account only the motion of the stone. But for you, observing from the dock, the stone would be moving and the ship would be moving, and together those movements would make up a single system in motion. To you, the motion of the stone falling toward the ship would seem not a perpendicular drop—not at all an Aristotelian return to its natural state—but an angle. If you could trace the trajectory of the stone from the dock, it would just be geometry.

And vice versa. If, instead, you the observer standing on the dock were the one dropping a stone, then to you the motion of that stone relative to the Earth would appear perpendicular, because all you would be taking into account was the motion of the stone alone. That’s all Aristotle did—take into account only the motion of the stone. But from the point of view of the person at the top of the mast on the ship in the harbor, looking at you on the dock and taking into account the motion of the stone and the apparent motion of the dock together, the trajectory of the falling stone would describe an angle.

And there it is: a principle of relativity. Neither observer would have the right to claim to be absolutely at rest. The onboard observer would have as much right to claim that the ship was leaving the dock as that the dock was leaving the ship. Rather than standing still at the center of the cosmos, our position in the new physics was just the opposite: never at rest. After Galileo, everything in the universe was in motion relative to something else—ships to docks, moons to planets, planets to sun, sun (as astronomers would come to discover by the end of the eighteenth century) to the so-called fixed stars, those so-called fixed stars (as astronomers would come to discover by the middle of the nineteenth century) to one another, and, conceivably, our entire vast system of stars (as astronomers were trying to determine at the turn of the twentieth century) to other vast systems of stars.

Unless you counted the ether. For this reason alone, the ether was—as Einstein had first recognized as a teenager—at least somewhat objectionable. Not long after he’d written the ether paper that he’d sent to his uncle, Einstein found himself wandering the grounds at his school in Aarau, Switzerland, wondering what the presence of an absolute space would do to Galileo’s idea of relativity. If you were on board Galileo’s ship but belowdecks, in an enclosed compartment, you shouldn’t be able to detect whether you were moving or standing still, relative to the dock or anything else in the universe that wasn’t moving along with you. But if the ship were traveling at the speed of light through the ether, that’s just what you would be able to detect. You’d know you were the one traveling at the speed of light—rather than someone on the dock, for instance—because you’d see the light around you standing still.

By the early years of the twentieth century, Einstein had done only what other physicists of his era had done. He’d thought about ways to define the ether through mathematics. He’d thought about ways to detect the ether through experiments. He’d even begun to think about whether physics really needed an ether. But then, one night in May 1905, Einstein did what no other physicist of his era had done. He thought of a new way of thinking about the problem.

Einstein had been spending the evening with a longtime friend both from his student years and at the patent office, Michele Besso, the two of them talking, as they often did in their off-hours, about physics. In the preceding three years, Einstein had moved to Bern, gotten married, and fathered two children (one illegitimate, whom he and Mileva gave up for adoption). Yet all the while he’d been applying himself to the most pressing issues of contemporary physics, often in the company of his patent-office sounding board, Besso. On this particular occasion, Einstein had approached Besso for the express purpose of doing “battle” with a problem that had been plaguing him on and off for the past decade. After a lively discussion, Einstein returned home, where, all at once, he understood what he and everyone else who had been studying the situation had been overlooking all along.

“Thank you!” he greeted Besso the following day. “I have completely solved the problem.” The trouble with the current conception of the universe, he explained, wasn’t absolute space—or at least wasn’t only absolute space. It was absolute time.

“If, for example, I say that ‘the train arrives here at 7 o’clock,’ that means, more or less, ‘the pointing of the small hand of my watch to seven and the arrival of the train are simultaneous events.’” This sentence comes early in “Zur Elektrodynamik bewegter Körper” (“On the Electrodynamics of Moving Bodies”), the paper that Einstein completed and mailed to the Annalen der Physik six weeks later. In its audacious simplicity, even borderline simplemindedness, this sentence is deceptive, for with this description of one of the most mundane of human observations—one that just about any eight-year-old can make—Einstein pinpointed precisely what everyone else who had been studying the problem had missed: “time” is not universal or absolute; it is not sometimes universal and sometimes local or relative; it is only local.

The key was the speed of light. The fact that the speed of light is not infinite, as Aristotle and Descartes and so many other investigators of nature over the millennia had supposed, had been common knowledge since the late seventeenth century. So had its approximate value. In 1676, the Danish astronomer Ole Rømer used the data from years of observations at the Paris Observatory to determine that the timing of the eclipses of Jupiter’s innermost moon depended on where Jupiter was in its orbit relative to Earth. The eclipses came earlier when Earth was nearest Jupiter, later when Earth was farthest from Jupiter, suggesting that the eclipses didn’t happen at the very same moment we saw them happen. That, in fact, when we saw them depended on where they happened, nearer or farther. “This can only mean that light takes time for transmission through space,” Rømer concluded—140,000 miles per second, by the best estimates of the day.
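The arithmetic behind that estimate is worth a glance (the reconstruction here uses the historical figures, not numbers quoted from the book): the eclipses arrived roughly twenty-two minutes later when Earth was farthest from Jupiter than when it was nearest, a lag corresponding to the time light needs to cross the diameter of Earth’s orbit, about 186 million miles. Dividing one by the other gives

\[ \frac{186{,}000{,}000\ \text{miles}}{22 \times 60\ \text{seconds}} \approx 141{,}000\ \text{miles per second}, \]

in line with the 140,000-mile-per-second figure of the day.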

But the combination of these two factors—that the speed of light is incomprehensibly fast; that the speed of light is inarguably finite—didn’t begin to assume a literally astronomical dimension for another hundred years. Beginning in the 1770s, William Herschel (the same observer who proved that the sun is in motion relative to the fixed stars) began systematically exploring the so-called celestial vault—the ceiling of stars that astronomers had known since Galileo’s time must have a third dimension but that they still couldn’t help conceiving as anything except a flat surface. With every improvement in his telescopes, Herschel pushed his observations of stars to greater and greater depths in the sky or distances from Earth or—since the speed of light coming from the stars is finite, since it does take time to reach our eyes—farther and farther into the past. “I have looked further into space than ever human being did before me,” Herschel marveled in 1813, in his old age. “I have observed stars of which the light, it can be proved, must take two million years to reach the earth.”

Even that distance, however, would seem nearby if the speculations of some astronomers at the turn of the twentieth century turned out to be true. If certain smudges at the farthest reaches of the mightiest telescopes turned out to be systems of stars outside our own—other “island universes” altogether equal in size and magnitude to our own Milky Way—then when we looked at the starlight reaching us from them we might be seeing not Herschel’s previously unfathomable two million years into the past but two hundred million years or even two thousand million years. And so they would go, these meditations on the meaning of light, ever and ever outward, further and further pastward, if not necessarily ad infinitum, then at least, quite possibly, ad absurdum.

Now Einstein reversed that trajectory. Instead of considering the implications of looking farther and farther across the universe and thereby deeper and deeper into the past, he thought about the meaning of looking nearer and nearer—or, by the same reasoning, closer and closer to the present. Look near enough, he realized, and you’ll be seeing very close indeed to the present. But in only one place can you claim to be seeing the present—and then it is only your present.

It was this insight that allowed Einstein to endow the idea of time with an unprecedented immediacy, in both the positional and the temporal senses of the word: here and now: the arrival of a train and the hands of a watch. Because the train and the hands of the watch occupy the same location, they also occupy the same time. For an observer standing immediately adjacent to the train, that time is, by definition, the present: seven o’clock. But someone in a different location observing the arrival of that same train—that is, someone at some distance away receiving the image of the train, which has traveled by means of electromagnetic waves from the surface of the locomotive to the eyes of this second observer at the speed of light, an almost unimaginably high yet nonetheless finite velocity—wouldn’t be able to consider the arrival of the train simultaneous with its arrival for the first observer. If light did propagate instantaneously—if the speed of light were in fact infinite—then the two observers would be seeing the arrival of the train simultaneously. And indeed, it might very well seem to them as if they were, especially if (using the modern value for the speed of light as 186,282 miles, or 299,792 kilometers, per second) the other observer happens to be standing on a street corner that’s about two-millionths of a light-second (the distance that light travels in two-millionths of a second, or slightly less than 2000 feet) away rather than, less ambiguously, near a star that’s two million light-years (or slightly less than 12 quintillion miles) away. And yes, if you were the observer on the street corner in the same town, gazing down a hill at a slowing locomotive pulling into the station, the arrival of the train for all practical purposes might as well be happening at the same moment as its arrival for the observer on the platform.
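Both parenthetical figures are straightforward to verify (a quick check using the modern value of c, not a calculation from the text). Light covering 186,282 miles per second travels

\[ 186{,}282 \times 2\times10^{-6} \approx 0.37\ \text{miles} \approx 1{,}970\ \text{feet} \]

in two-millionths of a second, and a star two million light-years away, at about \(5.88\times10^{12}\) miles per light-year, lies at

\[ 2\times10^{6} \times 5.88\times10^{12} \approx 1.18\times10^{19}\ \text{miles}, \]

just under twelve quintillion.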

But what it is, is in your past.

Einstein was not in fact alone in recognizing the role that the velocity of light plays in the conception of time. Other physicists and philosophers had begun to note a paradox at the heart of the concept of simultaneity—that for two observers, the difference in distances has to translate into a difference in time. But where Einstein diverged from even the most radical of his contemporaries was in accepting as potentially decisive what the velocity of light is.

It was there in the math. In 1821, the British physicist Michael Faraday had decided to investigate reports from the Continent concerning electricity and magnetism by placing a magnet on a table in his basement workshop and sending an electrical impulse through a wire dangling over it. The wire began twirling, as if the electricity were sparking downward and the magnetism were influencing upward. This was, in effect, the first electric motor—forerunner of the dynamo, the invention that would drive the industrial revolution for the rest of the century, and the product that Einstein’s own father and uncle would manufacture as the family business. But not until the 1860s did the Scottish physicist James Clerk Maxwell manage to capture Faraday’s accomplishment in mathematical form, a series of equations with an unforeseen implication. Electromagnetic waves travel at the same speed as light (and therefore, Maxwell predicted, are light): 186,282 miles, or 299,792 kilometers, per second in a vacuum. Meaning … what? That it would be more than 186,282 miles per second if you were moving away from the source of light, or less than 186,282 miles per second if you were moving toward the source? Yes—according to Newton’s mechanics. Yet it never seemed to vary.
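Where the number comes from is no mystery. In modern notation (standard symbols, supplied here rather than drawn from the book), Maxwell’s equations fix the speed of an electromagnetic wave in empty space at

\[ c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}} \approx 299{,}792\ \text{kilometers per second}, \]

where \(\varepsilon_0\) and \(\mu_0\) are the measured electric and magnetic constants of the vacuum—a velocity built into the equations themselves, with no reference to the motion of whoever does the measuring.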

On a planet that was spinning; a spinning planet that was orbiting the sun; a spinning planet orbiting a sun that itself was moving in relation to other stars that were moving in relation to one another—in this setting that, as Copernicus and Galileo and Newton and Herschel and so many other astronomers and mathematicians and physicists and philosophers had so persuasively established, was never at rest and therefore wouldn’t be at rest in relation to a source of light outside itself—the answer to the question of the speed of light always came up exactly the same. Just as Aristotelian philosophers considering the descent of an onboard stone would have overlooked the motion of the ship, so maybe several generations of Galilean physicists had been overlooking properties of electromagnetism. Maybe what you needed to consider was the motion of the stone, the motion of the ship, and the motion of the medium by which we perceive both, and together those three elements would constitute a single system in motion.

A few years earlier, his friend Besso had given Einstein a copy of the Austrian physicist Ernst Mach’s Die Mechanik in ihrer Entwicklung (The Science of Mechanics). This work, Einstein later recalled, “exercised a profound influence upon me” because it questioned “mechanics as the final basis of all physical thinking.” The issue for Mach wasn’t whether mechanics had worked well over the past two centuries in describing the motions of matter; clearly, it had. The issue wasn’t even whether mechanics could answer all questions about the physical universe, as the Kelvins of the world were constantly trying to prove. Rather, the issue for Mach—the root of his objection to Newtonian mechanics—was that it raised some questions it couldn’t answer.

For instance, absolute space, the existence of which is necessary to measure absolute motions: On close reading, Newton’s definition of it turned out to be every bit as circular as the reigning definition of the ether. “Absolute motion,” Newton had written, “is the translation of a body from one absolute place into another.” And what is place? “Place is a part of space which a body takes up, and is according to the space, either absolute or relative.” So what, then, is absolute space? “Absolute space, in its own nature, without relation to anything external, remains always similar and immovable.” Newton anticipated some criticism: “It is indeed a matter of great difficulty to discover, and effectually to distinguish, the true motions of particular bodies from the apparent; because the parts of that immovable space, in which those motions are performed, do by no means come under the observation of our senses. Yet the thing is not altogether desperate,” he reassured the reader, “for we have some arguments to guide us, partly from the apparent motions, which are the differences of the true motions; partly from the forces, which are the causes and effects of the true motions.” And what are these true, or absolute, motions? See above.

“We join with the eminent physicist Thomson [later Lord Kelvin] in our reverence and admiration of Newton,” Mach wrote in 1883. “But we can only comprehend with difficulty his opinion that the Newtonian doctrines still remain the best and most philosophical foundation that can be given.” Not that Mach was proposing an alternative to Newtonian mechanics; not that he was even suggesting physics was in need of an alternative. Rather, he was trying to remind his fellow physicists that just because mechanics had come “historically first” in modern science didn’t mean that it had to be historically final. This was the argument that “shook” Einstein’s “dogmatic faith” in mechanics alone as the basis of the physical world, and now, in May 1905, this was the argument that led Einstein to wonder whether mechanics and electromagnetism together could accommodate a principle of relativity—whether a synthesis of those two systems might in fact be historically next.

He tried it. First Einstein proposed that just as Newton’s mechanics don’t allow observers either on a dock or on a ship to consider themselves to be the ones absolutely at rest, neither should electrodynamics and optics. “We shall raise this conjecture (whose content will hereafter be called ‘the principle of relativity’) to the status of a postulate,” he wrote in the second paragraph of his paper. Then he accepted the constancy of the speed of light in empty space as another given—a second postulate: that the speed of light in a vacuum is always the same “independent of the state of motion of the emitting body.”

And that was all. It worked. Einstein now had two mutually reinforcing postulates, “only apparently irreconcilable”: a principle of relativity, allowing us to conduct experiments involving light either on the ship or on the dock with equal validity; and a principle of constancy, allowing the ship (or the dock, for that matter) to approach the speed of light without any light onboard (or on the dock)—including electromagnetic waves bringing images from objects to our eyes—slowing to a stop and thereby revealing whether the ship (or dock) is the one “really” in motion. In which case, as Einstein promised his readers in that same second paragraph, the “introduction of a ‘light ether’ will prove to be superfluous, inasmuch as the view to be developed here will not require a ‘space at absolute rest’ endowed with special properties.”

The rest was math—high-school-level algebra, at that. Suppose that you’re standing on a dock watching our old Galilean ship, still anchored just offshore after all these centuries. And suppose that the ship is absolutely motionless in the water. And suppose that instead of dropping a stone, someone on board is dropping a light signal—sending a beam of light from the top of the mast to the deck. If you time this simple event and get an answer of, say, one second, then you know that the distance between the top of the mast and the deck must be the distance that light travels in one second, or 186,282 miles. (It’s a big ship.)

The complications begin, just as they did in Galileo’s day, once that ship lifts anchor and sets sail. Suppose that it’s moving at a constant speed across your line of sight. If the person at the top of the mast sends a second light signal in the same manner as the first, what will you see from your vantage on the dock? The Aristotelian answer: a streak of light heading straight for the center of the Earth and therefore landing some distance behind the base of the mast—a distance corresponding to how far the ship has traveled along the water during the signal’s journey. The Galilean answer: a streak of light heading straight for the base of the mast—which is the Einsteinian answer as well. From your point of view, the base of the mast will have moved out from under the top of the mast during the descent of the beam of light, just as it did during the descent of a stone. Which means the distance the light has traveled, from your point of view, has lengthened. It’s not 186,282 miles. It’s more.

How much more you can easily find out by measuring the time of its journey—and that’s where the Einsteinian interpretation begins to depart from the Galilean. What is velocity? Nothing but distance divided by time, whether inches divided by—or, in the vernacular, per—day or kilometers per hour or miles per second. But if we accept Einstein’s second postulate, then the velocity in question isn’t just 186,282 miles per second. It’s always 186,282 miles per second. It’s constant—indeed, a constant. In the equation “velocity equals distance divided by time,” this constant is over on one side of the equals sign, off by itself, humming along at its own imperturbable rate. On the opposite side of the equals sign are the parts of the equation that can vary, that are indeed the variables—distance and time, also known as miles and seconds. They can undergo as many permutations as you can imagine, as long as they continue to divide in such a way that the result is 186,282 miles per second, or the equivalent—372,564 miles per two seconds, or 558,846 miles per three seconds, or 1,862,820 miles per ten seconds, and so on. Change the distance, and you have to change the time.

You have to change the time.

For more than two centuries, though, you didn’t. Now, on an evening in May 1905, you suddenly did, because on that evening Einstein, having talked the problem through with his friend Besso, realized that he needed to take into account something he’d never before adequately considered: the “inseparable connection between time and the signal velocity.” Time was a variable, a measurement that passed at different rates according to where you were. To an observer on the dock, the second light signal would have had to last longer than one second. To an observer on the ship, however, the second light signal would have appeared to do what the first one had done, back when the ship was anchored in the water: travel straight down to the base of the mast, 186,282 miles away. For this observer, the distance wouldn’t have differed from one signal to the next, so the time wouldn’t have, either. The shipboard observer would be measuring one second while you on the dock would have been counting two seconds or three seconds or more, depending on the speed of the ship along the water. For this reason, you would have every right to say that clocks on board the ship were moving slowly. And there it is: a new principle of relativity.

Of course, such an effect wouldn’t become noticeable unless the ship were moving at a significant fraction of the speed of light. At more modest speeds, the Galilean interpretation holds to a high degree of accuracy; as Einstein later wrote, “it supplies us with the actual motion of the heavenly bodies with a delicacy of detail little short of wonderful.” Still, according to Einstein’s math, as long as a ship is in motion at all, the distance the light travels on its angular path would have to be greater than the simple perpendicular drop you would see when the ship is at rest relative to you, and therefore the time to cover that distance would have to be greater, too. Through similar reasoning, Einstein also established that for an observer in a relative state of rest, back on the dock, the measurement of the length of a rod aboard a moving ship would have to shorten in the direction of motion, and to grow shorter the faster the ship is moving relative to the dock. And vice versa: Someone on the supposedly moving ship would have every right to consider that system to be the one at rest, and you and your so-called resting system to be the one whose dimensions would also appear to have shortened, and whose time would also appear to have slowed.
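For readers who want to see the high-school algebra behind these claims, here is a minimal sketch of the standard light-clock calculation, with symbols chosen here for convenience (they are not the book's): let t_0 be the time the shipboard observer measures for the drop (one second in the example), so the mast's height is h = c t_0; let v be the ship's speed relative to the dock and t the time you measure from the dock. From the dock, the light's slanted path is the hypotenuse of a right triangle whose legs are the mast height and the ship's advance:

\[
(c\,t)^2 = (c\,t_0)^2 + (v\,t)^2 \quad \Longrightarrow \quad t = \frac{t_0}{\sqrt{1 - v^2/c^2}},
\]

which exceeds t_0 whenever v > 0 but appreciably so only as v approaches the speed of light. The companion result for the rod, reached by similar reasoning, is the shortening

\[
L = L_0 \sqrt{1 - v^2/c^2}
\]

in the direction of motion, where L_0 is the length measured aboard the ship.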

So which observer would be “right”? The observer on the ship, or you on the dock? The answer: Both—or, maybe more accurately, either, depending on who’s doing the measuring. But how much time passed really? How long is the rod really? The answer: There is no “really”—no absolute space, no ether, against which to measure the motions of all matter in the universe. There are only the relative motions of the two systems.

“For the rest of my life I want to reflect on what light is,” Einstein once said. If Einstein were correct, the universe wasn’t quite a clockwork mechanism; it didn’t function only according to the visible motions of matter. Instead it was electromagnetic, operating according to heretofore hidden principles. On a fundamental level, it was less a pocket watch than a compass.

Not that this new understanding of the universe was complete. Einstein knew that all he’d done was take into account the measurements of objects moving at uniform, or nonvarying, velocities relative to one another—a highly specialized situation. He hadn’t yet taken into account the measurements of objects moving at nonuniform, or varying, velocities relative to one another—a far more representative sampling of the universe as we know it.

Still, it was a start. In a way, Einstein’s light-centered universe was as physically distinct from the Galilean one he’d inherited as Galileo’s sun-centered universe was from the Aristotelian one he’d inherited. But like Galileo, Einstein knew his had to be true—or truer than the one it was replacing, anyway—because he had seen the evidence for himself, if only in his mind’s eye.




TWO: MORE THINGS ON EARTH


Listen.

And so the boy listened. His father had something to tell him. Hand in hand, they were going for a walk during which, in the manner they’d recently adopted, the father would attempt to impart to his son some lesson about life. On this occasion, the story concerned an incident that had happened to the father years earlier, on the streets of the city of Freiberg, the boy’s birthplace. The father, then a young man, had been walking along minding his own business when a stranger came up to him and in one swift motion knocked his new fur hat off his head, called him a Jew, and told him to get off the pavement. The boy dutifully listened to his father describe this scene, and he had to wonder: So what did you do? His father answered quietly that he had simply stepped into the roadway and picked up his cap. The boy and his father then walked along in silence. The boy was considering this answer. He knew his father was trying to tell him something about how times had changed and how the treatment of Jews was better today. But that’s not what the boy was thinking. Some three decades later, when Sigmund Freud recalled this scene, he couldn’t remember whether he had been ten or twelve at the time, but the impression he’d taken away from that encounter he could still summon and summarize drily: “This struck me as unheroic conduct on the part of the big, strong man who was holding the little boy by the hand.”

An impression, anyway. By the time Freud was committing this memory to paper, he was beginning to understand that any interpretation of the encounter on that long-ago day depended as much on what the boy had wanted to hear as on what the man had been trying to say, or even on what he, by now a father several times over, wished to believe about his father or himself, or about fathers and sons—depended, that is, on the vagaries of the human thought process. Not necessarily because of that conversation with his father (though maybe so; who knows?), it was the human thought process at the most basic level that Freud had grown up to explore: the pathways of nerves along which thoughts travel as they make their way through the brain. And so successful at tracing those paths were the neuroanatomists of Freud’s generation that they seemed to have reached, as one contemporary historian of science declared, “the very threshold of mind.”

That threshold was the neuron. Freud himself sought it, and in the early 1880s, as a young neuroanatomist fresh out of medical school, he even delivered a lecture before the Vienna Psychiatric Society about his own research into the structure of the nervous system. Although his subject that day was not specifically the ultimate point of connection between the fibers from two separate nerve cells within the human brain, he allowed himself a moment’s speculation on what form such a juncture might take, though he immediately, and judiciously, added, “I know that the existing material is not sufficient for a decision on this important physiological problem.”

Over the following decade, Freud metamorphosed from complacent researcher, pursuing his intuitions in a hospital laboratory in Vienna, to insecure clinician, struggling to establish a private medical practice so he could support a family, to some uneasy and perhaps unwieldy hybrid of the two, splitting his time between the laboratory and the clinic—between the theory and the practice of medicine. Yet even in private practice Freud continued to monitor neuroanatomical research as it raced toward its seemingly inevitable conclusion. If each central nerve cell in the brain exists in isolation from every other central nerve cell, as researchers had determined within Freud’s lifetime, it must still establish a connection with other central nerve cells. So: Where was it? Find that specific point of connection, as the neuroanatomists of Freud’s generation understood, and you’ll have found, at long last, the final piece in the puzzle of man.

As recently as the first quarter of the nineteenth century, that puzzle had seemed potentially insoluble, in large part because of the limitations inherent in the only instrument that might conceivably assist such an investigation. Back in the 1670s, when Antonius van Leeuwenhoek and other natural philosophers began reporting what they could see by examining terrestrial objects through a microscope, this new method of investigation might have seemed to offer limitless possibilities for anatomists. That promise, however, pretty much vanished once they got a good look at the infinitesimal scale of what they’d be studying. Leeuwenhoek himself reported that the finest pieces of matter he could see in animal tissue were simply “globules,” and for well over a century anatomists were left to concoct hypotheses about what these globulous objects might be without being able to see them any better than Leeuwenhoek had. As late as 1821, the English surgeon Charles Bell—who himself had only recently distinguished between those nerves that carry the sensory impulses, or sensations, to the brain and those that carry the motor impulses, or instructions on how the body should respond, from the brain—gave up on the brain itself: “The endless confusion of the subject induces the physician, instead of taking the nervous system as the secure ground of his practice, to dismiss it from his course of study, as a subject presenting too great irregularity for legitimate investigation or reliance.”

Only five years later, however, British physicist Joseph Jackson Lister effectively revolutionized the microscope through improvements to the objective lens—the one nearer the specimen—that mostly eliminated distortions and color aberrations. Before this advance, neuroanatomy hadn’t been much more than speculation supported by inadequate, incomplete or imprecise observation—supported by further speculation even. Now a new breed of anatomist could explore tissue at a level of detail that was literally microscopic—the technique that would become known as histology.

In 1827, only a year after inventing his achromatic microscope, Lister himself (along with the physician Thomas Hodgkin) decisively refuted all the variations on Leeuwenhoek’s “globular” hypothesis that had arisen over the preceding century and a half. The globules, it turned out, were illusions, mere tricks of the light that Lister’s new lens system could now correct. By utilizing the achromatic microscope to examine brain tissue, Lister reported, “one sees instead of globules a multitude of very small particles, which are most irregular in shape and size.” To these particles anatomists applied a term that the first microscopists back in the seventeenth century had used to describe some of the smallest features they could see, though now it came to refer only to the basic units of organisms, both plant and animal: cells. These cells, moreover, seemed always to be accompanied by long strands, or fibers. That a relationship between these cells and fibers existed in the central nervous system—the brain and spinal cord—formed the basis of Theodor Schwann’s cell theory of 1839, a tremendous advance in knowledge from even fifteen years earlier. “However,” as one researcher reported in 1842, “the arrangement of those parts in relation to each other is still completely unknown.”

Some researchers argued that the fibers merely encircled the cells without having an anatomical connection to them. Some argued that the fibers actually emanated from the cells. Such was the density of the mass of material in the central nervous system, however, that even the achromatic microscope couldn’t penetrate it sensibly. Despite the promise of clarity that Lister’s improvements at first had seemed to offer, by the 1850s anatomists were beginning to resign themselves to the realization that the precise nature of the relationship between cells and fibers would have to remain a mystery, unless another technological advance came along.

It did, in 1858, two years after Freud was born, when the German histologist Joseph von Gerlach invented microscopic staining—a way to dye a sample so that the object under observation would bloom into a rich color against the background. The object under observation in this case was a central nerve cell, complete with an attendant network of fibers. Now neuroanatomists could see for themselves that the fibers and the cells were indeed anatomically connected. They could also see that the central nerve cells exist only in the gray matter of the brain and spinal cord, never the white. And they could see that even where a central nerve cell is part of a dense concentration within the gray matter, it exists apart from any other central nerve cell, in seeming isolation.

But if the cells themselves don’t come into contact with other cells, then how do they communicate with one another, as they clearly must? How does one cell “know” what the others are doing and therefore act in concert and thereby register a sensation or commit the nervous system to a response, an action, a thought? If the answer wasn’t in the cells, those solitary hubs, then it had to be elsewhere.

And the only elsewhere there was, was the fibers. Thanks to Gerlach’s staining method, neuroanatomists now found that the fibers extend from the central nerve cells only into the white matter of the brain and spinal cord, never the gray. But there the trail went cold. The meshwork was still too intricate for anyone to trace the paths of all the fibers from a single cell to the point where the fibers terminate. And so yet another technical innovation needed to be invented, and so yet another one was. In 1873—the same year that Freud entered the University of Vienna as a medical student—Camillo Golgi, an Italian physician, developed a superior staining method that in effect isolated the fibers in the same way that Gerlach’s had isolated the cells. Even so, researchers still couldn’t find one point of connection between the fibers of two different cells. Golgi himself thought he found one in the 1880s, but his sample was inconclusive. Still, in order to do what central nerve cells do, which is pass along impulses to one another, both prevailing theory and common sense dictated that the fibers of neighboring cells must connect, somewhere.

Not until 1889 did the Spanish histologist Santiago Ramón y Cajal discover the truth: They don’t, anywhere. This, then, was the basic unit of the brain, what the German anatomist Wilhelm Waldeyer two years later would name the neuron: each central nerve cell and its own fibers, existing apart from—that is, not connecting to—any other central nerve cells and their own fibers. But if even the least wisp of a fiber—a fibril—doesn’t connect, what does it do? It contacts, Ramón y Cajal explained. It reaches out, under the excitation of an impulse, to touch the neighboring fibril or cell, and then, when the excitation has relaxed, it retracts to its previous state of isolation. “A connection with a fiber network,” Waldeyer wrote, “or an origin from such a network, does not take place.”

Although at first the “neuron doctrine,” as Waldeyer christened it, might have seemed to contradict common sense, upon reflection the idea that communication between individual neurons was not continuous but intermittent actually went a long way toward a possible explanation of several otherwise inexplicable mental phenomena, such as the isolation of ideas, the creation of new associations, the temporary inability to remember a familiar fact, the confusion of memories. These were the phenomena, anyway, that Freud had been confronting in his private practice, where he found himself listening at length to hysterics, and wondering how to represent their worries and cures within the webwork of cells and fibers he remembered from his years of neuroanatomy.

“I am so deep in the ‘Psychology for Neurologists’ that it quite consumes me, till I have to break off overworked,” Freud wrote to a friend in April 1895. “I have never been so intensely preoccupied by anything.” By now Freud had embarked on a second career. From 1873 to 1885, first as a medical student and then as a medical researcher, he’d devoted himself to an examination of the nervous system—to research in neuroanatomy. In 1886, on the eve of his thirtieth birthday, he’d opened a private practice devoted to nervous disorders and, for the first time in his life, begun seeing patients, though he continued to conduct research on the side. After the development of the neuron theory, Freud would have had every reason to believe that if anyone were in a position to unite the psychical with the physical, it was he. He’d seen both sides. He’d studied both sides, immersing himself in the peculiar logic of each for long periods of time. He’d even written some tentative outlines to this effect over the past few years, in letters to his closest friend and constant correspondent, Wilhelm Fliess, a Berlin ear, nose, and throat doctor. Not until Freud could meet with Fliess personally late in the summer of 1895, however, and the two men could convene one of their days-long “congresses,” as Freud liked to call these occasional periods of intense and inspirational professional discussion, did he see the project whole.

Freud began composing the manuscript on the train ride from Berlin back home to Vienna that September. “I am writing so little to you only because I am writing so much for you,” Freud informed Fliess by letter on September 23. Barely two weeks later, on October 8, Freud mailed a draft to Fliess—a hundred or so handwritten pages in which he attempted to explain definitively the processes of the mind by describing exhaustively the mechanism of the brain that encases it.

And a mechanism it was. “The project,” Freud wrote, in the second sentence of the manuscript, “involves two principal ideas”: in essence, and in accord with Descartes’s philosophy and Newton’s physics, motion and matter. Freud’s principal idea number 1 was straightforward enough: that the workings of the brain are “subject to the general laws of motion”—that matter moves immediately adjacent matter with comprehensive cause-and-effect predictability. What contributed to Freud’s sense of urgency in composing this draft, however, was that he now knew what the “matter” was: “2. That it is to be assumed that the material particles in question are the neurons.” He based this assumption, as he made explicit several pages later, on “the knowledge of neurons which has been arrived at by modern histology.”

Yet even as he was passing the manuscript along to Fliess, Freud was starting to have his doubts. “I have been alternately proud and overjoyed and ashamed and miserable—until now, after an excess of mental torment, I apathetically tell myself: it does not yet, perhaps never will, hang together,” he wrote in the accompanying letter. “I am not succeeding with the mechanical elucidation; rather, I am inclined to listen to the quiet voice which tells me that my explanations are not yet adequate.”

In the weeks to come that inner voice softened briefly, then hardened again. “During one industrious night last week,” Freud wrote to Fliess on October 20, twelve days after posting the manuscript, “the barriers suddenly lifted, the veils dropped, and everything became transparent—from the details of the neuroses to the determinants of consciousness. Everything seemed to fall into place, the cogs meshed, I had the impression that the thing now really was a machine that shortly would function on its own.” On November 8, however, he reported that after other professional commitments had forced him to put the manuscript aside, he found he couldn’t stop thinking about it—specifically, he noted with regret, that “it required a lot of revision. At that moment,” he went on, “I rebelled against my tyrant. I felt overworked, irritated, confused, and incapable of mastering it all. So I flung it all aside. If you felt called on to form an opinion of those few sheets of paper that would justify my cry of joy at my victory, I am sorry, because you must have found it difficult.” Freud added that in another two months, after he’d fulfilled his obligations, “I may be able to get the whole thing clearer.” It was not to be. Only three weeks later he wrote to Fliess, “I no longer understand the state of mind in which I hatched the psychology; cannot conceive how I could have inflicted it on you. I believe you are still too polite; to me it appears to have been a kind of madness.”

Maybe so. Whatever it was, it was over now, as if a fever had broken. The problem that Freud had found himself confronting was larger than pathways of nerves, larger than the neuron itself—or, maybe, smaller. Either way, it was the same problem that had been haunting physiology since the inception of the modern era more than two centuries earlier: brain. To be precise, it was brain in opposition to what the motions of matter within the human cranium represent: mind, maybe.

For much of human history, such a distinction would have been secondary, at best. The far more important distinction, instead, would have been the one between two types of matter: terrestrial and celestial. Down here, as Aristotle had said, were the four elements—earth, air, fire, and water, either alone or in any number of combinations. Up there was one element—quintessence, a single perfect substance that constituted the moon, sun, planets and stars, as well as the spheres that carried them on their heavenly journeys. An Earth that itself traveled through the heavens, however, not only erased the crucial distinction between what was terrestrial and what was celestial but—as Descartes appreciated when he was merely a budding philosopher—presented a strong argument that everything in heaven and everything on Earth might ultimately consist of the same stuff.

Descartes first heard about Galileo’s discovery of four moons orbiting Jupiter in 1610, as a student at the Jesuit college at La Flèche. Although Descartes was only thirteen or fourteen when this astonishing news reached his outpost in the French countryside, he understood at once the profound effects such a discovery could have on philosophy and physics. The very scope of those effects, however, also reinforced for him two growing suspicions: that although philosophy “has been cultivated for many centuries by the most excellent minds,” as he later wrote, “there is still no point in it which is not disputed and hence doubtful”; and that as “for the other sciences, in so far as they borrow their principles from philosophy I decided that nothing solid could have been built upon such shaky foundations.” The only rational approach to this appalling and ongoing state of ignorance, he concluded, was to begin again, from the beginning—“to demolish everything completely and start right again from the foundations,” and to do so by seeking “no knowledge other than that which could be found in myself or else in the great book of the world.”

The World, in fact, was what he called his first attempt to explain all of physics. As was often the case with Descartes when he produced a work of physics, he simultaneously produced a companion work on how a reconception of physics would necessitate a new interpretation of man’s role in it—a new physiology. This work he called Treatise on Man. He completed both in 1633, a year after Galileo released his own attempt at a new physics, Dialogue Concerning the Two Chief World Systems. But before his two volumes could reach publication, Descartes heard that the Roman Catholic Church had condemned Galileo because the Dialogue posited a sun-centered universe. Since his own two essays did the same and since he feared that if he altered them in any way they would be “mangled,” Descartes suppressed both.

But he never stopped working on physics and physiology. In particular, over the next few years, he wondered if the newfound conceptual unity between the heavens and the Earth would allow him to achieve a parallel mathematical unity. In other words, could he do to the terrestrial realm what astronomers had long done to the celestial realm: geometricize it? Geometry, after all, had originally been an attempt to render the terrestrial world in mathematical terms. Now, after a lapse of a couple of millennia, it was again, and in his 1637 Geometry, Descartes demonstrated how all matter, not only in heaven but on Earth, could be located according to three coordinates in space. In which case, as Descartes himself recognized and as succeeding generations came to appreciate, a crucial question presented itself: Could we approach the secrets of man’s inner universe with the same heretofore unthinkable curiosity with which Galileo and his successors had regarded the outer? Could we render the motions of matter within the brain as predictable as any planet’s through the heavens? In short, could there be a Newton of neurology?

Even while Newton was alive and evidence had begun to accumulate that his laws extend to the outermost reaches of the universe, the question inevitably had arisen whether those same laws might extend to the innermost reaches as well. In 1725, Richard Mead, an English physician, had produced mathematical formulations of the effects of planetary gravity on the human body. Expanding on that idea later in the eighteenth century, the German physician Franz Anton Mesmer proposed a gravitational attraction between animals, or what he called animal magnetism, whose existence he then claimed to demonstrate through public displays of hypnotism. In the early nineteenth century, efforts at quantifying psychic phenomena found a champion in the German philosopher Johann Friedrich Herbart, who conceived of the workings of the mind as “forces” rather than ideas, who explicitly invoked Newton in advocating the use of mathematical formulas to describe the motions of these forces, and who once declared, “Regular order in the human mind is wholly similar to that in the starry sky.”

In retrospect, though, any such earlier efforts to reduce the workings of the inner universe to a series of cause-and-effect laws were doomed. These would-be Newtons couldn’t have known it at the time, but they didn’t yet have access to a Galilean equivalent of neuroanatomical data—the moons, planets, and stars of the inner universe—to provide their speculations with a solid empirical foundation.

Did Freud? It was tempting for him to think so. It would have been tempting for anyone in his position to think so—not only because it’s always tempting for an ambitious intellect to think that the generation into which it’s fortunate enough to be born is the one in possession of just enough information to settle a question that has thwarted the great thinkers since antiquity but because the state of neuroanatomical knowledge at the close of the nineteenth century was different from any other period in the history of science. In fact, in 1894—only five years after Ramón y Cajal’s discovery that fibers from central nerve cells contact, not connect, and only three years after Waldeyer developed the neuron theory—one of Freud’s former instructors and colleagues from his laboratory days, Sigmund Exner, published his own attempt at a comprehensive neuroanatomy, Entwurf zu einer physiologischen Erklärung der psychischen Erscheinungen (Draft Toward a Physiological Explanation of Psychical Phenomena).

Like most physiologists of his era, Freud knew firsthand what the achromatic microscope could accomplish. He’d used the still-new instrument extensively as a student in the 1870s, then proved his mastery of it the following decade as a reliable, respected diagnostician at the General Hospital of Vienna, where one of his examinations drew praise in a contemporary medical journal for its “very valuable contribution” to a field “heretofore lacking in detailed microscopic examination.” And like many physiologists of his era, Freud knew firsthand what staining a microscopic sample could accomplish. He’d twice developed his own significant improvements on existing staining methods, first in 1877 “for the purpose of preparing in a guaranteed and easy way the central and peripheral nervous system of the higher vertebrate (mice, rabbits, cattle),” and again in 1883 “for the study of nerve tracts in the brain and spinal cord.” And like a few physiologists of his era, Freud had even anticipated the neuron theory itself, during his lecture before the Vienna Psychiatric Society in the early 1880s, several years before Ramón y Cajal proved it. Unable to locate a fiber that he could trace from one central nerve cell to another, he’d wondered if cells might therefore not ultimately connect.

In the wake of his failure with the “Psychology for Neurologists,” however, Freud began to consider another way to frame the problem: not as mind in opposition to brain—or at least not only mind in opposition to brain. Instead he began to think in terms of mind in opposition to itself.

“The starting-point for this investigation,” Freud later wrote, outlining his reasoning at this juncture, “is provided by a fact without parallel, which defies all explanation or description—the fact of consciousness.” On the most basic level, the workings of the mind remained a mystery. Even a thought, the fundamental unit of mind, doesn’t remain in consciousness for any length of time. “A conception—or any other psychical element—which is now present to my consciousness may become absent the next moment, and may become present again, after an interval, unchanged.” Forget for the moment the gap within the brain—between one neuron and the next, that space across which some “quantity” of “energy” must pass, as he’d tried to express the transaction in his “Psychology.” And forget, too, the gap between brain and mind—between the physical communication among neurons and the resulting psychical impressions. With this description of one of the most mundane of human occurrences—something out of “our most daily personal experience”—Freud had identified a gap within the mind itself: “In the interval the idea was—we do not know what.”

“Unconscious,” he called it, adopting the common adjective of the time. In a sense, all he’d done was work his way back to the assumptions that he and his contemporaries had inherited. Mind was mind, brain was brain, and one day, maybe, the two would meet. Brain anyone with the proper training and equipment could tease the secrets out of, slicing tissue, staining samples, subjecting fibrils to microscopic scrutiny at recently unthinkable powers of magnification and degrees of resolution. Mind, however, nobody could fully capture using a mechanical model of the brain—not yet, anyway. Mind, as Freud could observe for himself on an almost daily basis in his private medical practice, was simply full of too many subtleties whose precise nature continued to doom any attempt to do for the physical workings of the inner universe what Newton had done for the outer.

But, in another sense, what Freud had learned through the experience of writing that manuscript was just how subtle the subtleties of mind were. Nothing in his neurological training had prepared him for—or, as he had now learned the hard way, could account for—that. Under intensive scrutiny, the mind had turned out to be even more complicated—far more circuitous, far more contradictory, and, finally, far more elusive—than he or, as far as he knew, anyone else had begun to imagine. Brain might be simply brain, but mind wasn’t just mind.

As he reclined in a chair in his modest study in Vienna, listening to the complaints of patients week after week, year after year, Freud had learned to encourage them to try to see whether they could remember the trauma that had caused their hysterical symptoms. If they did so, as he tried to reassure them, their symptoms would disappear. Freud had first heard about this method many years earlier, back in 1882, from a friend and colleague in Vienna, the eminent physician and medical researcher Josef Breuer. At that time, Breuer had told Freud about how he’d treated a young woman’s hysteria through hypnosis. Freud, in fact, had seen a demonstration of hypnosis once. It was, he thought, impressive, especially for a student with a physiological turn of mind. But instructive? Curative?

It might be so, said Breuer. Rather than simply issuing a command or a prohibition while she was under hypnosis, he said, he had asked this patient—Anna O., Freud named her later, when recalling this period in his professional development—what the source of the trauma was. In her waking state she “could describe only very imperfectly or not at all” the memories relating to her trauma, as Freud later wrote; in a hypnotic state, however, she seemed oddly able to remember everything. Even more improbably, by recalling the source of the trauma, as well as by experiencing the emotional outpouring that invariably accompanied this memory, she seemed somehow to slip free of the grip of the memory—seemed to achieve, in Breuer’s term, a “catharsis.”

In order for this construct of a cure to hold, it might seem, the mind must work like a Newtonian machine: an initial cause leading to an effect, which in turn becomes a cause for another effect, which in turn becomes another cause for a further effect, and so on, insinuating itself throughout the subject’s life until one day, in an unrecognizable guise, it surfaces as hysteric behavior, those worrisome symptoms that prompt the victim to seek medical attention. But if this description of the process were true, then the removal of any link along the way would be sufficient to interrupt the chain of causality and lead to the removal of the ultimate effect—the hysterical symptom. In that case, using hypnosis in a purely suggestive way, by simply commanding the symptom to disappear, would be sufficient to effect a cure.

Freud, however, believed that he’d seen otherwise. When he attempted to apply this kind of therapy in his own medical practice, he found that leading the patient back to some step between the current state of hysteria and the original inciting incident didn’t have a cathartic result. Only by revisiting the scene of the crime, so to speak, could a victim permanently break free of its memory, its insidious influence. Only by tracing it to its source would doctor and patient see the hysterical symptom disappear—but only by tracing it all the way to its source. As Freud told a meeting of the Vienna Medical Club in January 1893, “The moment at which the physician finds out the occasion when the symptom first appeared and the reason for its appearance is also the moment at which the symptom vanishes.”

Anna O., for instance, complained of a paralysis on her right side, persistent hallucinations of snakes in her hair, and a sudden inability to speak her native German. These symptoms Breuer eventually traced back to an evening when she was nursing her sick father and imagined a snake approaching his sleeping figure. She tried to move to save him, but her right arm had gone to sleep over the back of a chair; and so she resorted to prayer, but in her fear all she could recall were some children’s verses in English. Or, from Freud’s own case files, Frau Cäcilie M., who suffered from a pain between her eyes until she remembered the time her grandmother had fixed her with a “piercing” look. “Cessante causa cessat effectus,” as Freud said in that same lecture before the Vienna Medical Club: “When the cause ceases, the effect ceases.”

Freud, however, wasn’t content with a vision of the mind that began with a cause and then, no matter what, ended with a certain effect. How to account for the inability of a process so powerful—so active, after all—to reveal itself?

With his background in a physiology that was ultimately nothing more than matter and motion, Freud knew exactly how to account for it: by postulating the existence within the unconscious of an opposing force at least equally powerful—a “defense” or, as Freud soon came to call it, a “repression,” a change in terminology that itself reflected a change in Freud’s thinking. This opposing force wasn’t merely defending the mind against itself; it was repressing the unpleasant memory or association. It wasn’t reactive. It, too, was active, even while seemingly absent.

On October 26, 1896, Freud’s father died. The heroic figure of Sigmund’s childhood imagination may have disappeared forever during that long-ago walk when the father confided in the son how he’d submitted to the indignities of an anti-Semite, and now the corporeal figure was gone, too. Yet they lingered—both the heroic figure and the tragic shade. Like a traumatic event that remains present in the symptoms of a hysterical patient, the older man remained alive in the grown son. That night, in fact, Freud had a dream about him. On his way to the funeral, Freud stops at a barbershop. There he sees a sign: “You are requested to close the eyes.” Whose eyes? he had to wonder. The dead father’s? The son’s? And “close” them as in lay to rest? Or “close” them as in “wink at” or “overlook”?

The dead-but-not-gone father wasn’t the only thing that lingered. The dream did, too, taunting Freud with its myriad possible interpretations, haunting him like the earlier memory of the about-to-be-unheroic man on the street, inhabiting him, continuing to exert its influence over him, as an adult, decades after the event. In years to come, Freud more formally commemorated his father’s death as “the most important event, the most poignant loss, of a man’s life.” But now, when his impressions were raw, he confided in a letter to his friend Fliess, “By one of those dark pathways behind the official consciousness the old man’s death has affected me deeply.”

Could Freud navigate those dark pathways? When he tried to map the pathways of nerves within the brain, he had failed—and now he suspected it was because he’d set himself the wrong challenge. Now a new and radically different challenge presented itself to him: How to map the pathways of the mind alone? Even if he could, would anyone believe that such a description bears any resemblance to reality? He would, of course—but then, sitting in his office, listening to his patients, Sigmund Freud had heard the evidence for himself, if only in his mind’s ear.




THREE: GOING TO EXTREMES


Now here was something nobody had ever seen before. The photograph that began appearing on the front pages of newspapers around the world in the first weeks of 1896 showed an image of a hand, more or less. Less, because this hand seemed to lack skin—or at least its outer layers of flesh and blood and tendons had been reduced to a presence sufficiently shadowy so as to allow a look beneath them. And more, because of what that look within revealed: the intricate webwork of bones that previously had been solely the province of the anatomist.

The hand belonged to the wife of Wilhelm Conrad Röntgen, a professor at the University of Würzburg in Germany. On November 8, 1895, while working alone in his darkened laboratory, Professor Röntgen had noticed a seemingly inexplicable glow. On closer inspection, this glow revealed mysterious properties. For the next several weeks Röntgen worked in secrecy, strictly adhering to the method that had initiated the Scientific Revolution more than two centuries earlier and had sustained it ever since: He made sure that anyone else could use a Hittorf-Crookes tube and a Ruhmkorff coil to produce and reproduce the effect he’d detected. Sometimes his wife, Bertha, would ask why he was spending so much time in his laboratory, and he would answer that he was working on something that, if word got out, would have people saying, “Der Röntgen ist wohl verrückt geworden” (“Röntgen has probably gone crazy”).

At last Röntgen satisfied himself that his discovery was legitimate—that he hadn’t somehow misinterpreted the data. On December 22, he invited Bertha to join him in the laboratory, where he asked her to insert her hand, for fifteen minutes, between the tube and a photographic plate. As she did so, something happened—something he’d witnessed for himself numerous times now, something he still couldn’t explain. A substance passed between the tube and the plate—must have passed, because even though the substance itself was invisible, the effect was undeniable. An image of his wife’s hand, to her horror, was slowly burning itself into existence. On New Year’s Day, Röntgen went for a walk with Bertha, during which he mailed to colleagues copies of this photograph as well as his preliminary report, “Eine neue Art von Strahlen” (“A New Kind of Rays”)—what Röntgen, in a footnote, christened “X rays,” because of their mysterious nature. On the way to the mailbox, Röntgen turned to Bertha, whose hand—more or less, and complete (so to speak) with wedding ring—would soon be immortalized, and said, “Now the devil will have to be paid.”

The first public news account appeared on January 5, after a professor of physics at the University of Vienna received a copy of the paper and photograph and passed them along to colleagues, who in turn contacted the editor of the Wiener Presse. From there the news spread rapidly: the London Daily Chronicle on January 6, the Frankfurter Zeitung on January 7. “Men of science in this city,” began an article in The New York Times a few days later, “are awaiting with the utmost impatience the arrival of European technical journals”; when an English translation of Röntgen’s report arrived, the Times printed it on the front page almost in its entirety. By January 13 Röntgen had been summoned to Berlin, where he gave a demonstration of X-rays before Kaiser Wilhelm II. Röntgen granted only one substantive interview, to a particularly enterprising journalist from McClure’s Magazine, then withdrew from public scrutiny. “In a few days I was disgusted with the whole thing,” he later recalled. “I could not recognize my own work in the reports anymore.”

It was entirely a coincidence that Röntgen had conducted what proved to be his decisive research at the same historical moment, and possibly even the same literal moment, that the sixteen-year-old Albert Einstein was pacing the grounds of a school in Aarau, trying to reconcile the properties of a beam of light with the concept of absolute space. It was similarly a coincidence that Röntgen made his discovery on the very day that the thirty-nine-year-old Sigmund Freud was writing to his friend Wilhelm Fliess that he was thinking of abandoning his attempt to render human thoughts strictly in terms of the motions of neuroanatomic matter. But it was no coincidence that speculation about the implications of Röntgen’s discovery led in the same direction in which Einstein and Freud were already heading.

If X-rays were a “longitudinal vibration in the ether,” Thomas Edison pointed out to a journalist visiting his New Jersey laboratory, “then Professor Röntgen has found out at least one method of investigating [the ether’s] properties, and the gain to men of science in estimating the behavior of light and electricity through the medium of ether will probably be immense, causing many changes in our present theories.” Meanwhile, Science magazine reported that the College of Physicians and Surgeons in New York City had used the rays “to reflect anatomic diagrams directly into the brains of advanced medical students, making a much more enduring impression than ordinary teaching methods of learning anatomic details.” And whatever went into a skull presumably could also come out: A Mr. Ingles Rogers informed a San Francisco newspaper that he had “produced an impression on the photographic plate by simply gazing at it in the dark,” while a Dr. Baraduc attracted worldwide attention by sending an official communication to the Paris Académie de Médecine announcing that he had succeeded in photographing thoughts, and he mounted an exhibition in Munich as proof. Or, as one early account regarding X-rays cautioned readers who would fool themselves into thinking that if they “go inside the house and pull down the blinds and wait till it is dark,” they might “feel quite safe in sinning”: “There are the x-rays, you know—and nobody knows what other invisible pencils may be registering all our actions or even thoughts—or what’s worse, the desires that we don’t dare think. They, too, must leave their mark somewhere.”

Nobody knew quite what X-rays were—not even Röntgen, though he did tentatively guess that they might indeed be longitudinal vibrations in the ether. And nobody knew quite what X-rays did—though the medical applications were apparent enough that by the following autumn The New York Times reported “no hospital in the land can do justice to its patients if it does not possess a complete X-ray outfit.” And nobody could have known what impact X-rays eventually would have on the history of science—revolutionary, but that’s getting ahead of the story. What anybody who cared about science did know by now, though, following more than two centuries of investigation, was that in the properties of the ether or the pathways of the brain the scientific method had reached the frontiers of the outer and inner universes.

In which case: Was the store of knowledge finite? And if so, was the history of science now nearing its end?

The idea of science’s even having a history was a novel concept. History, in its modern interpretation, was still a fairly recent addition to Western thought. Before the dawn of the modern era of science, time had seemed—to the extent it had seemed anything—unchanging or, at most, cyclical. People lived and died; civilizations rose and fell. The particular circumstances of such a world might change, but not its essence.

Not so the essence of a world that placed a premium on the acquisition of new knowledge—on change itself. When Western scholars of the fifteenth century began resurrecting texts from antiquity and rediscovering ancient knowledge, they would have had no reason to believe that these sources were inaccurate or in any way incomplete. On the contrary: They had every reason to assume the texts they were inheriting were correct and comprehensive. Didn’t these writings, whether in the original Greek or in Arabic translation, represent the cumulative knowledge of a civilization at the zenith of its intellectual powers? True, that particular civilization had risen, and it had fallen. And true, civilization itself had lain dormant for the following thousand years. But the essence of civilization itself didn’t change: the understanding of the way the world works.

Yet that’s what the understanding of the way the world works was doing now: changing. Ptolemy of Alexandria, writing in the second century A.D., had compiled such a seemingly thorough catalogue of the workings of the heavens that some seven centuries later his Arabic translators honored his work with the title of “The Greatest Composition,” or Al-mageste (later shortened to Almagest). Galen of Pergamum, also writing in the second century A.D., had described the human anatomy in more than a hundred texts that survived the ages, a volume of output that alone must have intimidated succeeding generations into accepting his word as gospel on all matters medical. Yet in 1610, Galileo looked through a telescope and determined to his own satisfaction that much of what Ptolemy had written was inaccurate, while in the 1630s Descartes surveyed the current state of medical knowledge and concluded, “I am confident that there is no one, even among those whose profession it is, who does not admit that all at present known in it is almost nothing in comparison of what remains to be discovered.”

So the acquisition of knowledge was only now just beginning, right? Wrong. Just ask Galileo. “It was granted to me alone to discover all the new phenomena in the sky and nothing to anybody else,” he wrote of what he’d observed through the telescope. Such was Galileo’s authority that his assessment was echoed, a generation after his death, by no less an eminence than Christopher Wren: “All celestial mysteries were at once disclosed to him. His successors are envious because they believe that there can scarcely be any new worlds left.”

Galileo, for once, had missed the point. No doubt his legendary arrogance contributed to his high opinion of his own accomplishments. So did the indisputably monumental and lasting nature of his additions to the astronomical and intellectual landscape. In the absence of a precedent for a single life’s work such as his, it might very well have seemed what Galileo assumed it to be: a self-standing anomaly, a onetime correction to the core of human knowledge—even, perhaps, its completion.

Put enough anomalies together, however, and that interpretation of a life’s work begins to shade in the direction that Descartes indicated. By the time Newton wrote in a letter to a friend in the 1670s that he had seen farther than his predecessors only because he was standing on the shoulders of giants, so many seemingly anomalous individuals had appeared in physics and astronomy that they were clearly no longer the exceptions but the rule. Through his choice of reference, Newton was indicating as much. He was paraphrasing the philosopher Bernard of Chartres, who in 1115 had written, “We are like dwarfs sitting on the shoulders of giants; hence we can see more and further than they, yet not by reason of the keenness of our vision, but because we have been raised aloft and are being carried by men of huge stature.” The cathedral then just beginning to rise in Bernard’s own village eventually commemorated this sentiment with lancet windows depicting New Testament preachers perching on the shoulders of Old Testament prophets, and it was this new interpretation of time that Newton, half a millennium later, was pointedly evoking: history as a cumulative enterprise, science as a cathedral of knowledge.




