The Fallacy of Human Freedom


By Robert W. Merry
June 25, 2013

JEAN-JACQUES Rousseau famously lamented, “Man is born to be free—and is everywhere in chains!” To which Alexander Herzen, a nineteenth-century Russian journalist and thinker, replied, in a dialogue he concocted between a believer in human freedom and a skeptic, “Fish are born to fly—but everywhere they swim!” In Herzen’s dialogue, the skeptic offers plenty of evidence for his theory that fish are born to fly: fish skeletons, after all, show extremities with the potential to develop into legs and wings; and there are of course so-called flying fish, which proves a capacity to fly in certain circumstances. Having presented his evidence, the skeptic asks the believer why he doesn’t demand from Rousseau a similar justification for his statement that man must be free, given that he seems to be always in chains. “Why,” he asks, “does everything else exist as it ought to exist, whereas with man, it is the opposite?”

This intriguing exchange was pulled from Herzen’s writings by John Gray, the acclaimed British philosopher and academic, in his latest book, The Silence of Animals: On Progress and Other Modern Myths. As the title suggests, Gray doesn’t hold with that dialogue’s earnest believer in freedom—though he has nothing against freedom. He casts his lot with the skeptic because he doesn’t believe freedom represents the culmination of mankind’s earthly journey. “The overthrow of the ancien régime in France, the Tsars in Russia, the Shah of Iran, Saddam in Iraq and Mubarak in Egypt may have produced benefits for many people,” writes Gray, “but increased freedom was not among them. Mass killing, attacks on minorities, torture on a larger scale, another kind of tyranny, often more cruel than the one that was overthrown—these have been the results. To think of humans as freedom-loving, you must be ready to view nearly all of history as a mistake.”

Such thinking puts Gray severely at odds with the predominant sentiment of modern Western man—indeed, essentially with the foundation of Western thought since at least the French Encyclopedists of the mid-eighteenth century, who paved the way for the transformation of France between 1715 and 1789. These romantics—Diderot, Baron d’Holbach, Helvétius and Voltaire, among others—harbored ultimate confidence that reason would triumph over prejudice, that knowledge would prevail over ignorance, that “progress” would lift mankind to ever-higher levels of consciousness and purity. In short, they foresaw an ongoing transformation of human nature for the good.

The noted British historian J. B. Bury (1861–1927) captured the power of this intellectual development when he wrote, “This doctrine of the possibility of indefinitely moulding the characters of men by laws and institutions . . . laid a foundation on which the theory of the perfectibility of humanity could be raised. It marked, therefore, an important stage in the development of the doctrine of Progress.”

We must pause here over this doctrine of progress. It may be the most powerful idea ever conceived in Western thought—emphasizing Western thought because the idea has had little resonance in other cultures or civilizations. It is the thesis that mankind has advanced slowly but inexorably over the centuries from a state of cultural backwardness, blindness and folly to ever more elevated stages of enlightenment and civilization—and that this human progression will continue indefinitely into the future. “No single idea,” wrote the American intellectual Robert Nisbet in 1980, “has been more important than, perhaps as important as, the idea of progress in Western civilization.” The U.S. historian Charles A. Beard once wrote that the emergence of the progress idea constituted “a discovery as important as the human mind has ever made, with implications for mankind that almost transcend imagination.” And Bury, who wrote a book on the subject, called it “the great transforming conception, which enables history to define her scope.”

Gray rejects it utterly. In doing so, he rejects all of modern liberal humanism. “The evidence of science and history,” he writes, “is that humans are only ever partly and intermittently rational, but for modern humanists the solution is simple: human beings must in future be more reasonable. These enthusiasts for reason have not noticed that the idea that humans may one day be more rational requires a greater leap of faith than anything in religion.” In an earlier work, Straw Dogs: Thoughts on Humans and Other Animals, he was more blunt: “Outside of science, progress is simply a myth.”

GRAY’S REJECTION of progress has powerful implications, and his book is an attempt to grapple with many of them. We shall grapple with them as well here, but first a look at Gray himself is in order. He was born into a working-class family in 1948 in South Shields, England, and studied at Oxford. He gravitated early to an academic life, teaching eventually at Oxford and the London School of Economics. He retired from the LSE in 2008 after a long career there. Gray has produced more than twenty books demonstrating an expansive intellectual range, a penchant for controversy, acuity of analysis and a certain political clairvoyance.

He rejected, for example, Francis Fukuyama’s heralded “End of History” thesis—that Western liberal democracy represents the final form of human governance—when it appeared in this magazine in 1989. History, it turned out, lingered long enough to prove Gray right and Fukuyama wrong. Similarly, Gray’s 1998 book, False Dawn: The Delusions of Global Capitalism, predicted that the global economic system, then lauded as a powerful new reality, would fracture under its own weight. The reviews were almost universally negative—until Russia defaulted on its debt, “and the phones started ringing,” as he recalled in a recent interview with writer John Preston. When many Western thinkers viewed post-Soviet Russia as inevitably moving toward Western-style democracy, Gray rejected that notion based on seventy years of Bolshevism and Russia’s pre-Soviet history. Again, events proved him correct.

Though often stark in his opinions, Gray is not an ideologue. He has shifted his views of contemporary politics in response to unfolding events and developments. As a young man, he was a Labour Party stalwart but gravitated to Margaret Thatcher’s politics after he concluded, in the late 1970s, that Labour had succumbed to “absurdist leftism.” In the late 1980s, disenchanted with the “hubristic triumphalism” of the Tories, he returned to Labour. But he resolutely opposed the Iraq invasion led by America’s George W. Bush and Britain’s Tony Blair, and today he pronounces himself to be a steadfast Euroskeptic.

Though for decades his reputation was confined largely to intellectual circles, Gray’s public profile rose significantly with the 2002 publication of Straw Dogs, which sold impressively and brought him much wider acclaim than he had known before. The book was a concerted and extensive assault on the idea of progress and its philosophical offspring, secular humanism. The Silence of Animals is in many ways a sequel, plowing much the same philosophical ground but expanding the cultivation into contiguous territory mostly related to how mankind—and individual humans—might successfully grapple with the loss of both metaphysical religion of yesteryear and today’s secular humanism. The fundamentals of Gray’s critique of progress are firmly established in both books and can be enumerated in summary.

First, the idea of progress is merely a secular religion, and not a particularly meaningful one at that. “Today,” writes Gray in Straw Dogs, “liberal humanism has the pervasive power that was once possessed by revealed religion. Humanists like to think they have a rational view of the world; but their core belief in progress is a superstition, further from the truth about the human animal than any of the world’s religions.”

Second, the underlying problem with this humanist impulse is that it is based upon an entirely false view of human nature—which, contrary to the humanist insistence that it is malleable, is immutable and impervious to environmental forces. Indeed, it is the only constant in politics and history. Of course, progress in scientific inquiry and in resulting human comfort is a fact of life, worth recognition and applause. But it does not change the nature of man, any more than it changes the nature of dogs or birds. “Technical progress,” writes Gray, again in Straw Dogs, “leaves only one problem unsolved: the frailty of human nature. Unfortunately that problem is insoluble.”

That’s because, third, the underlying nature of humans is bred into the species, just as the traits of all other animals are. The most basic trait is the instinct for survival, which is placed on hold when humans are able to live under a veneer of civilization. But it is never far from the surface. In The Silence of Animals, Gray discusses the writings of Curzio Malaparte, a man of letters and action who found himself in Naples in 1944, shortly after the liberation. There he witnessed a struggle for life that was gruesome and searing. “It is a humiliating, horrible thing, a shameful necessity, a fight for life,” wrote Malaparte. “Only for life. Only to save one’s skin.” Gray elaborates:

Observing the struggle for life in the city, Malaparte watched as civilization gave way. The people the inhabitants had imagined themselves to be—shaped, however imperfectly, by ideas of right and wrong—disappeared. What were left were hungry animals, ready to do anything to go on living; but not animals of the kind that innocently kill and die in forests and jungles. Lacking a self-image of the sort humans cherish, other animals are content to be what they are. For human beings the struggle for survival is a struggle against themselves.

When civilization is stripped away, the raw animal emerges. “Darwin showed that humans are like other animals,” writes Gray in Straw Dogs, expressing in this instance only a partial truth. Humans are different in a crucial respect, captured by Gray himself when he notes that Homo sapiens inevitably struggle with themselves when forced to fight for survival. No other species does that, just as no other species has such a range of spirit, from nobility to degradation, or such a need to ponder the moral implications as it fluctuates from one to the other. But, whatever human nature is—with all of its capacity for folly, capriciousness and evil as well as virtue, magnanimity and high-mindedness—it is embedded in the species through evolution and not subject to manipulation by man-made institutions.

Fourth, the power of the progress idea stems in part from the fact that it derives from a fundamental Christian doctrine—the idea of providence, of redemption. Gray notes in The Silence of Animals that no other civilization conceived any such phenomenon as the end of time, a concept given to the world by Jesus and St. Paul. Classical thinking, as well as the thinking of the ancient Egyptians and later of Hinduism, Buddhism, Daoism, Shintoism and early Judaism, saw humanity as reflecting the rest of the natural world—essentially unchanging but subject to cycles of improvement and deterioration, rather like the seasons.

“By creating the expectation of a radical alteration in human affairs,” writes Gray, “Christianity . . . founded the modern world.” But the modern world retained a powerful philosophical outlook from the classical world—the Socratic faith in reason, the idea that truth will make us free; or, as Gray puts it, the “myth that human beings can use their minds to lift themselves out of the natural world.” Thus did a fundamental change emerge in what was hoped of the future. And, as the power of Christian faith ebbed, along with its idea of providence, the idea of progress, tied to the Socratic myth, emerged to fill the gap. “Many transmutations were needed before the Christian story could renew itself as the myth of progress,” Gray explains. “But from being a succession of cycles like the seasons, history came to be seen as a story of redemption and salvation, and in modern times salvation became identified with the increase of knowledge and power.”

Thus, it isn’t surprising that today’s Western man should cling so tenaciously to his faith in progress as a secular version of redemption. As Gray writes, “Among contemporary atheists, disbelief in progress is a type of blasphemy. Pointing to the flaws of the human animal has become an act of sacrilege.” In one of his more brutal passages, he adds:

Humanists believe that humanity improves along with the growth of knowledge, but the belief that the increase of knowledge goes with advances in civilization is an act of faith. They see the realization of human potential as the goal of history, when rational inquiry shows history to have no goal. They exalt nature, while insisting that humankind—an accident of nature—can overcome the natural limits that shape the lives of other animals. Plainly absurd, this nonsense gives meaning to the lives of people who believe they have left all myths behind.

IN THE Silence of Animals, Gray explores all this through the works of various writers and thinkers. In the process, he employs history and literature to puncture the conceits of those who cling to the progress idea and the humanist view of human nature. Those conceits, it turns out, are easily punctured when subjected to Gray’s withering scrutiny.

Gray pulls from the past Stefan Zweig (1881–1942) and Joseph Roth (1894–1939), noted Austrian authors and journalists, both of Jewish origin, who wrote extensively about what Austria had been like under the Hapsburg crown. As Zweig described it in his memoir, The World of Yesterday, the vast Hapsburg Empire seemed to be a tower of permanence, where “nothing would change in the well-regulated order.” Zweig added, “No one thought of wars, of revolutions, or revolts. All that was radical, all violence, seemed impossible in an age of reason.” Roth’s novella The Emperor’s Tomb (1938) describes the tidy uniformity of Austrian life. All provincial railway stations looked alike—small and painted yellow. The porter was the same everywhere, clothed in the same blue uniform. He saluted each incoming and outgoing train as “a kind of military blessing.” People knew where they stood in society and accepted it.

This little world was utterly destroyed with the fall of the Hapsburgs after World War I, and many heralded the departure of this obsolete system of royalist governance. After all, the polyglot empire was not a modern state, even during its final sixty years or so, when Franz Joseph embraced new technology such as railroads and telegraphic communication. But the old system lacked some of the “ancient evils,” as Gray puts it, that more modern states later revived in pursuit of what they anticipated as a better world. Torture had been abolished under the Hapsburgs. Bigotry and hatred, while evident in society, were kept in check by an authoritarian monarchy that didn’t have to respond to mass movements spawned in the name of self-government. “Only with the struggle for national self-determination,” writes Gray, “did it come to be believed that every human being had to belong to a group defined in opposition to others.”

As Roth wrote in his short story “The Emperor’s Bust”:

All those people who had never been other than Austrians, in Tarnopol, in Sarajevo, in Vienna, in Brunn, in Prague, in Czernowitz, in Oderburg, in Troppau, never anything other than Austrians, they now began, in compliance with the “order of the day,” to call themselves part of the Polish, the Czech, the Ukrainian, the German, the Romanian, the Slovenian, the Croatian “nation”—and so on and so forth.

Roth could see that the declining devices of empire were being replaced “by modern emblems of blood and soil,” as Gray puts it. Thus, Roth’s progressive, future-gazing outlook soon gave way to a kind of reactionary nostalgia. Gray explains:

Along with the formation of nations there was the “problem of national minorities.” Ethnic cleansing—the forcible expulsion and migration of these minorities—was an integral part of building democracy in central and eastern Europe. Progressive thinkers viewed this process as a stage on the way to universal self-determination. Roth had no such illusions. He knew the end-result could only be mass murder. Writing to Zweig in 1933, he warned: “We are drifting towards great catastrophes . . . it all leads to a new war. I won’t bet a penny on our lives. They have established a reign of barbarity.”

Both Roth and Zweig died before they could see the full magnitude of this barbarity. But, whatever one may think of the Hapsburg Empire and what came after, it is difficult to see that train of events as representing human progress. Rather, it is more accurately seen as just another episode, among multitudes, of the haphazard human struggle upon the earth.

AND YET the myth of progress is so powerful in part because it gives meaning to modern Westerners struggling, in an irreligious era, to place themselves in a philosophical framework larger than just themselves. That is the lesson of Joseph Conrad’s An Outpost of Progress (1896), discussed by Gray as a reflection of man’s need to fight off despair and gloom. The story centers on two Belgian traders, Kayerts and Carlier, sent by their company to a remote part of the Congo, where a native interpreter lures them into a slave-trading transaction. Though initially shocked to be involved in such an activity, they later think better of themselves after receiving the valuable elephant tusks put up as trade for human chattel, as well as after reading old newspapers extolling “Our Colonial Expansion” and “the merits of those who went about bringing light, faith and commerce to the dark places of the earth.”

But the steamer they were expecting doesn’t arrive, and their languid outpost existence is darkened by the threat of starvation. In a fight over a few lumps of sugar, Carlier is killed. In desperation, Kayerts decides to kill himself. He’s hanging from a gravesite cross when the steamer arrives shortly afterward. Conrad describes Kayerts’s disillusionment as he contemplates what he has done and his ultimate insignificance born of placing himself outside civilization: “His old thoughts, convictions, likes and dislikes, things he respected and things he abhorred, appeared in their true light at last! Appeared contemptible and childish, false and ridiculous.”

And yet he can’t quite give up his attachment to civilization or progress even as he ponders his predicament. “Progress was calling Kayerts from the river,” writes Conrad. “Progress and civilisation and all the virtues. Society was calling to its accomplished child to come to be taken care of, to be instructed, to be judged, to be condemned; it called him to return from that rubbish heap from which he had wandered away, so that justice could be done”—justice administered by himself, in a final bow to the permanence of civilization and the myth of progress.

Gray notes that Conrad himself had traveled to the Congo in 1890 to take command of a river steamer. He arrived thinking he was a civilized human being but later thought differently: “Before the Congo, I was just a mere animal,” he wrote, referring to European humanity—which, as Gray notes, “caused the deaths of millions of human beings in the Congo.” Gray elaborates:

The idea that imperialism could be a force for human advance has long since fallen into disrepute. But the faith that was once attached to empire has not been renounced. Instead it has spread everywhere. Even those who nominally follow more traditional creeds rely on a belief in the future for their mental composure. History may be a succession of absurdities, tragedies and crimes; but—everyone insists—the future can still be better than anything in the past. To give up this hope would induce a state of despair like that which unhinged Kayerts.

This perception leads Gray to a long passage of praise for Sigmund Freud, who “reformulated one of the central insights of religion: humans are cracked vessels.” Freud, writes Gray, saw the obstacles to human fulfillment as not only external but also within the human psyche itself. Unlike earlier therapies and those that came after, however, Freud’s approach did not seek to heal the soul. As Gray explains, psychotherapy generally has viewed the normal conflicts of the mind as ailments in need of remedy. “For Freud, on the other hand,” writes Gray, “it is the hope of a life without conflict that ails us.” Most philosophies and religions have begun with the assumption that humans are sickly animals, and Freud didn’t depart from this perception. “Where he was original,” says Gray, “was in also accepting that the human sickness has no cure.” Thus, he advocated a life based on the acceptance of perpetual unrest, a prerequisite to human assertion against fate and avoidance of the inner turmoil that led to Kayerts’s suicide.

This insight emerges as the underlying thesis of Gray’s book. As he sums up, “Godless mysticism cannot escape the finality of tragedy, or make beauty eternal. It does not dissolve inner conflict into the false quietude of any oceanic calm. All it offers is mere being. There is no redemption from being human. But no redemption is needed.” In other words, we don’t need religion and we don’t need the idea of progress because we don’t need redemption, either divine or temporal. We simply need to accept our fate, as they did in the classical age, before the Socratic faith in knowledge and the Christian concept of redemption combined to form the modern idea of progress and the belief in the infinite malleability of human nature.

IN THE end, then, Gray’s message is largely for individual Westerners, adjudged by the author to be in need of a more stark and unblinking view of the realities of human existence. It’s a powerful message, and not without elements of profundity. And it is conveyed with eloquence of language and dignity of thought.

But this is a magazine about man as a political animal, about public policy and the ongoing drama of geopolitical force and competition. Thus, it would seem appropriate to seek to apply Gray’s view of progress and human nature to that external world. The idea of progress was a long time in gestation in Western thought, beginning perhaps with St. Augustine of Hippo, in the fifth century, who crystallized the concept of the unity of all mankind, a fundamental tenet of both Christian theology and the idea of progress. It drove Christianity toward its impulse of conversion and missionary zeal, which led later, in a more secular age, to impulses of humanitarianism and a desire to spread democracy around the world. And Gray is correct in suggesting that the theological idea of man’s immanent journey toward perfection and a golden age of happiness on earth would lead much later to utopian dreams, revolutionary prescriptions, socialist formulas, racialist theories and democratic crusades.

But it wasn’t until René Descartes (1596–1650) that Western thought began its turn toward humanism. He posited two fundamental axioms—the supremacy of reason and the invariability of the laws of nature. And he insisted his analytical methods were available to any ordinary seeker of truth willing to follow his rules of inquiry. No longer was knowledge the exclusive preserve of scholars, scientists, archivists and librarians. This was revolutionary—man declaring his independence in pursuit of knowledge and mastery of the universe. It unleashed a spree of intellectual ferment in Europe, and soon the Cartesian method was applied to new realms of thinking. The idea of progress took on a new, expanded outlook—humanism, the idea that man is the measure of all things. As J. B. Bury notes in his book The Idea of Progress: An Inquiry into Its Growth and Origin (1920), psychology, morals and the structure of society now riveted the attention of new thinkers bent on going beyond the larger “supra-human” inquiries (astronomy and physics, for example) that had preoccupied Bacon, Newton, Leibniz and even Descartes.

And that led inevitably to those eighteenth-century French Encyclopedists and the emergence of their intellectual offspring, Rousseau, who twisted the idea of progress into a call for the use of civic force on behalf of a culminating paradise on earth that Rousseau called a “reign of virtue.” Shortly thereafter, his adherents and intellectual heirs pulled France into what became known as the Reign of Terror.

Much of the human folly catalogued by Gray in The Silence of Animals makes a mockery of the earnest idealism of those who later shaped and molded and proselytized humanist thinking into today’s predominant Western civic philosophy. But other Western philosophers, particularly in the realm of Anglo-Saxon thought, viewed the idea of progress in much more limited terms. They rejected the idea that institutions could reshape mankind and usher in a golden era of peace and happiness. As Bury writes, “The general tendency of British thought was to see salvation in the stability of existing institutions, and to regard change with suspicion.” With John Locke, these thinkers restricted the proper role of government to the need to preserve order, protect life and property, and maintain conditions in which men might pursue their own legitimate aims. No zeal here to refashion human nature or remake society.

A leading light in this category of thinking was Edmund Burke (1729–1797), the British statesman and philosopher who, writing in his famous Reflections on the Revolution in France, characterized the bloody events of the Terror as “the sad but instructive monuments of rash and ignorant counsel in time of profound peace.” He saw them, in other words, as reflecting an abstractionist outlook that lacked any true understanding of human nature. The same skepticism toward the French model was shared by many of the Founding Fathers, who believed with Burke that human nature isn’t malleable but rather potentially harmful to society. Hence, it needed to be checked. The central distinction between the American and French revolutions, in the view of conservative writer Russell Kirk, was that the Americans generally held a “biblical view of man and his bent toward sin,” whereas the French opted for “an optimistic doctrine of human goodness.” Thus, the American governing model emerged as a secular covenant “designed to restrain the human tendencies toward violence and fraud . . . [and] place checks upon will and appetite.”

Most of the American Founders rejected the French philosophes in favor of the thought and history of the Roman Republic, where there was no idea of progress akin to the current Western version. “Two thousand years later,” writes Kirk, “the reputation of the Roman constitution remained so high that the framers of the American constitution would emulate the Roman model as best they could.” They divided government powers among men and institutions and created various checks and balances. Even the American presidency was modeled generally on the Roman consular imperium, and the American Senate bears similarities to the Roman version. Thus did the American Founders deviate from the French abstractionists and craft governmental structures to fit humankind as it actually is—capable of great and noble acts, but also of slipping into vice and treachery when unchecked. That ultimately was the genius of the American system.

But, as the American success story unfolded, a new collection of Western intellectuals, theorists and utopians—including many Americans—continued to toy with the idea of progress. And an interesting development occurred. After centuries of intellectual effort aimed at developing the idea of progress as an ongoing chain of improvement with no perceived end into the future, this new breed of “Progress as Power” thinkers began to declare their own visions as the final end point of this long progression.

Gray calls these intellectuals “ichthyophils,” which he defines as “devoted to their species as they think it ought to be, not as it actually is or as it truly wants to be.” He elaborates: “Ichthyophils come in many varieties—the Jacobin, Bolshevik and Maoist, terrorizing humankind in order to remake it on a new model; the neo-conservative, waging perpetual war as a means to universal democracy; liberal crusaders for human rights, who are convinced that all the world longs to become as they imagine themselves to be.” He includes also “the Romantics, who believe human individuality is everywhere repressed.”

Throughout American politics, as indeed throughout Western politics, a large proportion of major controversies ultimately are battles between the ichthyophils and the Burkeans, between the sensibility of the French Revolution and the sensibility of the American Revolution, between adherents of the idea of progress and those skeptical of that potent concept. John Gray has provided a major service in probing with such clarity and acuity the impulses, thinking and aims of those on the ichthyophil side of that great divide. As he sums up, “Allowing the majority of humankind to imagine they are flying fish even as they pass their lives under the waves, liberal civilization rests on a dream.”

By Tom Whipple

There are many reasons to believe that film stars earn too much. Brad Pitt and Angelina Jolie once hired an entire train to travel from London to Glasgow. Tom Cruise’s daughter Suri is reputed to have a wardrobe worth $400,000. Nicolas Cage once paid $276,000 for a dinosaur head. He would have got it for less, but he was bidding against Leonardo DiCaprio.

Nick Meaney has a better reason for believing that the stars are overpaid: his algorithm tells him so. In fact, he says, with all but one of the above actors, the studios are almost certainly wasting their money. Because, according to his movie-analysis software, there are only three actors who make money for a film. And there is at least one A-list actress who is worth paying not to star in your next picture.

The headquarters of Epagogix, Meaney’s company, do not look like the sort of headquarters from which one would confidently launch an attack on Hollywood royalty. A few attic rooms in a shared south London office, they don’t even look as if they would trouble Dollywood. But my meeting with Meaney will be cut short because of another he has, with two film executives. And at the end, he will ask me not to print the full names of his analysts, or his full address. He is worried that they could be poached.

Worse though, far worse, would be if someone in Hollywood filched his computer. It is here that the iconoclasm happens. When Meaney is given a job by a studio, the first thing he does is quantify thousands of factors, drawn from the script. Are there clear bad guys? How much empathy is there with the protagonist? Is there a sidekick? The complex interplay of these factors is then compared by the computer to their interplay in previous films, with known box-office takings. The last calculation is what it expects the film to make. In 83% of cases, this guess turns out to be within $10m of the total. Meaney, to all intents and purposes, has an algorithm that judges the value—or at least the earning power—of art.
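
Meaney’s method, as described, amounts to scoring a new script on a set of factors and comparing that profile with the profiles of past films whose grosses are known. A minimal sketch of the idea follows; every film, factor name and dollar figure here is invented for illustration, and Epagogix’s actual model is proprietary and far more elaborate.

```python
# Hypothetical sketch of Epagogix-style prediction: score a script on a
# few factors, then estimate box office from the most similar past films.
# All factor names, profiles and grosses below are invented.

import math

# Past films: (factor scores in [0, 1], known gross in $m).
# Factors, in order: clear villain, protagonist empathy, sidekick present.
PAST_FILMS = [
    ((0.9, 0.8, 1.0), 320.0),
    ((0.2, 0.9, 0.0), 95.0),
    ((0.8, 0.7, 1.0), 280.0),
    ((0.1, 0.3, 0.0), 12.0),
    ((0.7, 0.9, 0.0), 150.0),
]

def distance(a, b):
    """Euclidean distance between two factor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_gross(script_factors, k=3):
    """Average the grosses of the k past films whose factor
    profiles lie closest to the new script's profile."""
    ranked = sorted(PAST_FILMS, key=lambda film: distance(film[0], script_factors))
    return sum(gross for _, gross in ranked[:k]) / k

# A new script with a strong villain, high empathy and a sidekick
# lands near the big earners in this toy dataset.
print(round(predict_gross((0.85, 0.8, 1.0)), 1))
```

A nearest-neighbour average is only one of many ways to exploit the comparison; the point is that the estimate comes from the script’s measurable structure, not from the cast.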

To explain how, he shows me a two-dimensional representation: a grid in which each column is an input, each row a film. “Curiously,” Meaney says, “if we block this column…” With one hand, he obliterates the input labelled “star”, casually rendering everyone from Clooney to Cruise, Damon to De Niro, an irrelevancy. “In almost every case, it makes no difference to the money column.”

“For me that’s interesting. The first time I saw that I said to the mathematician, ‘You’ve got to change your program—this is wrong.’ He said, ‘I couldn’t care less—it’s the numbers.’” There are four exceptions to his rules. If you hire Will Smith, Brad Pitt or Johnny Depp, you seem to make a return. The fourth? As far as Epagogix can tell, there is an actress, one of the biggest names in the business, who is actually a negative influence on a film. “It’s very sad for her,” he says. But hers is a name he cannot reveal.
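The "block this column" experiment is, in effect, a feature ablation: score the model with an input and then without it, and see whether predictions change. A minimal sketch on invented data, in the same toy nearest-neighbour style:

```python
# Feature ablation on invented data: measure leave-one-out prediction
# error with and without the "star" column. If the error barely moves,
# that input carries little information about the outcome.

def knn_predict(target, films, features):
    """Predict take as that of the nearest film, using only `features`."""
    def dist(f):
        return sum((target[x] - f["factors"][x]) ** 2 for x in features) ** 0.5
    return sorted(films, key=dist)[0]["take"]

def leave_one_out_error(films, features):
    total = 0.0
    for i, film in enumerate(films):
        rest = films[:i] + films[i + 1:]
        total += abs(knn_predict(film["factors"], rest, features) - film["take"])
    return total / len(films)

films = [
    {"factors": {"villain": 1, "empathy": 0.9, "star": 1}, "take": 110},
    {"factors": {"villain": 1, "empathy": 0.8, "star": 0}, "take": 100},
    {"factors": {"villain": 0, "empathy": 0.3, "star": 1}, "take": 25},
    {"factors": {"villain": 0, "empathy": 0.2, "star": 0}, "take": 30},
]

with_star = leave_one_out_error(films, ["villain", "empathy", "star"])
without_star = leave_one_out_error(films, ["villain", "empathy"])
print(with_star, without_star)
```

In this contrived dataset, dropping the star column leaves the error unchanged: the analogue of Meaney obliterating the input and the money column not moving.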

IF YOU TAKE the Underground north from Meaney’s office, you will pass beneath the housing estates of south London. Thousands of times every second, above your head, someone will search for something on Google. It will be an algorithm that determines what they see; an algorithm that is their gatekeeper to the internet. It will be another algorithm that determines what adverts accompany the search—gatekeeping does not pay for itself.

Algorithms decide what we are recommended on Amazon, what films we are offered on Netflix. Sometimes, newspapers warn us of their creeping, insidious influence; they are the mysterious sciencey bit of the internet that makes us feel websites are stalking us—the software that looks at the e-mail you receive and tells the Facebook page you look at that, say, Pizza Hut should be the ad it shows you. Some of those newspaper warnings themselves come from algorithms. Crude programs already trawl news pages, summarise the results, and produce their own article, by-lined, in the case of Forbes magazine, “By Narrative Science”.

Others produce their own genuine news. On February 1st, the Los Angeles Times website ran an article that began “A shallow magnitude 3.2 earthquake was reported Friday morning.” The piece was written at a time when quite possibly every reporter was asleep. But it was grammatical, coherent, and did what any human reporter writing a formulaic article about a small earthquake would do: it went to the US Geological Survey website, put the relevant numbers in a boilerplate article, and hit send. In this case, however, the donkey work was done by an algorithm.

But it is not all new. It is also an algorithm that determines something as old-fashioned as the route a train takes through the Underground network—even which train you yourself take. An algorithm, at its most basic, is not a mysterious sciencey bit at all; it is simply a decision-making process. It is a flow chart, a computer program that can stretch to pages of code or is as simple as “If x is greater than y, then choose z”.
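That minimal definition can be written out literally; the whole "decision-making process" fits on three lines:

```python
# The article's minimal algorithm, verbatim: a rule applied to inputs.

def choose(x, y, z, fallback):
    """If x is greater than y, then choose z; otherwise the fallback."""
    return z if x > y else fallback

print(choose(3, 2, "z", "w"))  # the rule fires, so "z"
print(choose(1, 2, "z", "w"))  # it doesn't, so "w"
```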

What has changed is what algorithms are doing. The first algorithm was created in the ninth century by the Arabic scholar al-Khwarizmi—from whose name the word is a corruption. Ever since, they have been mechanistic, rational procedures that interact with mechanistic, rational systems. Today, though, they are beginning to interact with humans. The advantage is obvious. Drawing in more data than any human ever could, they spot correlations that no human would. The drawbacks are only slowly becoming apparent.
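"Spotting correlations" sounds mysterious but is itself mechanical arithmetic, for instance Pearson's correlation coefficient over paired series (the sales figures below are invented):

```python
# Pearson's correlation coefficient: +1 means two series move in
# lockstep, -1 means they move in perfect opposition, 0 means no
# linear relationship.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

banana_sales = [10, 12, 14, 16, 18]
gas_sales = [30, 28, 26, 24, 22]
print(round(pearson(banana_sales, gas_sales), 2))  # perfectly opposed: -1.0
```

Run over thousands of variables at once, this is how a machine surfaces relationships no human thought to look for.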

Continue your journey into central London, and the estates give way to terraced houses divided into flats. Every year these streets inhale thousands of young professional singles. In the years to come, they will be gently exhaled: gaining partners and babies and dogs, they will migrate to the suburbs. But before that happens, they go to dinner parties and browse dating websites in search of that spark—the indefinable chemistry that tells them they have found The One.

And here again they run into an algorithm. The leading dating sites use mathematical formulae and computations to sort their users’ profiles into pairs, and let the magic take its probabilistically predicted course.
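The sites' real formulae are secret, but score-based matchmaking has a recognisable shape: score every possible pair, then match best-first. A greedy sketch over invented profiles:

```python
# Toy matchmaking: compatibility is a number, and pairing is a sort.
from itertools import combinations

def compatibility(a, b):
    """Invented score: count shared interests."""
    return len(set(a["interests"]) & set(b["interests"]))

def match(profiles):
    """Pair users greedily, highest-scoring available pair first."""
    scored = sorted(
        combinations(profiles, 2),
        key=lambda pair: compatibility(*pair),
        reverse=True,
    )
    paired, matches = set(), []
    for a, b in scored:
        if a["name"] not in paired and b["name"] not in paired:
            matches.append((a["name"], b["name"]))
            paired.update([a["name"], b["name"]])
    return matches

profiles = [
    {"name": "Ann", "interests": ["films", "hiking", "jazz"]},
    {"name": "Bob", "interests": ["films", "hiking", "chess"]},
    {"name": "Cat", "interests": ["jazz", "chess"]},
    {"name": "Dan", "interests": ["chess", "jazz"]},
]
print(match(profiles))
```

Whether any such score captures "the spark" is, as the psychologists quoted later argue, another question entirely.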

Not long after crossing the river, your train will pass the server farms of the Square Mile—banks of computers sited close to the fibre-optic cables, giving tiny headstarts on trades. Within are stored secret lines of code worth billions of pounds. A decade ago computer trading was an oddity; today a third of all deals in the City of London are executed automatically by algorithms, and in New York the figure is over half. Maybe, these codes tell you, if fewer people buy bananas at the same time as more buy gas, you should sell steel. No matter if you don’t know why; sell sell sell. In nanoseconds a trade is made, in milliseconds the market moves. And, when it all goes wrong, it goes wrong faster than it takes a human trader to turn his or her head to look at the unexpectedly red numbers on the screen.
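The banana-gas-steel rule can be written as a toy trading algorithm: a threshold test on two signals that fires an order with no idea why the correlation holds (all numbers invented; real trading systems are vastly more elaborate):

```python
# A caricature of a signal-driven trading rule.

def steel_signal(banana_buys, gas_buys, prev_banana, prev_gas):
    """Sell steel when banana demand falls while gas demand rises."""
    if banana_buys < prev_banana and gas_buys > prev_gas:
        return "SELL"
    return "HOLD"

print(steel_signal(banana_buys=90, gas_buys=120, prev_banana=100, prev_gas=100))
```

Note what is absent: any model of *why*. The rule only encodes that the pattern has paid before, which is exactly what makes such systems fast, and what makes their failures hard to foresee.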

Finally, your train will reach Old Street—next door to the City, but a very different place. This is a part of town where every office seems to have a pool table, every corner a beanbag, every receptionist an asymmetric haircut. In one of those offices is TechHub. With its bare brick walls and website that insists on being your friend, this is the epitome of what the British government insists on calling Silicon Roundabout. After all, what America can do with valleys, surely Britain can do with traffic-flow measures.

Inside are the headquarters of Simon Williams’s company QuantumBlack. The world, Williams says, has changed in the past decade—even if not everyone has noticed. “There’s a ton more data around. There’s new ways of handling it, processing it, manipulating it, interrogating it. The tooling has changed. The speed at which it happens has changed. You’re shaping it, sculpting it, playing with it.”

QuantumBlack is, he says, a “data science” agency. In the same way as, ten years ago, companies hired digital-media agencies to make sense of e-commerce, today they need to understand data-commerce. “There’s been an alignment of stars. We’ve hit a crossover point in terms of the cost of storing and processing data versus ten years ago. Then, capturing and storing data was expensive, now it is a lot less so. It’s become economically viable to look at a shed load more data.”

When he says “look at”, he means analysing it with algorithms. Some may be as simple as spotting basic correlations. Some apply the same techniques used to spot patterns in the human genome, or to assign behavioural patterns to individual hedge-fund managers. But there is no doubt which of Williams’s clients is the most glamorous: Formula 1 teams. This, it is clear, is the part of the job he loves the most.

“It’s a theatre, an opera,” he says. “The fun isn’t in the race, it’s in the strategy—the smallest margins win or lose races.” As crucial as the driver is when that driver goes for a pit stop, and how his car is set up. This is what QuantumBlack advises on: how much fuel you put in, what tyres to use, how often to change those tyres. “Prior to the race, we look at millions of scenarios. You’re constantly exploring.”

He can’t say which team he is working with this season, but they are “generally at the front of the grid”. Using the tens of billions of calculations per second that are possible these days, his company might offer the team one strategy in which there is a slim chance of winning, but a greater chance of not finishing; another in which there is no chance of winning, but a good chance of coming third.
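"Millions of scenarios" is Monte Carlo simulation: run each strategy many times against random race luck and tally the outcomes. QuantumBlack's actual models are far richer; the probabilities below are invented to show the shape of the trade-off:

```python
# Monte Carlo sketch of race-strategy comparison: each run draws random
# luck (mechanical failure, pace) and we tally wins and finishes.
import random

def simulate(strategy, runs=10_000, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    wins = finishes = 0
    for _ in range(runs):
        if rng.random() < strategy["dnf_risk"]:      # car fails to finish
            continue
        finishes += 1
        if rng.random() < strategy["win_chance"]:    # pace pays off
            wins += 1
    return {"win_rate": wins / runs, "finish_rate": finishes / runs}

aggressive = {"dnf_risk": 0.30, "win_chance": 0.25}  # slim win, big DNF risk
safe = {"dnf_risk": 0.02, "win_chance": 0.0}         # no win, likely podium-ish

print(simulate(aggressive))
print(simulate(safe))
```

The output is exactly the kind of menu the article describes: one strategy with a slim chance of winning but a real chance of not finishing, another with no chance of winning at all.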

This, however, is not where Williams’s algorithms really earn their money. To borrow a line from Helmuth von Moltke, the Prussian field marshal, no Formula 1 plan survives first contact with a corner. “You all line up, the lights go out. Three seconds later someone’s crashed or overtaken and plans go out of the window. So the real advantage is being able to pick out what’s happening on the track, learn and adapt. The teams that do that win.”

In real time Williams collects data from thousands of variables. Some are from sensors in his team’s cars. Other data are fuzzier: “We listen to the engine notes of competitors’ cars, on TV. That can tell us their settings. The braking profile of a car on GPS as it goes into a corner can also tell you all sorts of things.” His software then collates all the data, all the positions, and advises on pitting strategy. “If you are taking more than ten seconds to make a decision, you’re losing your advantage. Really you need to be under the eight-second mark. A human couldn’t take in that much data and process it fast.”

By analysing all this data with algorithms, not only can you find patterns no one thought existed, you can also challenge orthodoxies. Such as: a movie star is worth the money.

A FEW YEARS back, when Nick Meaney was just starting in the business, a Hollywood studio approached him confidentially to look at a script. You will have heard of this film. You may well have seen this film, although you might be reluctant to admit it in sophisticated company.

The budget for the film was $180m and, Meaney says, “it was breathtaking that it was under serious consideration”. There were dinosaurs and tigers. It existed in a fantasy prehistory—with a fantasy language. “Preposterous things were happening, without rhyme or reason.” Meaney, who will not reveal the film’s title because he “can’t afford to piss these people off”, told the studio that his program concurred with his own view: it was a stinker.

The difference is the program puts a value on it. Technically a neural network, with a structure modelled on that of our brain, it gradually learns from experience and then applies what it has learnt to new situations. Using this analysis, and comparing it with data on 12 years of American box-office takings, it predicted that the film in question would make $30m. With changes, Meaney reckoned they could increase the take—but not to $180m. On the day the studio rejected the film, another one took it up. They made some changes, but not enough—and it earned $100m. “Next time we saw our studio,” Meaney says, “they brought in the board to greet us. The chairman said, ‘This is Nick—he’s just saved us $80m.’”

He might well have done, and Epagogix might well have the advantage of being the only company doing this in quite this way. But, Meaney says, it still sometimes feels as if they are “hanging on by our fingertips”. He has allies in the boardrooms of Hollywood, but they have to fight the prevailing culture. Calculations like Meaney’s tend to be given less weight than the fact that, say, the vibe in the room with Clooney was intense, or Spielberg is hugely excited.

Predicting a Formula 1 race, or the bankability of Brad Pitt, is arguably quite a simple problem. Predicting the behaviour of individuals is rather more complex. Not everyone is convinced, for all the claims, that algorithms are really able to do it yet—least of all when it comes to love. Earlier this year, a team of psychologists published an article in the journal Psychological Science in the Public Interest that looked into the claims made by dating websites for their algorithms. Ever since the first algorithm-based matching site launched in 2000, they noted, such sites have claimed to have developed “a sophisticated matching algorithm that can find singles a uniquely compatible mate”.

“These claims,” Professor Eli Finkel from Northwestern University wrote, “are not supported by credible evidence.” In fact, he said, there is not “a shred of evidence that would convince anybody with any scientific training”. The problem is, we have spent a century studying what makes people compatible—by looking at people who are already together, so we know they are compatible. Even looking at guaranteed, bona-fide lovebirds has produced only weak hypotheses—so using these data to make predictions about which people could fall in love with each other is contentious.

ALTHOUGH IT IS difficult to predict what attracts a human to another human, it turns out to be rather simpler to predict what attracts a human to a political party. Indeed, the destiny of the free world was, arguably, changed by an algorithm—and its ability to understand people. In October 2008 I was in Denver, covering the American election for the Times. I was embedded with the volunteers. For a month I worked surrounded by enthusiastic students and even more enthusiastic posters: “Hope”, “Change we can believe in”, and, a rare concession to irony, “Pizza we can believe in”.

At this stage in one of the most closely fought elections in American history, few people were going to have their mind changed. So the Obama campaign had no interest in sending us out to win over Republican voters. Our job was just to contact Democrats, and get them to vote, and our tool for this was VoteBuilder.

Every night, churning away in the Democratic National Committee’s central servers, the VoteBuilder software combined the list of registered Democrats with demographics and marketing information. The company would not speak to me, but you can guess which data were relevant. Who was a regular churchgoer? Who lived in a city apartment block? Who lived in a city apartment block and had a Hispanic name? Who had lentils on their shopping list? Every morning, it e-mailed the results to us: a list of likely Democrats in Denver, to be contacted and encouraged to vote.
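The real software and its weights are private, but the overnight run it describes is a join-and-score: merge the voter roll with marketing data, weight a few criteria, and e-mail out everyone above a threshold. A toy sketch with invented weights:

```python
# Toy voter-targeting score: weighted criteria over joined records.
# Every name, weight, and threshold here is invented for illustration.

WEIGHTS = {
    "registered_democrat": 5,
    "city_apartment": 2,
    "hispanic_name": 2,
    "buys_lentils": 1,
}

def likely_democrats(voters, threshold=6):
    """Return names of voters whose weighted score clears the threshold."""
    def score(voter):
        return sum(w for key, w in WEIGHTS.items() if voter.get(key))
    return [v["name"] for v in voters if score(v) >= threshold]

voters = [
    {"name": "A. Garcia", "registered_democrat": True, "city_apartment": True,
     "hispanic_name": True},
    {"name": "B. Jones", "registered_democrat": True, "buys_lentils": True},
    {"name": "C. Smith", "city_apartment": True},
]
print(likely_democrats(voters))
```

Crude as it is, the sketch shows why the approach is about volume: the score is cheap to compute over millions of records, and the unusual voters it misclassifies are a rounding error.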

As Obama supporters arrived to volunteer from safe Democrat states across the country, the result of these algorithmic logistics was organised stalking on a colossal scale. I was not the only volunteer to call someone on their mobile to convince them to vote, only to discover that they were in the office with me. Some complained of getting five calls a day. One changed his answerphone message to, “I’m a Democrat, I’ve already voted and I volunteer for the campaign.”

But it was effective; tens of thousands of volunteers were mobilised across the country. And in four weeks, only once did VoteBuilder pair me with a likely Republican. Given that his lawn had a sign saying “McCain-Palin”—a flag marking a lonely, defiant Alamo in his part of town—I didn’t go in. Voting intentions, it seems, really can be narrowed down to simple criteria, drawn from databases and weighted in an algorithm. For all its success, which has subsequently been studied by political parties across the world, VoteBuilder was about volume. Somewhere, there would have been country-club members who liked guns but also believed in free health care, or wealthy creationists who favoured closing down Guantánamo. They might well have evaded our stalking.

Equally, when VoteBuilder made a mistake, the worst that would happen would have been an idealistic student finding themselves arguing with someone holding rather different beliefs. In a world controlled by algorithms, though, sometimes the most apparently innocuous of processes can have unintended consequences.

Recently an American company, Solid Gold Bomb, hit on what it thought was a clever strategy. It would be yet another play on the British wartime poster “Keep Calm and Carry On”, now a 21st-century meme. Using a program that trawled a database of hundreds of thousands of words to generate slogans, it then printed the results on T-shirts and sold them through Amazon. Solid Gold didn’t realise what the computer had come up with—until the former deputy prime minister Lord Prescott tweeted, “First Amazon avoids paying UK tax. Now they’re making money from domestic violence.”

T-shirts, it transpired, had been made and sold bearing the slogans “Keep Calm and Rape” and “Keep Calm and Grope a Lot”. The shirts were withdrawn, but the alert had been sounded. An algorithm had designed a T-shirt and put it up for sale on a website that directs users to items on the basis of algorithms, and it was only when it met its first human that anyone knew anything had gone wrong.
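The failure is easy to reproduce in miniature: a generator crosses a template with a word list, and nobody checks the output before it goes on sale. The fix Solid Gold Bomb lacked is a human-supplied blocklist screening words before they ever reach a shirt (all words below are invented):

```python
# The slogan generator in miniature: template x word list, with the
# missing safeguard added as a blocklist.

TEMPLATE = "Keep Calm and {verb} {obj}"

def make_slogans(verbs, objects, blocklist=()):
    slogans = []
    for verb in verbs:
        if verb in blocklist:
            continue  # the safeguard: screen words before printing shirts
        for obj in objects:
            slogans.append(TEMPLATE.format(verb=verb, obj=obj))
    return slogans

slogans = make_slogans(["Hug", "Dance", "Kick"], ["On", "a Lot"],
                       blocklist=["Kick"])
print(slogans)
```

Without the blocklist, the generator will happily emit whatever the word database contains, which is precisely how the offending shirts reached Amazon unseen.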

The absence of humans in other processes has proven more fraught still. In 2000, Dave Cliff, professor of computer science at Bristol University, was responsible for designing one of the first trading algorithms. A decade later, he wrote a British government report into the dangers they posed to the world economy.

“Every now and then there were these interactions between trading systems,” he says of his early experience, working for Deutsche Bank. “They were interacting in ways we hadn’t foreseen—couldn’t have foreseen.” Designing a trading algorithm is, he says, “a bit like raising a kid. You can do it all right, but then you send them to the playground and you don’t know who they are going to meet.”

In October 2012, in under a minute, the market value of Kraft increased by 30%. In 2010, the now-infamous “flash crash” briefly wiped a trillion dollars off United States markets. Last March BATS, an American stock-exchange company, floated on its own market at over $15 a share—but there was a glitch. The official explanation, still disputed by some, is that the BATS market software malfunctioned for all stock beginning with anything from A to BF. “If they had popped a champagne bottle as they launched the shares,” Cliff says, “by the time the cork hit the floor their value was zero.”

Cliff’s report did find benefits to high-frequency algorithmic trading. It seems to increase liquidity and decrease transaction costs. The problem, he says, is that not enough people understand how it works yet, and there is no proper regulation. His report, which has been endorsed by John Beddington, Britain’s chief scientific adviser, recommends the creation of forensic software tools, to analyse the market and help investigations. “The danger is an over-reliance on computer systems which are not well understood,” he said. “I have no problem with technologies. I like flying, I like to give my kids medicine. But I like my planes certified safe, my medicine tested. I prefer to be engaged in capital markets where there are similar levels of trust, and meaningful and incisive investigation when things go wrong.”

WHAT OF THE future of algorithms? In a sense, the question is silly. Anything that takes inputs and makes a decision is an algorithm of sorts. As computer-processing power increases and the cost of storing data decreases, their use will only spread. Almost every week a new business appears that is specifically algorithmic; they are so common that we barely comment on the fact they use algorithms.

Last year Target, an American retailer, yet again proved the power of algorithms, in a startling way. Its software tracks purchases to predict habits. Using this, it chooses which coupons to send customers. It seemed to have gone wrong when it began sending a teenage girl coupons for nappies, much to the anger of her father, who made an official complaint. A little later, the New York Times reported that the father had phoned the company to apologise. “It turns out,” he said, “there have been some activities in my house I haven’t been completely aware of.” He was going to be a grandfather—and an algorithm knew before he did.

Taken together, all this is a revolution. The production line standardised industry. We became a species that could have any colour Model T Ford as long as it was black. Later, the range of colours increased, but never to match the number of customers. Today, the chances are that the recommendations Amazon gives you will match no one else’s in the world.

Soon internet-shopping technology will come to the high street. Several companies are now producing software that can use facial recognition to change the advertising you see on the street. Early systems just spot if you are male or female and react accordingly. The hope—from the advertisers’ point of view, at least—is to correlate the facial recognition with Facebook, to produce a truly personalised advert.

But providing a service that adapts to individual humans is not the same as becoming like a human, let alone producing art like humans. This is why the rise of algorithms is not necessarily relentless. Their strength is that they can take in more information than any human could, in ways we cannot quickly understand. But the fact that we cannot understand it is also a weakness. It is worth noting that trading algorithms in America now account for 10% fewer trades than they did in 2009.

Those who are most sanguine are those who use them every day. Nick Meaney is used to answering questions about whether computers can—or should—judge art. His answer is: that’s not what they’re doing. “This isn’t about good, or bad. It is about numbers. These data represent the law of absolute numbers, the cinema-going audience. We have a process which tries to quantify them, and provide information to a client who tries to make educated decisions.”

Such as? “I was in a meeting last week about the relative merits of zombies versus the undead.” Is there a difference? “The better question is, what is a grown, educated man doing in that discussion? But yes, there is a difference.” (Zombies are gross flesh-eaters; the undead, like Dracula, are photo-sensitive garlic-haters with no reflection in a mirror.)

Equally, his is not a formula for the perfect film. “If you take a rich woman and a poor man and crash them into an iceberg, will that film always make money?” No, he says. No algorithm has the ability to write a script; it can judge one—but only in monetary terms. What Epagogix does is a considerably more sophisticated version, but still a version, of noting, say, that a film that contains nudity will gain a restricted rating, and thereby have a more limited market.

But the hardest bit has already been done. “We presuppose competence.” In other words, all the scripts have the same presumed standard—you can assume dialogue is not overly dire, that special effects will not be catastrophically inept. This is a standard that requires talented people. And that, for actors who aren’t Pitt, Depp or Smith, is the crumb of algorithmic comfort. It is not that Robert De Niro or Al Pacino is worthless; it’s that in this program they are interchangeable. Even if zombies and the undead are not.





Time Regained


By James Gleick

A pregnant moment in intellectual history occurs when H.G. Wells’s Time Traveller (“for so it will be convenient to speak of him”) gathers his friends around the drawing room fire to explain that everything they know about time is wrong. This after-dinner conversation marked something of a watershed, more telling than young Wells, who had never even published a book before The Time Machine, imagined just before the turn of the twentieth century.

What is time? Nothing but a fourth dimension, after length, breadth, and thickness. “Through a natural infirmity of the flesh,” the cheerful host explains, “we incline to overlook this fact.” The geometry taught in school needs revision. “Now, it is very remarkable that this is so extensively overlooked…. There is no difference between Time and any of the three dimensions of Space except that our consciousness moves along it.”

Wells didn’t make this up. It was in the air, the kind of thing bruited by students in the debating society of the Royal College of Science. But no one had made the case as persuasively as he did in 1895, by way of trying to gin up a plausible plot device in a piece of fantastic storytelling. Albert Einstein was then just a boy at gymnasium. Not till 1908 did the German mathematician Hermann Minkowski announce his “radical” idea that space and time were a single entity: “Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”

So spacetime was born. In spacetime all events are baked together, a four-dimensional continuum. Past and future are no more privileged than left and right or up and down. The time dimension only looks special for the reason Wells mentioned: our consciousness is involved. We have a limited perspective. At any instant we see only a slice of the loaf, a puny three-dimensional cross-section of the whole. For the modern physicist, reality is the whole thing, past and future joined in a single history. The sensation of now is just that, a sensation, and different for everyone. Instead of one master clock, we have clocks in multitudes. And other paraphernalia, too: light cones and world lines and time-like curves and other methods for charting the paths of light and objects through this four-dimensional space. To say that the spacetime view of reality has empowered the physicists of the past century would be an understatement.

Philosophers like it, too. “I conclude that the problem of the reality and the determinateness of future events is now solved,” wrote Hilary Putnam in 1967.

Moreover, it is solved by physics and not by philosophy. We have learned that we live in a four-dimensional and not a three-dimensional world, and that space and time—or, better, space-like separations and time-like separations—are just two aspects of a single four-dimensional continuum….

“Indeed,” he added, “I do not believe that there are any longer any philosophical problems about Time.” Case closed.

Now comes a book from the theoretical physicist Lee Smolin aiming to convince us that time is real after all. He is frankly recanting the accepted doctrine—an apostate:

I used to believe in the essential unreality of time. Indeed, I went into physics because as an adolescent I yearned to exchange the time-bound, human world, which I saw as ugly and inhospitable, for a world of pure, timeless truth…. I no longer believe that time is unreal. In fact I have swung to the opposite view: Not only is time real, but nothing we know or experience gets closer to the heart of nature than the reality of time.

Smolin is a founder and faculty member of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, an authority on quantum gravity who has also written on elementary particle theory, cosmology, and the philosophy of science. He proposes to validate what we already know—those of us who wear wristwatches, cross the days off our calendars, mourn the past, pray for the future, feel in our bones the march of time or the flow of time. We unphilosophical naïfs, that is—known for short as the “man on the street.” Hilary Putnam again: “I think that if we attempted to set out the ‘man on the street’s’ view of the nature of time, we would find that the main principle…might be stated somewhat as follows: (1) All (and only) things that exist now are real.” Past things were real once but have ceased to exist. Future things don’t yet exist; they will become real only when the time comes.

This is the view that most physicists deny and the view that Smolin proposes to demonstrate in his book. For him the past is gone; the future is open: “The fact that it is always some moment in our perception, and that we experience that moment as one of a flow of moments, is not an illusion.” Timelessness, eternity, the four-dimensional space-time loaf—these are the illusions.

His argument from science and history is as provocative, original, and unsettling as any I’ve read in years. It turns upside-down the now standard view of Wells, Minkowski, and Einstein. It contravenes our intellectual inheritance from Newton and, for that matter, Plato, and it will ring false to many of Smolin’s contemporaries in theoretical physics.

We say that time passes, time goes by, and time flows. Those are metaphors. We also think of time as a medium in which we exist. If time is like a river, are we standing on the bank watching, or are we bobbing along? It might be better merely to say that things happen, things change, and time is our name for the reference frame in which we organize our sense that one thing comes before another.

That most authoritative of machines, the clock, has no purpose but to measure something, and that thing is time. In fact you can define time that way: time is what clocks measure. Unfortunately that’s a circular definition, if clocks are what measure time. (Smolin suggests, “For our purposes, a clock is any device that reads out a sequence of increasing numbers,” which is interesting, even if it isn’t in the dictionary.) Scientists devote considerable resources to quantifying time, going beyond our usual seconds and minutes. Humanity has a collective official time scale, established by a chorus of atomic clocks cooled to near absolute zero in vaults at the United States Naval Observatory in Washington, the Bureau International des Poids et Mesures near Paris, and elsewhere. Isaac Newton would be pleased. International Atomic Time appears to codify the notion of absolute time that he worked so effectively to establish. Newton’s view, handed down to us as if engraved on tablets of stone, was this:

Absolute, true, and mathematical time, in and of itself and of its own nature, without reference to anything external, flows uniformly….

The cosmic clock ticks invisibly and inexorably, everywhere the same. Absolute time is God’s time. This was Newton’s credo. He had no evidence for it, and his clocks were primitive compared to ours. He wrote:

It may be that there is no such thing as an equable motion, whereby time may be accurately measured. All motions may be accelerated and retarded, but the flowing of absolute time is not liable to any change.

He needed absolute time, as he needed absolute space, in order to define his terms and express his laws. Motion is nothing but the change in place over time; acceleration is the change in velocity over time. With a backdrop of absolute, true, and mathematical time, Newton could build an entire cosmology, a “System of the World.”

So Newton made time more real—reified it, as no one had done before. But he also made time into a useful abstraction, and in this way it began to fade away. When a scientist records a series of observations—the position of the moon, let’s say—the result is a table of numbers representing both space and time. A generation before Newton, René Descartes showed how to turn such tables into graphs, using different axes for different variables. Representing the orbit of the moon in Cartesian coordinates makes it a curve in space and time—the whole orbit becomes static, a mathematical object in a timeless configuration space. On such a graph time is frozen, and the history of a dynamical system is revealed for study at leisure.

The technique has had psychological side effects, Smolin suggests. It gives those who use it the idea that the experience of time passing is an illusion:

The method of freezing time has worked so well that most physicists are unaware that a trick has been played on their understanding of nature. This trick was a big step in the expulsion of time from the description of nature, because it invites us to wonder about the correlation between the real and the mathematical, the time-bound and the timeless.

This is his crucial dichotomy: the time-bound versus the timeless. Thinking “in time”—i.e., time-bound—versus thinking “outside of time.” We have inherited the idea of timeless truths from Plato: truths that exist in an ideal plane, in eternity. A leaf fades from green to brown, but greenness and brownness are immutable. Here in the sublunary world everything is subject to change and nothing is perfect; no actual triangle we experience is ever exactly equilateral. But in the mathematical world the angles of every triangle add up to 180 degrees. It was always so, and it always will be: mathematical truth exists outside of time.

In that same spirit Newton’s laws, the laws of nature, are meant to be timeless, true now and forever. Otherwise what good are they? We can hardly value the ephemeral. “We yearn for ‘eternal love,’” says Smolin. “Whatever we most admire and look up to—God, the truths of mathematics, the laws of nature—is endowed with an existence that transcends time.” This leads to cognitive dissonance. We live in one world while imagining the existence of another, outside: a heavenly plane. Smolin argues that the belief in timeless truths is not only misguided but harmful. He writes that “we act inside time but judge our actions by timeless standards”—not only of laws such as Newton’s, but also the precepts of religion or morality:

As a result of this paradox, we live in a state of alienation from what we most value…. In science, experiments and their analysis are time-bound, as are all our observations of nature, yet we imagine that we uncover evidence for timeless natural laws.

There is an alternative. We reenter time when we accept uncertainty; when we embrace the possibility of surprise; when we question the bindings of tradition and look for novel solutions to novel problems. The prototype for thinking “in time,” Smolin argues, is Darwinian evolution. Natural processes lead to genuinely new organisms, new structures, new complexity, and—here he departs from the thinking of most scientists—new laws of nature. All is subject to change. “Laws are not timeless,” he says. “Like everything else, they are features of the present, and they can evolve over time.”

The faith in timeless, universal laws of nature is part of the great appeal of the scientific enterprise. It is a vision of transcendence akin to the belief in eternity that draws people to religion. This view of science claims that the explanations for our world lie in a different place altogether, the world of shadows, or heaven: “another, more perfect world standing apart from everything that we perceive.” But for Smolin this is a dodge, no better than theology or mysticism. Instead, he wants us to consider the possibility that timeless laws of nature are no more real than perfect equilateral triangles. They exist, but only in our minds.

The cosmic clock of Newton (or God), marking time absolutely, everywhere the same, did not survive. Einstein shattered it. He did this by refusing to take it for granted and asking a simple question: Is it possible to say that two distant events occur at the same time? Is that even meaningful? Suppose you assert that lightning has struck a railway embankment at points A and B, distant from each other, and that the lightning flashes were simultaneous. Can you—a physicist with the most excellent equipment—establish that for sure?

You cannot. It turns out that a physicist riding on the train will disagree with a physicist standing at the station. Every observer has a reference frame, and each reference frame includes its own clock. Simultaneity is not meaningful. Now is relative. As Smolin puts it, “the clocks can be funky—that is, they can run at different rates in different places, and each can speed up and slow down.” We don’t have to like that. Every experiment confirms it.

Put another way, events in our universe can be connected, such that one is the cause of the other; or they can be close enough in time and far enough apart that they cannot be connected and no one can even say which came first. The distinction between past and future begins to decay. No observer has access to the now of any other observer. Everything that reaches our senses comes from the past.

Thus space and time are wedded. One cannot be measured—cannot be defined, can barely be talked about—independent of the other. Spacetime, having begun as a convenient technique of visualization, becomes indispensable. Time is frozen into the four-dimensional block. Motion gives way to geometry.

H.G. Wells said the only difference between time and space is that “our consciousness moves along it,” and likewise a half-century later the mathematician, physicist, and philosopher Hermann Weyl explained that the universe doesn’t “happen”—it “simply is”:

Only to the gaze of my consciousness, crawling upward along the world line of my body, does a section of the world come to life as a fleeting image in space which continuously changes in time.

Three weeks before his death, in 1955, Einstein wrote, “People like us, who believe in physics, know that the distinction between past, present, and future is only a stubbornly persistent illusion.” Yet Einstein was not altogether sanguine. He could not explain away our sense of time passing, our awareness of the present moment. “The problem of the Now worried him seriously,” recalled Rudolf Carnap.

He explained that the experience of the Now means something special for man, something essentially different from the past and the future, but that this important difference does not and cannot occur within physics.

Carnap, a philosopher of the Vienna Circle, suggested leaving this sort of problem to the psychologists. Not Smolin: he thinks we should embrace Einstein’s discontent:

Everything we experience, every thought, impression, action, intention, is part of a moment. The world is presented to us as a series of moments. We have no choice about this. No choice about which moment we inhabit now, no choice about whether to go forward or back in time. No choice to jump ahead. No choice about the rate of flow of the moments. In this way, time is completely unlike space. One might object by saying that all events also take place in a particular location. But we have a choice about where we move in space. This is not a small distinction; it shapes the whole of our experience.

Still, he knows that intuition is not an argument. For most of history, human experience made it clear that up and down are special directions, everywhere the same—down being where things fall and up being the home of sun and stars—and that did turn out to be an illusion. If you are in outer space, there is no up or down—those concepts are meaningful only relative to the surface of the earth or some other planet. Our senses tell us all sorts of lies.

In an empty universe, would time exist?

No, it would not. Time is the measure of change; if nothing changes, time has no meaning.

Would space exist, in the absence of any matter or energy? Newton would have said yes: space would be empty.

For Smolin, the key to salvaging time turns out to be eliminating space. Whereas time is a fundamental property of nature, space, he believes, is an emergent property. It is like temperature: apparent, measurable, but actually a consequence of something deeper and invisible—in the case of temperature, the microscopic motion of ensembles of molecules. Temperature is an average of their energy. It is always an approximation, and therefore, in a way, an illusion. So it is with space for Smolin: “Space, at the quantum-mechanical level, is not fundamental at all but emergent from a deeper order”—an order, as we will see, of connections, relationships. He also believes that quantum mechanics itself, with all its puzzles and paradoxes (“cats that are both alive and dead, an infinitude of simultaneously existing universes”), will turn out to be an approximation of a deeper theory.

For space, the deeper reality is a network of relationships. Things are related to other things; they are connected, and it is the relationships that define space rather than the other way around. This is a venerable notion: Smolin traces the idea of a relational world back to Newton’s great rival, Gottfried Wilhelm Leibniz: “Space is nothing else, but That Order or Relation; and is nothing at all without Bodies, but the Possibility of placing them.” Nothing useful came of that, while Newton’s contrary view—that space exists independently of the objects it contains—made a revolution in the ability of science to predict and control the world. But the relational theory has some enduring appeal; some scientists and philosophers such as Smolin have been trying to revive it.

Nowadays, the Internet—like the telegraph a century before—is commonly said to “annihilate” space. It does this by making neighbors of the most distant nodes in a network that transcends physical dimension. Instead of six degrees of separation, we have billions of degrees of connectedness. As Smolin puts it:

We live in a world in which technology has trumped the limitations inherent in living in a low-dimensional space…. From a cell-phone perspective, we live in 2.5-billion-dimensional space, in which very nearly all our fellow humans are our nearest neighbors. The Internet, of course, has done the same thing. The space separating us has been dissolved by a network of connections.

So maybe it’s easier now for us to see how things really are. This is what Smolin believes: that time is fundamental but space an illusion; “that the real relationships that form the world are a dynamical network”; and that the network itself, along with everything on it, can and must evolve over time.

We know that time runs one way, despite the apparent reversibility of most physical laws. The relational view supports the idea of the universe as a one-way street, growing ever more structured and complex in apparent contradiction to the second law of thermodynamics, which states that all isolated systems become more uniform over time. The second law has led physicists for more than a century to suggest that the fate of the universe is the cosmic equilibrium of “heat death,” a uniform state of maximum entropy and perfect disorder, but that’s not the universe we see. Instead it seems that the universe gets persistently more interesting. Smolin argues that the second law of thermodynamics applies to any isolated system within the universe but not to the universe taken as a whole; that, in a universe where time is real and fundamental, it is natural for complexity to evolve and for systems to become more organized.

By declaring space to be secondary, he makes a mathematical trade that avoids contradicting general relativity: relative size for relative time. If size and location are relative, then time doesn’t need to be. He arrives at a notion of “preferred global time” that extends throughout the universe and defines a boundary between past and future. It imagines a “family” of observers, spread throughout the universe, and a preferred state of rest, an abstract standard against which motion can be measured. Even if “now” need not be the same to different observers, it retains its meaning for the cosmos.

Time Reborn means to present a program for further study. Smolin maintains a fairly puritanical view of what science should and should not do. He doesn’t like the current fashion in “multiverses”—other universes lurking in extra dimensions or branching off infinitely from our own. Science for him needs to be testable, and no one can falsify a hypothesis about a universe held to be inaccessible to ours. For that matter, any theory about the entire cosmos has a weakness. The success of science over the centuries has come in giving rules and language for describing finite, isolated systems. We can make copies of those; we can repeat experiments many times. But when we talk about the whole universe, we have just the one, and we can’t make it start over. So Smolin sees little scope for science in the family of cosmic questions beginning with “Why…”:

Why is there something rather than nothing? I can’t imagine anything that would serve as an answer to this question, let alone an answer supported by evidence. Even religion fails here….

Better not to think of science as a quest for timeless truths. Science, he writes, creates “effective theories.” These are models—incomplete by definition. They are effective in limited domains, and they are approximate. That doesn’t have to be a failing. Science can construct better and better theories, approaching the truth with closer approximations. But a perfect model of the universe would have to be the size of the universe. We humans are finite creatures, with little brains.

It may seem that Smolin himself is taking on one of the grandest cosmic questions of all. He does try to restrain himself, though, to hypotheses that make testable, falsifiable predictions about the universe we can observe. The scientific case he makes is intricate, involving methods from loop quantum gravity (one of several approaches to combining quantum theory and the theory of relativity). He depicts the geometry of space as a graph with nodes and edges. He has reserved some detail for online appendices at www.timere and plans to publish a more rigorous formulation in collaboration with the Brazilian philosopher Roberto Mangabeira Unger.

“The world remains, always, a bundle of processes evolving in time,” says Smolin.

Logic and mathematics capture aspects of nature, but never the whole of nature. There are aspects of the real universe that will never be representable in mathematics. One of them is that in the real world it is always some particular moment.

In a coda he ruminates briefly on the problem of consciousness—“the really hard problem.” He doesn’t propose any answers, but I’m glad to see physicists, mathematicians, and computer scientists continuing to wrestle with it, rather than leaving it to neurologists. Whatever consciousness will turn out to be, it’s not a moving flashlight illuminating successive slices of the four-dimensional spacetime continuum. It is a dynamical system, occurring in time, evolving in time, able to absorb bits of information from the past and process them, able also to create anticipation for the future.





The Lethality of Loneliness




Sometime in the late ’50s, Frieda Fromm-Reichmann sat down to write an essay about a subject that had been mostly overlooked by other psychoanalysts up to that point. Even Freud had only touched on it in passing. She was not sure, she wrote, “what inner forces” made her struggle with the problem of loneliness, though she had a notion. It might have been the young female catatonic patient who began to communicate only when Fromm-Reichmann asked her how lonely she was. “She raised her hand with her thumb lifted, the other four fingers bent toward her palm,” Fromm-Reichmann wrote. The thumb stood alone, “isolated from the four hidden fingers.” Fromm-Reichmann responded gently, “That lonely?” And at that, the woman’s “facial expression loosened up as though in great relief and gratitude, and her fingers opened.”

Fromm-Reichmann would later become world-famous as the dumpy little therapist mistaken for a housekeeper by a new patient, a severely disturbed schizophrenic girl named Joanne Greenberg. Fromm-Reichmann cured Greenberg, who had been deemed incurable. Greenberg left the hospital, went to college, became a writer, and immortalized her beloved analyst as “Dr. Fried” in the best-selling autobiographical novel I Never Promised You a Rose Garden (later also a movie and a pop song). Among analysts, Fromm-Reichmann, who had come to the United States from Germany to escape Hitler, was known for insisting that no patient was too sick to be healed through trust and intimacy. She figured that loneliness lay at the heart of nearly all mental illness and that the lonely person was just about the most terrifying spectacle in the world. She once chastised her fellow therapists for withdrawing from emotionally unreachable patients rather than risk being contaminated by them. The uncanny specter of loneliness “touches on our own possibility of loneliness,” she said. “We evade it and feel guilty.”

Her 1959 essay, “On Loneliness,” is considered a founding document in a fast-growing area of scientific research you might call loneliness studies. Over the past half-century, academic psychologists have largely abandoned psychoanalysis and made themselves over as biologists. And as they delve deeper into the workings of cells and nerves, they are confirming that loneliness is as monstrous as Fromm-Reichmann said it was. It has now been linked with a wide array of bodily ailments as well as the old mental ones.

In a way, these discoveries are as consequential as the germ theory of disease. Just as we once knew that infectious diseases killed, but didn’t know that germs spread them, we’ve known intuitively that loneliness hastens death, but haven’t been able to explain how. Psychobiologists can now show that loneliness sends misleading hormonal signals, rejiggers the molecules on genes that govern behavior, and wrenches a slew of other systems out of whack. They have proved that long-lasting loneliness not only makes you sick; it can kill you. Emotional isolation is ranked as high a risk factor for mortality as smoking. A partial list of the physical diseases thought to be caused or exacerbated by loneliness would include Alzheimer’s, obesity, diabetes, high blood pressure, heart disease, neurodegenerative diseases, and even cancer—tumors can metastasize faster in lonely people.

The psychological definition of loneliness hasn’t changed much since Fromm-Reichmann laid it out. “Real loneliness,” as she called it, is not what the philosopher Søren Kierkegaard characterized as the “shut-upness” and solitariness of the civilized. Nor is “real loneliness” the happy solitude of the productive artist or the passing irritation of being cooped up with the flu while all your friends go off on some adventure. It’s not being dissatisfied with your companion of the moment—your friend or lover or even spouse—unless you chronically find yourself in that situation, in which case you may in fact be a lonely person. Fromm-Reichmann even distinguished “real loneliness” from mourning, since the well-adjusted eventually get over that, and from depression, which may be a symptom of loneliness but is rarely the cause. Loneliness, she said—and this will surprise no one—is the want of intimacy.

Today’s psychologists accept Fromm-Reichmann’s inventory of all the things that loneliness isn’t and add a wrinkle she would surely have approved of. They insist that loneliness must be seen as an interior, subjective experience, not an external, objective condition. Loneliness “is not synonymous with being alone, nor does being with others guarantee protection from feelings of loneliness,” writes John Cacioppo, the leading psychologist on the subject. Cacioppo privileges the emotion over the social fact because—remarkably—he’s sure that it’s the feeling that wreaks havoc on the body and brain. Not everyone agrees with him, of course. Another school of thought insists that loneliness is a failure of social networks. The lonely get sicker than the non-lonely, because they don’t have people to take care of them; they don’t have social support.

To the degree that loneliness has been treated as a matter of public concern in the past, it has generally been seen as a social problem—the product of an excessively conformist culture or of a breakdown in social norms. Nowadays, though, loneliness is a public health crisis. The standard U.S. questionnaire, the UCLA Loneliness Scale, asks 20 questions that run variations on the theme of closeness—“How often do you feel close to people?” and so on. As many as 30 percent of Americans don’t feel close to people at a given time.

Loneliness varies with age and poses a particular threat to the very old, quickening the rate at which their faculties decline and cutting their lives shorter. But even among the not-so-old, loneliness is pervasive. In a survey published by the AARP in 2010, slightly more than one out of three adults 45 and over reported being chronically lonely (meaning they’ve been lonely for a long time). A decade earlier, only one out of five said that. With baby-boomers reaching retirement age at a rate of 10,000 a day, the number of lonely Americans will surely spike.

Obviously, the sicker lonely people get, the more care they’ll need. This is true, and alarming, although as we learn more about loneliness, we’ll also be better able to treat it. But to me, what’s most momentous about the new biology of loneliness is that it offers concrete proof, obtained through the best empirical means, that the poets and bluesmen and movie directors who for centuries have deplored the ravages of lonesomeness on both body and soul were right all along. As W. H. Auden put it, “We must love one another or die.”

Who are the lonely? They’re the outsiders: not just the elderly, but also the poor, the bullied, the different. Surveys confirm that people who feel discriminated against are more likely to feel lonely than those who don’t, even when they don’t fall into the categories above. Women are lonelier than men (though unmarried men are lonelier than unmarried women). African Americans are lonelier than whites (though single African American women are less lonely than Hispanic and white women). The less educated are lonelier than the better educated. The unemployed and the retired are lonelier than the employed.

A key part of feeling lonely is feeling rejected, and that, it turns out, is the most damaging part. Psychologists discovered this by, among other things, studying the experience of gay men during the first decade of the AIDS epidemic, when the condition was knocking out their immune systems, and, as it seemed at first, only theirs. The nation ignored the crisis for a while, then panicked. Soon, people all over the country were calling for gay men to be quarantined.

To psychologists trying to puzzle out how social experiences affect health, AIDS amounted to something of a natural experiment, the chance to observe the effects of conditions so extreme that no ethical person would knowingly subject another person to them. The disease came from a virus—HIV—that was neutralizing all the usual defenses of a discrete group of people who could be compared with each other and also with a control group of the uninfected. That allowed researchers in a lab at UCLA to take on one of life’s biggest questions, which had become even more urgent as the disease laid waste to thousands, then tens of thousands: Could social experiences explain why some people die faster than others?

In the mid- to late ’80s, the UCLA lab obtained access to a long-term study of gay men who enrolled without knowing whether they were infected with HIV. About half of them tested positive for the virus, and about a third of those agreed to let researchers put their lives under a microscope, answering extensive questions about drug use, sexual behavior, attitudes toward their own homosexuality, levels of emotional support, and so on. By 1993, around one-third of that group had developed full-blown AIDS, and slightly more than a quarter had died.

Steven Cole was a young postdoctoral student in the lab itching to move beyond his field’s mind-body split. At the time, he told me, psychology was only just beginning to grasp “how the physical world of our bodies gets remodeled by our psychic and conceptual worlds.” When the UCLA researchers started trying to figure out which social factors sped up the progress of the disease, they tested obvious ones like socioeconomic status and levels of support. Curiously, though, being poor or lacking family and friends didn’t much change the rate at which an infected man would die of AIDS (although being in mourning, as gay men often were in those days, did seem to weaken an infected man’s immune system).

It eventually occurred to Cole to try to imagine the world from a gay man’s perspective. That wasn’t easy for him: “I’m a straight kid from the suburbs. I had stereotypes, but I didn’t really know the reality of these people’s lives.” Then he read a book, Erving Goffman’s Stigma: Notes on the Management of Spoiled Identity, that tallies in detail the difficulties of “passing” as someone else. He learned that the closeted man must police every piece of information known about him, live in constant terror of exposure or blackmail, and impose sharp limits on intimacy, or at least friendship. “It was like walking around with a time-bomb,” says Cole.


Cole figured that a man who’d hide behind a false identity was probably more sensitive than others to the pain of rejection. His temperament would be more tightly wound, and his stress-response system would be the kind that “fires responses and fires ’em harder.” His heart would beat faster, stress hormones would flood his body, his tissues would swell up, and white blood cells would swarm out to protect him against assault. If this state of inflamed arousal subsided quickly, it would be harmless. But if the man stayed on high alert for years at a time, then his blood pressure would rise, and the part of his immune system that fends off smaller, subtler threats, like viruses, would not do its job.

And he was right. The social experience that most reliably predicted whether an HIV-positive gay man would die quickly, Cole found, was whether or not he was in the closet. Closeted men infected with HIV died an average of two to three years earlier than out men. When Cole dosed HIV-infected white blood cells with norepinephrine, a stress hormone, the virus replicated itself three to ten times faster than it did in non-dosed cells. Cole mulled these results over for a long time, but couldn’t understand why we would have been built in such a way that loneliness would interfere with our ability to fend off disease: “Did God want us to die when we got stressed?”

The answer is no.

What He wanted is for us not to be alone. Or rather, natural selection favored people who needed people. Humans are vastly more social than most other mammals, even most primates, and to develop what neuroscientists call our social brain, we had to be good at cooperating. To raise our children, with their slow-maturing cerebral cortexes, we needed help from the tribe. To stoke the fires that cooked the meat that gave us the protein that sustained our calorically greedy gray matter, we had to organize night watches. But compared with our predators, we were small and weak. They came after us with swift strides. We ran in a comparative waddle.

So what would happen if one of us wandered off from her little band, or got kicked out of it because she’d slacked off or been caught stealing? She’d find herself alone on the savanna, a fine treat for a bunch of lions. She’d be exposed to attacks from marauders. If her nervous system went into overdrive at perceiving her isolation, well, that would have just sent her scurrying home. Cacioppo thinks we’re hardwired to find life unpleasant outside the safety of trusted friends and family, just as we’re pre-programmed to find certain foods disgusting. “Why do you think you are ten thousand times more sensitive to foods that are bitter than to foods that are sweet?” Cacioppo asked me. “Because bitter’s dangerous!”

One of those alone-on-the-savanna moments in our modern lives occurs when we go off to college, because we have to make a whole new set of friends. Back in the mid-’90s, when Cacioppo was at Ohio State University (he is now at the University of Chicago), he and his colleagues sorted undergraduates into three groups—the non-lonely, the sort-of-sometimes lonely, and the lonely. The researchers then strapped blood-pressure cuffs, biosensors, and beepers onto the students. Nine times a day for seven days, they were beeped and had to fill out questionnaires. Cacioppo also kept them overnight in the university hospital with “nightcaps” on their heads, monitoring the length and quality of their rest. He took saliva samples to measure levels of cortisol, a hormone produced under stress.

As expected, he found the students with bodily symptoms of distress (poor sleep, high cortisol) were not the ones with too few acquaintances, but the ones who were unhappy about not having made close friends. These students also had higher than normal vascular resistance, which is caused by the arteries narrowing as their tissue becomes inflamed. High vascular resistance contributes to high blood pressure; it makes the heart work harder to pump blood and wears out the blood vessels. If it goes on for a long time, it can morph into heart disease. While Cole discovered that loneliness could hasten death in sick people, Cacioppo showed that it could make well people sick—and through the same method: by putting the body in fight-or-flight mode.

A famous experiment helps explain why rejection makes us flinch. It was conducted more than a decade ago by Naomi Eisenberger, a social psychologist at UCLA, along with her colleagues. People were brought one by one into the lab to play a multiplayer online game called “Cyberball” that involved tossing a ball back and forth with two other “people,” who weren’t actually people at all, but a computer program. “They” played nicely with the real person for a while, then proceeded to ignore her, throwing the ball only to each other. Functional magnetic resonance imaging scans showed that the experience of being snubbed lit up a part of the subjects’ brains (the dorsal anterior cingulate cortex) that also lights up when the body feels physical pain.


I asked Eisenberger why, if the same part of our brain processes social insult and bodily injury, we don’t confuse the two. She explained that physical harm simultaneously lights up another neural region as well, one whose job is to locate the ache—on an arm or leg, inside the body, and so on. What the dorsal anterior cingulate cortex registers is the emotional fact that pain is distressing, be it social or physical. She calls this the “affective component” of pain. In operations performed to relieve chronic pain, doctors have lesioned, or disabled, the dorsal anterior cingulate cortex. After the surgery, the patients report that they can still sense where the trouble comes from, but, they add, it just doesn’t bother them anymore.

It’s tempting to say that the lonely were born that way—it’d let the rest of us off the hook. And, as it turns out, we’d be about half right, because loneliness is about half heritable. A longitudinal study of more than 8,000 identical Dutch twins found that, if one twin reported feeling lonely and unloved, the other twin would report the same thing 48 percent of the time. This figure held so steady across the pairs of twins—young or old, male or female, notwithstanding different upbringings—that researchers concluded that it had to reflect genetic, not environmental, influence. To understand what it means for a personality trait to have 48 percent heritability, consider that the influence of genes on a purely physical trait is 100 percent. Children get the color of their eyes from their parents, and that is that. But although genes may predispose children toward loneliness, they do not account for everything that makes them grow up lonely. Fifty-two percent of that comes from the world.

Evolutionary theory, which has a story for everything, has a story to illustrate how the human species might benefit from wide variations in temperament. A group that included different personality types would be more likely to survive a radical change in social conditions than a group in which everyone was exactly alike. Imagine that, after years in which a group had lived in peace, an army of strangers suddenly appeared on the horizon. The tribe in which some men stayed behind while the rest headed off on a month-long hunting expedition (the stay-at-homes may have been less adventurous, or they may just have been loners) had a better chance of repelling the invaders, or at least of saving the children, than the tribe whose men had all enthusiastically wandered off, confident that everything would be fine back home.

And yet loneliness is made as well as given, and at a very early age. Deprive us of the attention of a loving, reliable parent, and, if nothing happens to make up for that lack, we’ll tend toward loneliness for the rest of our lives. Not only that, but our loneliness will probably make us moody, self-doubting, angry, pessimistic, shy, and hypersensitive to criticism. Recently, it has become clear that some of these problems reflect how our brains are shaped from our first moments of life.

Proof that the early brain is molded by love comes, in part, from another notorious natural experiment: the abandonment of tens of thousands of Romanian orphans born during the regime of Communist dictator Nicolae Ceauşescu, who had banned birth control. A great deal has been written about the heartbreaking emotional and educational difficulties of these children, who grew up 20 to a nurse in Dickensian orphanages. In the age of the brain scan, we now know that those institutionalized children’s brains developed less “gray matter”—that is, fewer of the neurons that make up the bulk of the brain—and that, if those children never went on to be adopted, they’d sprout less “white matter,” too. White matter helps send signals from one part of the brain to another; think of it as the mind’s internal Internet. In the orphans’ case, the amygdala and the prefrontal cortex—which are involved in memory, emotions, decision-making, and social interaction—just weren’t connecting.

There’s a limit to how much we can poke around inside lonely humans, for obvious reasons. That’s why a great deal of research on the biological effects of a lonely childhood involves monkeys. Last year, I visited a monkey lab in the rolling farmland of rural Maryland run by a burly and affable psychologist-turned-primatologist named Steve Suomi. Suomi conducts his experiments on rhesus macaques, adorable little creatures sometimes called a “weed species,” because they, like humans, thrive in most environments they’re thrown into.

Suomi is building on research begun by his teacher and mentor, Harry Harlow, a psychologist at the University of Wisconsin notorious for experiments in the ’50s and ’60s. Harlow subjected newborn rhesus macaques to appalling isolation—months spent in cages in the company only of “surrogate mothers” made of wire with cartoonish monkey heads and bottles attached. Luckier monkeys had that and cloth-covered versions of the same thing to cuddle. (It is remarkable what a soft cloth can do to calm an anxious baby monkey down.) In the most extreme cases, the babies languished alone at the bottom of a V-shaped steel container. Cruel as these experiments were, Harlow proved that the absence of mothering destroyed the monkeys’ ability to mingle with other monkeys, though the “cloth mother” could mitigate the worst effects of isolation. Years of monkey therapy were required to integrate them into the troop. Harlow’s insights were not well received. Behaviorists, who reigned in U.S. psychology departments, held a blank-slate view of animal and human behavior. They scoffed at the notion that baby monkeys could be hard-wired for love, or at least for a certain quality of touch.

Times have changed, and Harlow’s conviction that nature demands nurture is now the common view. (Changing laws also mean that Suomi would have a harder time getting away with such experiments, which he’s not inclined to do anyway.) What Suomi has that Harlow did not have is technology. By shipping off monkey tissue to laboratories, such as Steve Cole’s, that have machines capable of seeing which genes are turned on and which are turned off, Suomi can show that loneliness transforms the brain and body. He can match the behavior of the lonely monkeys as they grow—what they act like, where they rank in dominance hierarchies when they’re introduced into a troop, whether they ever manage to reproduce—with the activity of genes that affect their brains and immune systems.

Suomi raises his monkeys in three groups, one group confined entirely to the company of peers (a chaotic, Lord of the Flies kind of childhood); another group left alone with terry-cloth mother-surrogates, except when released for a couple of hours a day to scamper with fellow babies; and the third raised by their mothers. What he found is that, in monkeys separated from their mothers in the first four months of life, some important immunity-related genes show a different pattern of expression. Among these were genes that help make the protein that inflames tissue and genes that tell the body to ward off viruses and other microbes.

Suomi was also excited about results coming in from peer-raised monkeys’ brain tissue: Thousands of little changes in genetic activity had been detected in their prefrontal cortexes. This region is sometimes called the “CEO” of the brain; it restrains violent impulses and inappropriate behavior. (In humans, faulty wiring in the prefrontal cortex has been associated with schizophrenia and ADHD.) Some of the aberrations were on genes that direct growth of the brain; modifications of those were bound to result in altered neural architecture. These findings eerily echoed the Romanian orphans’ brain scans and suggested that the lonely monkeys were going to be weirder than the others.

“The very fact that something outside the organism can affect the genes like that—it’s huge,” Suomi says. “It changes the way one thinks about development.” I didn’t need genetics, though, to see how defective the peer-raised monkeys’ development had been. Suomi took me outside to watch them. They huddled in nervous groups at the back of the cage, holding tight to one another. Sometimes, he said, they invite aggression by cowering; at other times, they fail to recognize and kowtow to the alpha monkeys, so they get picked on even more. The most perturbed monkeys might rock, clutch at themselves, and pull out their own hair, looking for all the world like children with severe autism.

Suomi added that good foster care could greatly improve the troubled macaques’ lives. He pointed out some who had been given over to foster grandmothers. Not only did they act more monkey-like, but, he told me, about half of their genetic deviations had vanished, too.

If we now know that loneliness, a social emotion, can reach into our bodies and rearrange our cells and genes, what should we do about it? We should change the way we think about health. James Heckman, a Nobel Prize–winning economist at the University of Chicago who tabulates the costs of early childhood deprivation, speaks bitterly of “silos” in health policy, meaning that we see crime and low educational achievement as distinct from medical problems like obesity or heart disease. As far as he’s concerned, these are, in too many cases, symptoms of the same social disorder: the failure to help families raise their children. Heckman believes that the life of a child at the lower end of the U.S. socioeconomic spectrum is starting to look more like the life of one of Suomi’s lonely macaques. As nearly half of all marriages continue to end in divorce, as marriage itself floats further out of reach for the undereducated and financially strapped, childhood has become a more solitary and chaotic experience. Single mothers don’t have a lot of time to spend with their children, nor, in most cases, money for emotionally enriching social activities.

“As inequality has increased, childhood inequality has increased,” Heckman said. “So has inequality of parenting.” For the first time in 30 years, mental health disabilities such as ADHD outrank physical ones among American children. Heckman doesn’t think that’s only because parents seek out attention-deficit diagnoses when their children don’t come home with A’s. He thinks it’s also because emotional impoverishment embeds itself in the body. “Mothers matter,” he says, “and mothering is in short supply.”

Heckman has been analyzing data from two famous early-childhood intervention programs, the Abecedarian Project of the ’70s and the Perry Preschool Project of the ’60s. Both have furnished ample evidence that, if you enroll very young children from poor families in programs that give both them and their parents an extra boost, then they grow up to be wealthier and healthier than their counterparts—less fat, less sick, better educated, and, for men, more likely to hold down a job. In the case of the Perry Preschool, Heckman estimated that each dollar invested yielded $7 to $12 in savings over the span of decades. One of the most effective economic and social policies, he told me, would be “supplementing the parenting environment of disadvantaged young children.”

If you can’t change society all at once, though, you can change it a few people at a time. Cacioppo and a colleague, Louise Hawkley, have been developing programs to teach lonely people to get along better with others. At one point, the psychologists thought of designing a mobile app, a sort of electronic nagging mother, to help people break bad social habits. (You’d check an item off the list, say, if you remembered to talk to anyone that day—a store clerk or a librarian.) But they didn’t get funding for the software, so now they’re focusing on a simpler and more low-tech fix. It’s a seminar, with an instructor, a pointer, and a screen, in which students learn to read faces and interpret voices, and to stop making the assumption that lonely people seem prone to make: that every person they meet is judging or rebuffing them. What they’re learning, says Hawkley, is the art of “social cognition.” Her goal is to show people that they come at the world full of “assumptions about human nature, about social mores, that aren’t necessarily accurate.”

Cacioppo and Hawkley have been testing their social-cognition curriculum on Army bases, holding classes to hone soldiers’ social skills and teach platoon leaders to spot the lonely in their ranks and help them fit in better. The results aren’t in yet, U.S. Army psychologist Major Paul Lester told me, but he has been receiving reports that suggest that people who have gone through the training fall prey to post-traumatic stress disorder less often. Lester insisted that I add that the Army hadn’t agreed to spend $50 million a year for this experiment only because it’s worried about suicide and post-traumatic stress disorder, although if loneliness training brought down the number of suicidal and dysfunctional soldiers, so much the better. The Army sees the classes as essential training for coping with military life. The best fighting comes from soldiers who interact well with other soldiers, said Lester, and soldiers’ lives are full of social disruption—transfers from base to base and so on.

These are patch solutions, obviously, though it’s appealing to imagine a social-cognition program filtering down and replacing the vague platitudes usually taught to elementary- and middle-schoolers in their human growth and development classes. And it would completely transform a child’s world to have a teacher trained to identify the lonely kids in her classroom and to provide succor and support once she’d found them. Naomi Eisenberger pointed out to me that, while schools take physical pain very seriously, they usually trivialize social pain: “You cannot hit other students, but oftentimes, there are no rules about excluding another student,” she said.

Cole can imagine giving people medications to treat loneliness, particularly when it exacerbates chronic diseases such as diabetes and high blood pressure. These could be beta-blockers, which reduce the physical effects of stress; anti-inflammatory medicine; or even Tylenol—since physical and emotional pain overlap, it turns out that Tylenol can reduce the pain of heartbreak.

At a deeper level, though, loneliness research forces us to acknowledge our own extraordinary malleability in the face of social forces. This susceptibility is both terrifying and exhilarating. On the terrifying side is the unhappy fact that isolation, especially when it stems from the disenfranchisement of the underprivileged, creates a bodily limitation all too easily reproduced in each successive generation. Given that we have been scaling back the kinds of programs that could help people overcome such disadvantages and that many in Congress, mostly Republicans, have been trying to defund exactly the kind of behavioral science research that could yield even better programs, we have reason to be afraid. But there’s something awe-inspiring about our resilience, too. Put an orphan in foster care, and his brain will repair its missing connections. Teach a lonely person to respond to others without fear and paranoia, and over time, her body will make fewer stress hormones and get less sick from them. Care for a pet or start believing in a supernatural being and your score on the UCLA Loneliness Scale will go down. Even an act as simple as joining an athletic team or a church can lead to what Cole calls “molecular remodeling.” “One message I take away from this is, ‘Hey, it’s not just early life that counts,’ ” he says. “We have to choose our life well.”

IMPORTANT: the copyright of this article belongs to the original author(s) and/or publisher(s). Please contact the admin if you have questions regarding the copyright: Contact




Richard Feynman: Life, the universe and everything


By Christopher Riley

In these days of frivolous entertainments and frayed attention spans, the people who become famous are not necessarily the brightest stars. One of the biggest hits on YouTube, after all, is a video of a French bulldog who can’t roll over. But in amongst all the skateboarding cats and laughing babies, a new animated video, featuring the words of a dead theoretical physicist, has gone viral. In the film, created from an original documentary made for the BBC back in the early Eighties, the late Nobel Prize-winning professor, Richard Feynman, can be heard extolling the wonders of science contained within a simple flower.

There is “beauty”, he says, not only in the flower’s appearance but also in an appreciation of its inner workings, and how it has evolved the right colours to attract insects to pollinate it. Those observations, he continues, raise further questions about the insects themselves and their perception of the world. “The science,” he concludes, “only adds to the excitement and mystery and awe of the flower.” This interview was first recorded by the BBC producer Christopher Sykes, back in 1981 for an episode of Horizon called “The Pleasure of Finding Things Out”. When it was broadcast the following year the programme was a surprise hit, with the audience beguiled by the silver-haired professor chatting to them about his life and his philosophy of science.

Now, thanks to the web, Richard Feynman’s unique talents – not just as a brilliant physicist, but as an inspiring communicator – are being rediscovered by a whole new audience. As well as the flower video, which, to date, has been watched nearly a quarter of a million times, YouTube is full of other clips paying homage to Feynman’s ground-breaking theories, pithy quips and eventful personal life.

The work he did in his late twenties at Cornell University, in New York state, put the finishing touches to a theory which remains the most successful law of nature yet discovered. But, as I found while making a new documentary about him for the BBC, his curiosity knew no bounds, and his passion for explaining his scientific view of the world was highly contagious. Getting to glimpse his genius through those who loved him, lived and worked with him, I grew to regret never having met him; to share first-hand what so many others described as their “time with Feynman”.

Richard Phillips Feynman was born in Far Rockaway, a suburb of New York, in May 1918, but his path in life was forged even before this. “If he’s a boy I want him to be a scientist,” said his father, Melville, to his pregnant wife. By the time he was 10, Feynman had his own laboratory at home and, a few years later, he was employing his sister Joan as an assistant at a salary of four cents a week. By 15, he’d taught himself trigonometry, advanced algebra, analytic geometry and calculus, and in his last year of high school won the New York University Math Championship, shocking the judges not only by his score, but by how much higher it was than those of his competitors.

He graduated from the Massachusetts Institute of Technology in 1939 and obtained perfect marks in maths and physics exams for the graduate school at Princeton University — an unprecedented feat. “At 23 there was no physicist on Earth who could match his exuberant command over the native materials of theoretical science,” writes his biographer James Gleick.

Such talents led to him being recruited to the Manhattan Project in the early Forties. Together with some of the greatest minds in physics in the 20th century, Feynman was put to work to help build an atom bomb to use against the Germans before they built one to use against the Allies. Security at the top-secret Los Alamos labs was at the highest level. But for Feynman, a born iconoclast, such control was there to be challenged. When not doing physics calculations he spent his time picking locks and cracking safes to draw attention to shortcomings in the security systems.

“Anything that’s secret I try and undo,” he explained years later. Feynman saw the locks in the same way as he saw physics: just another puzzle to solve. He garnered such a reputation, in fact, that others at the lab would come to him when a colleague was out of town and they needed a document from his safe.

Between the safe cracking and the physics calculations, the pace of life at Los Alamos was relentless. But for Feynman these activities were a welcome distraction from a darker life. His wife, Arline, who was confined to her bed in a sanatorium nearby, was slowly dying of TB.

When she died in the summer of 1945, Feynman was bereft. This misery was compounded, a few weeks later, when the first operational atom bomb was dropped on Japan, killing more than 80,000 people. His original reason for applying his physics to the war effort had been to stop the Germans. But its use on the Japanese left Feynman shocked. For the first time in his life he started to question the value of science and, convinced the world was about to end in a nuclear holocaust, his focus drifted.

He became something of a womaniser, dating undergraduates and hanging out with show girls and prostitutes in Las Vegas. In a celebrated book of anecdotes about his life, Surely You’re Joking, Mr Feynman!, the scientist recounts how he applied an experimental approach to chatting up women. Having assumed, like most men, that you had to start by offering to buy them a drink, he explains how a conversation with a master of ceremonies at a nightclub in Albuquerque one summer prompted him to change tactics. And to his surprise, an aloof persona proved far more successful than behaving like a gentleman.

His other method of relaxation in those years was music; his passion for playing the bongos stayed with him for the rest of his life. Physics had slipped down his list of priorities, but he suddenly rediscovered his love for the subject in a most unexpected way. In the canteen at Cornell one lunchtime he became distracted by a student, who had thrown a plate into the air. As it clattered onto the floor Feynman observed that the plate rotated faster than it wobbled. It made him wonder what the relationship was between these two motions.

Playing with the equations which described this movement reminded him of a similar problem concerning the rotational spin of the electron, described by the British physicist Paul Dirac. And this, in turn, led him to Dirac’s theory of Quantum Electrodynamics (QED), a theory that had tried to make sense of the subatomic world but had posed as many questions as it answered. What followed, Feynman recalled years later, was like a cork coming out of a bottle. “Everything just poured out,” he remembered.

“He really liked to work in the context of things that were supposed to be understood and just understand them better than anyone else,” says Sean Carroll, a theoretical physicist who sits today at Feynman’s old desk at Caltech, in Pasadena. “That was very characteristic of Feynman. It required this really amazing physical intuition – an insight into what was really going on.” Applying this deep insight, Feynman invented an entirely new branch of maths to work on QED, which involved drawing little pictures instead of writing equations.

Richard’s sister, Joan, recalls him working on the problem while staying with her one weekend. Her room-mate was still asleep in the room where Richard had been working. “He said to me, ‘Would you go in the room and get my papers, I wanna start working’,” she remembers. “So I went in the room and I looked for them, but there was no mathematics. It was just these silly little diagrams and I came out and said, ‘Richard, I can’t find your papers, it’s just these kind of silly diagrams’. And he said, ‘That is my work!’” Today Feynman’s diagrams are used across the world to model everything from the behaviour of subatomic particles to the motion of planets, the evolution of galaxies and the structure of the cosmos.

Applying them to QED, Feynman came up with a solution which would win him a share of the 1965 Nobel Prize for Physics. Almost half a century later QED remains our best explanation of everything in the universe except gravity. “It’s the most numerically precise physical theory ever invented,” says Carroll.

Discovering a law of nature and winning a Nobel Prize, for most people, would represent the pinnacle of a scientific career. But for Feynman these achievements were mere stepping stones to other interests. He took a sabbatical to travel across the Caltech campus to the biology department, where he worked on viruses. He also unravelled the social behaviour of ants and potential applications of nanotechnology. And he was active beyond the world of science, trading physics coaching for art lessons with renowned Californian artist Jirayr Zorthian. (While at Caltech he also began frequenting a local strip club, where he would quietly work out his theories on napkins; he found it the ideal place in which to clear his head.)

But it was his talent as a communicator of science that made him famous. In the early Sixties, Cornell invited him to give the Messenger Lectures – a series of public talks on physics. Watching them today, Feynman’s charisma and charm are as seductive as they were 50 years ago.

“He loved a big stage,” says Carroll. “He was a performer as well as a scientist. He could explain things in different ways than the professionals thought about them. He could break things down into their constituent pieces and speak a language that you already shared. He was an amazingly good teacher and students loved him unconditionally.”

Recognising this ability, in 1961 Caltech asked him to rewrite the undergraduate physics course. The resulting Feynman Lectures on Physics took him three years to create and the accompanying textbooks remain a landmark of physics teaching. The lectures themselves were brimming with inspiring “showbiz demonstrations”, as his friend Richard Davies describes them. Most memorably, Feynman used to set up a heavy brass ball on a pendulum, send it swinging across the room, and then wait for it to swing back towards him. Students would gasp as it rushed towards his face, but Feynman would stand stock still, knowing it would stop just in front of his nose. Keen to capitalise on these talents for engaging an audience, Christopher Sykes made his film for Horizon. “He took enormous pleasure in exploring life and everything it had to offer,” remembers Sykes. “More than that, he took tremendous pleasure in telling you about it.”

In the late Seventies, Feynman discovered a tumour in his abdomen. “He came home and reported, ‘It’s the size of a football’,” remembers his son Carl. “I was like ‘Wow, so what does that mean?’ And he said, ‘Well, I went to the medical library and I figure there’s about a 30 per cent chance it will kill me’.” Feynman was trying to turn his predicament into something fascinating, but it was still not the kind of thing a son wanted to hear from his father.

A series of operations kept Feynman alive and well enough to work on one final important project. In 1986, he joined the commission set up to investigate the Challenger disaster. The space shuttle had exploded 73 seconds after launch, killing the entire crew of seven astronauts. Feynman fought bureaucratic intransigence and vested interests to uncover the cause of the accident: rubber O-ring seals in the shuttle’s solid rocket boosters that failed to work on the freezing morning of the launch. At a typically flamboyant press conference, Feynman demonstrated his findings by placing a piece of an O-ring in a glass of iced water. But the inquiry had left him exhausted. With failing kidneys and in a great deal of pain, he decided not to undergo surgery again and went into hospital for the last time in February 1988.

His friend Danny Hillis remembers walking with Feynman around this time: “I said, ‘I’m sad because I realise you’re about to die’. And he said, ‘That bugs me sometimes, too. But not as much as you’d think. Because you realise you’ve told a lot of stories and those are gonna stay around even after you’re gone.’” Twenty-five years after his death, thanks to the web, Feynman’s prophecy has more truth than he could ever have imagined.
