A Beautiful Question by Frank Wilczek, review: 'worth the effort'

5 August 2015


By Lewis Dartnell

In a short story by the science fiction writer Alastair Reynolds, a man accepts an offer from an alien species to extend the cognitive capabilities of his brain so that he may come to understand the true nature of the universe. The absolute truth of reality is compared to the patterned floor of a room, buried beneath layer upon layer of carpets. Coming to know the fundamental nature of the cosmos is then a matter of finding a flaw in the uppermost carpet, tugging at that loose thread to remove the flawed rug and reveal the layer beneath, and so on down through the layers of description until you reach the floor.

Frank Wilczek’s A Beautiful Question is the first book I’ve read in which I’ve felt that almost vertiginous sensation of peering through layers of theories down to the true nature of the universe. Wilczek, a Nobel Prize-winning theoretical physicist, sets out to answer a deceptively simple question: “Does the world embody beautiful ideas?” Or to rephrase this in a slightly more useful way: “Is the physical universe, and the equations that physicists have derived to explain it, beautiful?”
The author’s contention is that the standard model of particle physics (or the “Core Theory”, as Wilczek calls it) is indeed beautiful, but to appreciate this the reader must first understand what the standard model actually is. So the bulk of this book is a summary of the development of key notions in the history of physics, and how we have come to uncover the “fundamental operating system” of nature.


Wilczek starts his “meditation” with Pythagoras: his theorem on right-angled triangles, which revealed a deep relationship between geometry and number, and his investigations into music and the link between harmony and number. These, Wilczek argues, were the first inklings of the deep numerical order underlying the world, and he returns throughout the book to these themes of order, pattern, symmetry and simplicity in the laws governing the universe.
The sense of these layers of understanding is perhaps clearest in the story of light. Isaac Newton’s experiments with prisms showed that white light was a blend of all colours, and that colour was something intrinsic to the light ray itself. But it was not until the 19th century that we came to understand the substance of light.

While working on the linked phenomena of electricity and magnetism, James Clerk Maxwell derived a set of equations to describe how the two interrelated. His equations showed that since an electric field that changed over time generated a magnetic field, and conversely a varying magnetic field induced an electric field, the two ought to be self-supporting and produce an electromagnetic wave rippling through empty space. As Wilczek so poetically describes, the situation “takes on a life of its own, with the fields dancing as a pair, each inspiring the other”. Maxwell could also use his equations to calculate how fast such an electromagnetic undulation would travel, and he found that it matched what had already been measured for the speed of light. For Maxwell, it was obvious that this correspondence was no coincidence; that, in fact, the underlying agent of light, a mystery that had foxed Newton, was nothing more than a mutually supporting disturbance in electric and magnetic fields. To use the parlance of modern physics, this was a monumental feat of reductionism: in one go, Maxwell had unified the theories of electricity, magnetism and light.
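For readers who want the correspondence spelled out, the vacuum form of Maxwell’s equations and the wave speed they imply can be written in standard textbook notation (a conventional summary, not a passage from Wilczek’s book):

```latex
% Maxwell's equations in empty space (SI units), standard notation.
\begin{align}
  \nabla \cdot \mathbf{E} &= 0, &
  \nabla \cdot \mathbf{B} &= 0, \\
  \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
  \nabla \times \mathbf{B} &= \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{align}
% Taking the curl of the last two equations yields a wave equation,
% \( \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \partial^2 \mathbf{E}/\partial t^2 \),
% whose disturbances travel at \( c = 1/\sqrt{\mu_0 \varepsilon_0} \approx 3 \times 10^8 \) m/s,
% the measured speed of light.
```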


More recently, Maxwell’s equations have been incorporated with quantum mechanics to produce the theory of quantum electrodynamics (QED), and we’ve realised that this electromagnetic force is one of four fundamental forces of the universe; alongside gravity there is also the strong force holding together quarks in subatomic particles (accurately described by the theory of quantum chromodynamics, QCD) and the weak force. And these forces act on a range of different particles of matter, which can themselves be organised into related families. So the essence of the standard model is of a number of forces directing how particles of matter behave, and this overarching mathematical framework contains an astonishing degree of pattern and symmetry.

As Wilczek explains, physicists have become so accustomed to finding that the laws of nature they infer from experiments possess deep symmetries that the reverse process is now attempted – the proposition of equations containing lots of symmetry, followed by the study of whether nature uses them. The theory of supersymmetry, or SUSY, has been developed to resolve the apparent duality in the universe between forces and matter by explaining the two as simply manifestations of the same underlying structure.
At times this is a challenging text, but it is well worth the effort, and Wilczek is admirably clear in his explanations. Where the book falls short is in answering whether the world embodies beautiful ideas. Although Wilczek says the answer is a resounding “yes!”, he never provides a convincing discussion of what “beauty” means here, or of whether the physical laws satisfy it.
The links between seemingly disparate phenomena and the symmetries inherent in the equations are profound, and the scientists who devise and confirm these mathematical descriptions are extraordinarily creative, but is this necessarily beauty? Nature is undeniably economical in the rules she uses to mould the universe, but is this minimalism necessarily beautiful? Does Wilczek think that beauty is something beyond order, symmetry and simplicity? If not, why not simply use those terms for the laws of the universe? And if so, the onus is on him to lay out clearly why he believes this to be the case.





---

The Man Who Invented Modern Probability

2 September 2013

By Slava Gerovitch

If two statisticians were to lose each other in an infinite forest, the first thing they would do is get drunk. That way, they would walk more or less randomly, which would give them the best chance of finding each other. However, the statisticians should stay sober if they want to pick mushrooms. Stumbling around drunk and without purpose would reduce the area of exploration, and make it more likely that the seekers would return to the same spot, where the mushrooms are already gone.

Such considerations belong to the statistical theory of “random walk” or “drunkard’s walk,” in which the future depends only on the present and not the past. Today, random walk is used to model share prices, molecular diffusion, neural activity, and population dynamics, among other processes. It is also thought to describe how “genetic drift” can result in a particular gene—say, for blue eye color—becoming prevalent in a population. Ironically, this theory, which ignores the past, has a rather rich history of its own. It is one of the many intellectual innovations dreamed up by Andrei Kolmogorov, a mathematician of startling breadth and ability who revolutionized the role of the unlikely in mathematics, while carefully negotiating the shifting probabilities of political and academic life in Soviet Russia.
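As a rough illustration of the random-walk idea (my own Python sketch, not something from Gerovitch’s article), the simulation below shows its signature behaviour: after n random steps the walker’s typical distance from the start grows only like the square root of n, which is why the drunken wanderer keeps re-covering the same ground.

```python
import math
import random

def mean_distance(n_steps, trials=2000):
    """Average distance from the start after a 2-D random walk of n_steps unit steps."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(n_steps):
            angle = random.uniform(0.0, 2.0 * math.pi)  # step in a uniformly random direction
            x += math.cos(angle)
            y += math.sin(angle)
        total += math.hypot(x, y)
    return total / trials

for n in (100, 400, 1600):
    # Quadrupling the number of steps only roughly doubles the typical spread (~ sqrt(n)).
    print(n, round(mean_distance(n), 1))
```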

As a young man, Kolmogorov was nourished by the intellectual ferment of post-revolutionary Moscow, where literary experimentation, the artistic avant-garde, and radical new scientific ideas were in the air. In the early 1920s, as a 17-year-old history student, he presented a paper to a group of his peers at Moscow University, offering an unconventional statistical analysis of the lives of medieval Russians. It found, for example, that the tax levied on villages was usually a whole number, while taxes on individual households were often expressed as fractions. The paper concluded, controversially for the time, that taxes were imposed on whole villages and then split among the households, rather than imposed on households and accumulated by village. “You have found only one proof,” was his professor’s acid observation. “That is not enough for a historian. You need at least five proofs.” At that moment, Kolmogorov decided to change his concentration to mathematics, where one proof would suffice.

It is oddly appropriate that a chance event drove Kolmogorov into the arms of probability theory, which at the time was a maligned sub-discipline of mathematics. Pre-modern societies often viewed chance as an expression of the gods’ will; in ancient Egypt and classical Greece, throwing dice was seen as a reliable method of divination and fortune telling. By the early 19th century, European mathematicians had developed techniques for calculating odds, and distilled probability to the ratio of the number of favorable cases to the number of all equally probable cases. But this approach suffered from circularity—probability was defined in terms of equally probable cases—and only worked for systems with a finite number of possible outcomes. It could not handle countable infinity (such as a game of dice with infinitely many faces) or a continuum (such as a game with a spherical die, where each point on the sphere represents a possible outcome). Attempts to grapple with such situations produced contradictory results, and earned probability a bad reputation.

Reputation and renown were qualities that Kolmogorov prized. After switching his major, Kolmogorov was initially drawn into the devoted mathematical circle surrounding Nikolai Luzin, a charismatic teacher at Moscow University. Luzin’s disciples nicknamed the group “Luzitania,” a pun on their professor’s name and the famous British ship that had sunk in the First World War. They were united by a “joint beating of hearts,” as Kolmogorov described it, gathering after class to exalt or eviscerate new mathematical innovations. They mocked partial differential equations as “partial irreverential equations” and finite differences as “fine night differences.” The theory of probability, lacking solid theoretical foundations and burdened with paradoxes, was jokingly called the “theory of misfortune.”

It was through Luzitania that Kolmogorov’s evaluation of probability took on a more personal turn. By the 1930s, the onset of Stalinist terror meant anyone could expect a nighttime knock on the door by the secret police, and blind chance seemed to rule people’s lives. Paralyzed by fear, many Russians felt compelled to participate in denunciations, hoping to increase their chance of survival. Bolshevik activists among the mathematicians, including Luzin’s former students, accused Luzin of political disloyalty and castigated him for publishing in foreign journals. Kolmogorov, having published abroad himself, may have realized his own vulnerability. He had already displayed an apparent readiness to make political compromises for the sake of his career, accepting a position as a research institute director when his predecessor was imprisoned by the Bolshevik regime for supporting religious freedom. Now Kolmogorov joined the critics and turned against Luzin. Luzin was subjected to a show trial by the Academy of Sciences and lost all his official positions, but surprisingly escaped being arrested and shot. Luzitania was gone, sunk by its own crew.

The moral dimension of Kolmogorov’s decision aside, he had played the odds successfully and gained the freedom to continue his work. In the face of his own political conformity, Kolmogorov presented a radical and, ultimately, foundational revision of probability theory. He relied on measure theory, a fashionable import to Russia from France. Measure theory is a generalization of the ideas of “length,” “area,” and “volume,” allowing the measure of various weird mathematical objects to be taken when conventional means do not suffice. For example, it can handle a square with an infinite number of holes in it, cut into an infinite number of pieces and scattered over an infinite plane. In measure theory, it is still possible to speak of the “area” (measure) of this scattered object.

Kolmogorov drew analogies between probability and measure, resulting in five axioms, now usually formulated in six statements, that made probability a respectable part of mathematical analysis. The most basic notion of Kolmogorov’s theory was the “elementary event,” the outcome of a single experiment, like tossing a coin. All elementary events formed a “sample space,” the set of all possible outcomes. For lightning strikes in Massachusetts, for example, the sample space would consist of all the points in the state where lightning could hit. A random event was defined as a “measurable set” in a sample space, and the probability of a random event as the “measure” of this set. For example, the probability that lightning would hit Boston would depend only on the area (“measure”) of the city. Two events occurring together could be represented by the intersection of the corresponding sets; conditional probabilities by a ratio of measures; and the probability that one of two incompatible events would occur by adding measures (that is, the probability that either Boston or Cambridge would be hit by lightning equals the sum of their areas).
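In modern notation, the axioms being described can be stated compactly (a standard textbook summary rather than Kolmogorov’s original wording):

```latex
% Kolmogorov's axioms in their usual modern form: Omega is the sample space,
% F a collection of measurable subsets ("events"), and P the probability measure.
\begin{align}
  & P(A) \ge 0 \quad \text{for every event } A \in \mathcal{F}, \\
  & P(\Omega) = 1, \\
  & P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
    \quad \text{for pairwise disjoint } A_1, A_2, \ldots
\end{align}
% The rules mentioned in the text follow from these, e.g. the conditional probability
% \( P(A \mid B) = P(A \cap B)/P(B) \) when \( P(B) > 0 \), and additivity for
% incompatible events: \( P(A \cup B) = P(A) + P(B) \) when \( A \cap B = \emptyset \).
```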

The Paradox of the Great Circle was a major mathematical conundrum that Kolmogorov’s conception of probability finally put to rest. Assume aliens landed randomly on a perfectly spherical Earth, with the landing probability distributed evenly over the surface. Does this mean that they would be equally likely to land anywhere along any circle that divides the sphere into two equal hemispheres, known as a “great circle”? It turns out that the landing probability is evenly distributed along the equator, but unevenly distributed along the meridians, with the probability increasing toward the equator and decreasing toward the poles. In other words, the aliens would tend to land in hotter climates. This strange finding might be explained by the circles of latitude getting bigger as they approach the equator—yet the result seems absurd, since we can rotate the sphere and turn its equator into a meridian. Kolmogorov showed that a great circle has measure zero: it is a curve, and a curve has no area. This resolves the apparent contradiction in conditional landing probabilities by showing that these probabilities cannot be rigorously calculated.

Having crossed from the very real world of Stalinist purges into the ephemeral zone of zero-measure conditional probabilities, Kolmogorov was soon plunged back into reality. During the Second World War, the Russian government asked Kolmogorov to develop methods for increasing the effectiveness of artillery fire. He showed that, instead of trying to maximize the probability of each shot hitting its target, in certain cases it would be better to fire a fusillade with small deviations from perfect aim, a tactic known as “artificial dispersion.” The Moscow University Department of Probability Theory, of which he had become the head, also calculated ballistic tables for low-altitude, low-speed bombing. In 1944 and 1945, the government awarded Kolmogorov two Orders of Lenin for his wartime contributions, and after the war, he served as a mathematics consultant for the thermonuclear weapons program.

But Kolmogorov’s interests inclined him in more philosophical directions, too. Mathematics had led him to believe that the world was both driven by chance and fundamentally ordered according to the laws of probability. He often reflected on the role of the unlikely in human affairs. Kolmogorov’s chance meeting with fellow mathematician Pavel Alexandrov on a canoeing trip in 1929 began an intimate, lifelong friendship. In one of the long, frank letters they exchanged, Alexandrov chastised Kolmogorov for the latter’s interest in talking to strangers on the train, implying that such encounters were too superficial to offer insight into a person’s real character. Kolmogorov objected, taking a radical probabilistic view of social interactions in which people acted as statistical samples of larger groups. “An individual tends to absorb the surrounding spirit and to radiate the acquired lifestyle and worldview to anyone around, not just to a select friend,” he wrote back to Alexandrov.

Music and literature were deeply important to Kolmogorov, who believed he could analyze them probabilistically to gain insight into the inner workings of the human mind. He was a cultural elitist who believed in a hierarchy of artistic values. At the pinnacle were the writings of Goethe, Pushkin, and Thomas Mann, alongside the compositions of Bach, Vivaldi, Mozart, and Beethoven—works whose enduring value resembled eternal mathematical truths. Kolmogorov stressed that every true work of art was a unique creation, something unlikely by definition, something outside the realm of simple statistical regularity. “Is it possible to include [Tolstoy’s War and Peace] in a reasonable way into the set of ‘all possible novels’ and further to postulate the existence of a certain probability distribution in this set?” he asked, sarcastically, in a 1965 article.

Yet he longed to find the key to understanding the nature of artistic creativity. In 1960 Kolmogorov armed a group of researchers with electromechanical calculators and charged them with the task of calculating the rhythmical structures of Russian poetry. Kolmogorov was particularly interested in the deviation of actual rhythms from classical meters. In traditional poetics, the iambic meter is a rhythm consisting of an unstressed syllable followed by a stressed syllable. But in practice, this rule is rarely obeyed. In Pushkin’s Eugene Onegin, the most famous classical iambic poem in the Russian language, almost three-fourths of its 5,300 lines violate the definition of the iambic meter, and more than a fifth of all even syllables are unstressed. Kolmogorov believed that the frequency of stress deviation from the classical meters offered an objective “statistical portrait” of a poet. An unlikely pattern of stresses, he thought, indicated artistic inventiveness and expression. Studying Pushkin, Pasternak, and other Russian poets, Kolmogorov argued that they had manipulated meters to give “general coloration” to their poems or passages.

To measure the artistic merit of texts, Kolmogorov also employed a letter-guessing method to evaluate the entropy of natural language. In information theory, entropy is a measure of uncertainty or unpredictability, corresponding to the information content of a message: the more unpredictable the message, the more information it carries. Kolmogorov turned entropy into a measure of artistic originality. His group conducted a series of experiments, showing volunteers a fragment of Russian prose or poetry and asking them to guess the next letter, then the next, and so on. Kolmogorov privately remarked that, from the viewpoint of information theory, Soviet newspapers were less informative than poetry, since political discourse employed a large number of stock phrases and was highly predictable in its content. The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.
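The letter-guessing experiments themselves required human subjects, but a much cruder cousin of the idea is easy to demonstrate (my illustration, not Kolmogorov’s method): estimate the entropy of a string from its letter frequencies and compare a highly repetitive sequence with ordinary prose.

```python
import math
from collections import Counter

def unigram_entropy(text):
    """Shannon entropy in bits per character, estimated from single-letter frequencies.

    A crude stand-in for Kolmogorov's guessing experiments, which probed far deeper
    regularities than raw letter counts.
    """
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(unigram_entropy("ab" * 40))  # exactly 1.0 bit per character: maximally repetitive
print(round(unigram_entropy("the quick brown fox jumps over the lazy dog"), 2))  # roughly 4 bits
```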

Kolmogorov scorned the idea of placing War and Peace in a probabilistic sample space of all novels—but he could express its unpredictability by calculating its complexity. Kolmogorov conceived complexity as the length of the shortest description of an object, or the length of an algorithm that produces the object. Deterministic objects are simple, in the sense that they can be produced by a short algorithm: say, a periodic sequence of zeroes and ones. Truly random, unpredictable objects are complex: any algorithm reproducing them would have to be as long as the objects themselves. For example, irrational numbers—those that cannot be written as fractions—almost surely have no pattern in the digits that appear after the decimal point. Therefore, most irrational numbers are complex objects, because they can be reproduced only by writing out the actual sequence. This understanding of complexity fits with the intuitive notion that there is no method or algorithm that can predict random objects. It is now crucial as a measure of the computational resources needed to specify an object, and finds multiple applications in modern-day network routing, sorting algorithms, and data compression.
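True Kolmogorov complexity is uncomputable, but a common classroom proxy (my sketch, not anything from the article) is a general-purpose compressor: a patterned string can be regenerated by a short program and so compresses well, while a random string barely compresses at all.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a rough upper bound on descriptive complexity."""
    return len(zlib.compress(data, 9))

periodic = b"01" * 5000           # highly patterned: a tiny program could print it
random_bytes = os.urandom(10000)  # effectively incompressible

print(compressed_size(periodic))      # a few dozen bytes for 10,000 bytes of input
print(compressed_size(random_bytes))  # close to 10,000 bytes: no shorter description found
```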

By Kolmogorov’s own measure, his life was a complex one. By the time he died in 1987, at the age of 84, he had not only weathered a revolution, two World Wars, and the Cold War; his innovations had also left few mathematical fields untouched, and extended well beyond the confines of academe. Whether his random walk through life was of the inebriated or mushroom-picking variety, its twists and turns were neither particularly predictable nor easily described. His success at capturing and applying the unlikely had rehabilitated probability theory, and had created a terra firma for countless scientific and engineering projects. But his theory also amplified the tension between human intuition about unpredictability and the apparent power of the mathematical apparatus to describe it.

For Kolmogorov, his ideas neither eliminated chance, nor affirmed a fundamental uncertainty about our world; they simply provided a rigorous language to talk about what cannot be known for certain. The notion of “absolute randomness” made no more sense than “absolute determinism,” he once remarked, concluding, “We can’t have positive knowledge of the existence of the unknowable.” Thanks to Kolmogorov, though, we can explain when and why we don’t.






---

James Q. Wilson and the Defense of Moral Judgment

12 August 2013


By Sally Satel

Twenty years ago, James Q. Wilson powerfully articulated the idea that humans’ moral sense is innate, not learned.

This summer marks the twentieth anniversary of James Q. Wilson’s The Moral Sense. Written in a time of creeping moral relativism, the book is a defense of judgment — and, in particular, of humans’ natural predisposition to form moral assessments.

One purpose of The Moral Sense was, as Wilson put it, “to help people recover the confidence with which they once spoke about virtue and morality.” The other goal was to trace the origins of human morality. Summoning an array of anthropological evidence, Wilson elaborated on the idea that our moral sense is innate, acquired not through learning but through evolution. These sentiments do not spring to life fully formed; instead, they are cultivated within family and society. Adam Smith and Thomas Jefferson had advanced the idea of an inborn moral affinity, but Wilson enlarged upon it, proposing that the moral sense rests upon four foundational pillars: sympathy, fairness, self-control, and duty.

Latest Findings from the Lab

How would Wilson’s argument be evaluated today? In the last 20 years, the field of moral psychology has changed radically, and largely along the lines Wilson laid out. In the 1980s, researchers emphasized moral reasoning and were skeptical of the idea that morality was in part innate. But nowadays, developmental psychologists such as Paul Bloom and Karen Wynn at Yale, who have studied the minds of babies, have identified inborn predilections at a very early age. Through dozens of studies, they have demonstrated the capacity of infants to make certain types of moral judgments, such as distinguishing between actors and circumstances that adults would recognize as good and bad, kind and cruel, equal and unfair. In one series of studies, for example, Bloom and Wynn found that toddlers who watched a puppet show in which one puppet “stole” a ball and another puppet returned the ball to its rightful owner were far more likely to give candy to the “helper” puppet than to the “bad” puppet and also to take candy away from the “bad” puppet.

Wilson has also been largely vindicated on the notion of moral foundations. NYU psychologist Jonathan Haidt and colleagues, for example, have identified six “moral foundations” that they say are like the innate “taste buds” of the moral sense. They are: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, and sanctity/degradation. Just as people develop their culinary senses within their family and culture, Haidt argues, people develop a particular morality within their families and cultures as well. But whatever variations culture proposes must be consistent with those innate taste buds.

Wilson, recall, named four foundations: sympathy, fairness, self-control, and duty. Sympathy clearly corresponds with Haidt’s care/harm foundation, which stems from the attachment systems we share with other primates and underlies virtues of kindness, gentleness, and nurturance. Wilson’s fairness matches Haidt’s fairness – the concern can be traced to an evolutionarily preserved process of reciprocal altruism (“you scratch my back,” etc.) which gives rise to ideas of justice, proportionality, and rights. Haidt’s authority/subversion and loyalty/betrayal axes would seem to relate to Wilson’s duty. Self-control, the only one of Wilson’s moral foundations that does not fit neatly into Haidt’s scheme, does, however, have some overlap with Haidt’s sanctity/degradation foundation – that one is partly about treating the body as a temple, and resisting one’s carnal urges.

Controversially, Wilson hypothesized that men and women differ in their moral orientation, with men more inclined to emphasize justice and emotional control and women more likely to express sympathy, caring, and cooperation. When Haidt and his team looked at whether the moral sense differed by sex, however, they found only small differences. Yet when they analyzed the data according to ideological affiliation, the differences in moral orientation were striking. Individuals who call themselves liberal tend to value care and fairness most highly, whereas self-identified conservatives tend to value all six of the moral foundations equally. Meanwhile, according to a 2009 Gallup poll, men are about 30 percent more likely than women to say they are Republicans, regardless of age (41 percent versus 32 percent), which supports the pattern that Wilson hypothesized and Haidt discovered.

Wilson’s commitment to the idea of an inborn moral substrate is largely consistent with the work of experimental psychologists, yet he was leery of the social implications of biological determinism. As he asked in a 2010 article in National Affairs, will “understanding human behavior at the level of genetics and neurobiology make it unreasonable or impossible to hold people accountable for what they do?” No, he said. And rightly so, in my view.

Biological Explanations of Behavior and Virtue Co-exist

It is only natural that advances in knowledge about the brain would make us think more mechanistically about ourselves. Although we generally think of ourselves as free agents who make choices, a number of prominent scholars claim that we are mistaken. “Our growing knowledge about the brain makes the notions of volition, culpability, and, ultimately, the very premise of the criminal justice system, deeply suspect,” contends Stanford University biologist Robert Sapolsky. “Progress in understanding the chemical basis of behavior will make it increasingly untenable to retain a belief in the concept of free will,” writes biologist Anthony R. Cashmore.

Philosopher-neuroscientist Joshua Greene and psychologist Jonathan Cohen contend that neuroscience has a special role to play in giving age-old arguments about free will more rhetorical bite. “New neuroscience will affect the way we view the law, not by furnishing us with new ideas or arguments about the nature of human action, but by breathing new life into old ones,” they write. “[It] can help us see that all behavior is mechanical, that all behavior is produced by chains of physical events that ultimately reach back to forces beyond the agent’s control,” Greene adds. Other neuroscientists hope to see a general attitude “shift from blame to biology.”

In some instances, such a move may be warranted, but to date, brain-based findings cannot distinguish between criminal impulses that are irresistible and those which are not resisted, difficult as resistance may sometimes be. Indeed, the relationship between brain-based explanations of behavior and what they mean for holding that person responsible is by no means straightforward. To be sure, everyone agrees that people can be held accountable only if they have freedom of choice. But there is a longstanding metaphysical debate about the kind of freedom that is necessary. Some contend that we can be held accountable as long as we are able to engage in conscious deliberation, follow rules, and generally control ourselves. That a long chain of physical causes that are beyond our control lead up to a crime does not undermine the law’s capacity, and duty, to blame and punish.

Others, like Sapolsky, Cashmore, Greene, and Cohen, seem to disagree, insisting that our deliberations and decisions do not make us free because they are dictated by neuronal circumstances. They hope that as the general public becomes more familiar with the latest discoveries about the workings of the brain, it will inevitably come to accept their view on moral agency. In turn, they predict, we’ll be compelled to adopt a strictly utilitarian model of justice dedicated solely to preventing crime through deterrence, incapacitation, and rehabilitation.

As Wilson understood, this free-will question remains one of the great conceptual impasses of all time, far beyond the capacity of brain science to resolve. Unless, that is, investigators can show something truly spectacular: that people are not conscious beings whose actions flow from reasons and who are responsive to reason.

True, we do not exert as much conscious control over our actions as we think we do. Every student of the mind, beginning most notably with William James and Sigmund Freud, knows this. But it does not mean we are powerless. Indeed, deliberative reasoning is a crucial aspect of moral psychology, a fact that is too often downplayed in our “Blink-ified” culture.

The belief that discoveries in neuroscience will threaten morality seems unrealistic. After all, the high degree of consensus across cultures regarding the value of proportionate punishment suggests that human intuitions about fairness and justice are deeply entrenched. That babies too young to have absorbed social rules from their parents behave as if guided by these foundations bolsters the view that reciprocity, proportionality, and the impulse to punish violators are so deeply rooted in evolution, psychology, and culture that new neuroscientific revelations are unlikely to dislodge them easily, if at all.

This is not because people are immune to change. On the contrary, attitudes can shift over time, and recent history bears this out. Within the last two centuries alone, we have witnessed profound moral transformations, ranging from the abolition of slavery to legal protections against racial and sexual inequality and to the endorsement of same-sex marriage by millions. Yet these milestones of moral progress would not have come about at all but for the universal human hunger for fairness and justice.

By failing to reflect the moral values of the citizenry, which encompass fair punishment, the law would lose some, if not most, of its authority. What’s more, a blameless world would be a very chilly place, inhospitable to the warming sentiments of forgiveness, redemption, and gratitude. In a milieu where no individuals are accountable for their actions, the so-called moral emotions would be unintelligible. If we no longer brand certain actions as blameworthy and punish transgressors in proportion to their crimes, we forgo precious opportunities to reaffirm the dignity of their victims and to inculcate a shared vision of a just society. In the words of Wilson, “if we allow ourselves to think that explaining behavior justifies [them] … virtue then becomes just as meaningless as depravity — a state of affairs in which no society could hope to remain ordered or healthy.”






---

Unhappy Truckers and Other Algorithmic Problems

1 August 2013

By Tom Vanderbilt


When Bob Santilli, a senior project manager at UPS, was invited to his daughter’s fifth-grade class on Career Day in 2009, he struggled with how to describe exactly what he did for a living. Eventually, he decided he would show the class a travel optimization problem of the kind he worked on, and impress them with how fun and complex it was. The challenge was to choose the most efficient route among six stops on a typical suburban-errands itinerary. The class devised their respective routes, then began picking them over. But one girl thought past the question of efficiency. “She says, my mom would never go to the store and buy perishable things—she didn’t use the word perishable, I did—and leave it in the car the whole day at work,” Santilli tells me.

Her comment reflects a basic truth about the math that runs underneath the surface of nearly every modern transportation system, from bike-share rebalancing to airline crew scheduling to grocery delivery services. Modeling a simplified version of a transportation problem presents one set of challenges (and they can be significant). But modeling the real world, with constraints like melting ice cream and idiosyncratic human behavior, is often where the real challenge lies. As mathematicians, operations research specialists, and corporate executives set out to mathematize and optimize the transportation networks that interconnect our modern world, they are re-discovering some of our most human quirks and capabilities. They are finding that their job is as much to discover the world, as it is to change it.

The problem that Santilli posed to his daughter’s class is known as a traveling salesman problem. Algorithms solving this problem are among the most important and most commonly implemented in the transportation industry. Generally speaking, the traveling salesman problem asks: Given a list of stops, what is the most time-efficient way for a salesman to make all those stops? In 1962, for example, a Procter and Gamble advertisement tasked readers with such a challenge: To help “Toody and Muldoon,” co-stars of the Emmy-award-winning television show Car 54, Where Are You?, devise a 33-city trip across the continental United States. “You should plan a route for them from location to location,” went the instructions, “which will result in the shortest total mileage from Chicago, Illinois, back to Chicago, Illinois.”

A mathematician claimed the prize, and a regal $10,000. But the contest organizers could only verify that his solution was the shortest of those submitted, and not that it was the shortest possible route. That’s because solving a 33-city problem by calculating every route individually would require 28 trillion years—on the Department of Energy’s 129,000-core supercomputer Roadrunner (which is among the world’s fastest clusters). It’s for this reason that William J. Cook, in his book In Pursuit of the Traveling Salesman, calls the traveling salesman problem “the focal point of a larger debate on the nature of complexity and possible limits to human knowledge.” Its defining characteristic is how quickly the complexity scales. A six-city tour has only 720 possible paths, while a 20-city tour has—by Cook’s quick calculations on his Mac—more than 100 quadrillion possible paths.
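The factorial blow-up Cook describes is easy to see directly. The sketch below (my own illustration, not Cook’s code) brute-forces a tiny instance by checking every ordering; with the starting city fixed, n cities already mean (n − 1)! candidate tours, which is why this approach collapses long before 33 cities.

```python
import itertools
import math
import random

def tour_length(order, dist):
    """Total length of a round trip visiting the cities in the given order."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

def brute_force_tsp(dist):
    """Check every ordering with city 0 fixed as the start; feasible only for tiny instances."""
    rest = min(itertools.permutations(range(1, len(dist))),
               key=lambda perm: tour_length((0, *perm), dist))
    return (0, *rest), tour_length((0, *rest), dist)

n = 8  # (n - 1)! = 5,040 tours to check; at n = 33 it would be 32! (about 2.6e35)
points = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.dist(p, q) for q in points] for p in points]
order, length = brute_force_tsp(dist)
print(order, round(length, 3))
```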


There are answers to some traveling salesman problems. Cook himself has produced an iPhone app that will crack 100 cities, using relaxed linear programming and other algorithmic techniques. And every few years or so, teams armed with sophisticated hardware and programming approaches set the bar higher. In 2006, for example, a team led by Cook produced an optimal tour for an 85,900-city problem. Given the computing constraints mentioned above, it did not, of course, involve checking each route individually. “There is no hope to actually list all the road trips between New York and Los Angeles,” he says. Instead, almost all of the computation went into proving that there is no tour shorter than the one his team found. In essence, there is an answer, but there is not a solution. “By solution,” writes Cook, “we mean an algorithm, that is a step-by-step recipe for producing an optimal tour for any example we may now throw at it.”

And that solution may never come. The traveling salesman problem is at the heart of an ongoing question—the question—in computer science: whether or not P equals NP. As summarized with blunt elegance by MIT’s news office, “roughly speaking, P is a set of relatively easy problems, NP is a set of incredibly hard problems, and if they’re equal, then a large number of computer science problems that seem to be incredibly hard are actually relatively easy.” The Clay Mathematics Institute offers a $1 million reward for settling the meta-problem hovering like a mothership over the Car 54 challenge and its ilk: proving that P does or does not equal NP.

By now it should be clear that we are not talking just about the routing needs of salesmen, for even the most trenchant of regional reps does not think about hitting 90,000 far-flung burghs on a call. But the Traveling Salesman Problem, and its intellectual cousins, are far from theoretical; indeed, they are at the invisible heart of our transportation networks. Every time you want to go somewhere, or you want something to get to you, the chances are someone is thinking at that very moment how to make that process more efficient. We are all of us traveling salesmen.

A position like Bob Santilli’s, performing optimization, didn’t always exist at UPS. Times used to be simpler. Until the early 1980s, UPS drivers used to have one simple goal: to get all the packages in their truck delivered by the end of the day. Those were the “ground” days. “The only thing we had was time sensitivity for commercial drivers,” notes Santilli. Jeff Winters, who heads operations research for UPS, says “everything was a human-scalable problem to solve.” And it had to be. “We had individual carload diagrams for drivers every day,” says Santilli (UPS calls its brown delivery vans “cars”). On paper. Routes were laid out via pushpins on a map. At the end of the day, everything was filed, and the process began again.

But then, in 1982, the world changed: Next-day air delivery was introduced. Suddenly, there was an increasing variety of time “commits”: packages had to be at one address by 10:30 a.m., another by 1:30 p.m., another by noon. There were new time constraints for package pickups as well. It was no longer just an optimal routing problem, but an optimal scheduling problem. And the one thing UPS suspected was that it was not doing things optimally. One imagines that within these walls there is no greater sin. Even the insides of the vans are subjected to a kind of routing algorithm; the next time you get a package, look for a three-letter code, like “RDL.” That means “rear door left,” and it is there so the driver has to take as few steps as possible to locate the package. In one typical aside, Santilli told me that when a driver stops the van, he “has nine seconds to select a package and get out of there.” His tone suggested he was talking about a member of a bomb disposal unit.

In 1986, UPS purchased Roadnet, a logistics company that created optimal routes for businesses like beer distributors. There was just one problem: the drivers Roadnet worked with typically plied the same route every day or so, with few hard time constraints. And so UPS began what would become known as Project ORION (On-Road Integrated Optimization and Navigation), a project spanning many years and many millions of dollars, whose algorithmic efforts are just beginning to bear fruit in its fleet of nearly 58,000 trucks. At the core of ORION was the traveling salesman problem. Each UPS truck—which makes around 130 stops per day—is essentially a traveling salesman problem on wheels.

ORION’s promise was and is clear: Each mile saved per driver per day is worth roughly $30 million a year to UPS. The mathematics required to arrive at some solution to the traveling salesman problem, even if approximate, is also clear. But in trying to apply this mathematics to the real world of deliveries and drivers, UPS managers needed to learn that transportation is as much about people and the unique constraints they impose as it is about negotiating intersections and time zones. As Jeff Winters put it to me, “on the surface, it should be very easy to come up with an optimized route and give it to the driver, and you’re done. We thought that would take a year.” That was a decade ago.


For one thing, humans are irrational and prone to habit. When those habits are interrupted, interesting things happen. After the collapse of the I-35W bridge in Minnesota, for example, the number of travelers crossing the river unsurprisingly dropped; but even after the bridge was restored, researcher David Levinson has noted, traffic never returned to its previous levels. Habits can be particularly troublesome when planning fixed travel routes for people, like public buses, as noted in a paper titled “You Can Lead Travelers to the Bus Stop, But You Can’t Make Them Ride,” by Akshay Vij and Joan Walker of the University of California. “Traditional travel demand models assume that individuals are aware of the full range of alternatives at their disposal,” the paper reads, “and that a conscious choice is made based on a tradeoff between perceived costs and benefits.” But that is not necessarily so.

People are also emotional, and it turns out an unhappy truck driver can be trouble. Modern routing models incorporate whether a truck driver is happy or not—something he may not know about himself. For example, one major trucking company that declined to be named does “predictive analysis” on when drivers are at greater risk of being involved in a crash. Not only does the company have information on how the truck is being driven—speeding, hard-braking events, rapid lane changes—but on the life of the driver. “We actually have built into the model a number of indicators that could be surrogates for dissatisfaction,” said one employee familiar with the program.

This could be a change in a driver’s take-home pay, a life event like a death in the family or divorce, or something as subtle as a driver whose morning start time has been suddenly changed. The analysis takes into account everything the company’s engineers can think of, and then teases out which factors seem correlated to accident risk. Drivers who appear to be at highest risk are flagged. Then there are programs in place to ensure the driver’s manager will talk to a flagged driver.

In other words, the traveling salesman problem grows considerably more complex when you actually have to think about the happiness of the salesman. And not only do you have to know when he’s unhappy, you have to know whether your model might make him unhappy. Warren Powell, director of the Castle Laboratory at Princeton University’s Department of Operations Research and Financial Engineering, has optimized transportation companies from NetJets to Burlington Northern. He recalls how, at the Yellow Freight company, “we were doing things with drivers—they said, you just can’t do that.” There were union rules, there was industry practice. Tractors can be stored anywhere; humans like to go home at night. “I said we’re going to need a file with 2,000 rules. Trucks are simple; drivers are complicated.”

At UPS, a program could come up with a great route, but if it violated, say, Teamsters Union rules, it was worthless. For instance, time windows need to be built in for drivers’ breaks and lunches. And while the filmed testimonials of drivers I saw at UPS were largely positive in their description of ORION, it is interesting to note that the latest contract between the company and the union included a provision that no driver would be “discharged based solely on information received from GPS or any successor system.”

Powell’s biggest revelation in considering the role of humans in algorithms, though, was that humans can do it better. “I would go down to Yellow, we were trying to solve these big deterministic problems. We weren’t even close. I would sit and look at the dispatch center and think, how are they doing it?” That’s when he noticed: They are not trying to solve the whole week’s schedule at once. They’re doing it in pieces. “We humans have funny ways of solving problems that no one’s been able to articulate,” he says. Operations research people just punt and call it a “heuristic approach.”

This innate human ability was at work in Santilli’s daughter’s class, too. The fifth graders got it about right. As James MacGregor and Thomas Ormerod note, “the task of the traveling salesman problem may happen to parallel what it is natural for the perceptual system to do in any case when presented with an array of dots.” Curiously, using this heuristic approach, they note, subjects in experiments were “exceptionally good at finding the optimum tours.” In other experiments, when subjects were shown images of optimal tours, they were thought to be more aesthetically pleasing than sub-optimal tours.

Of course, even humans balk at understanding the behavior of other humans. Powell showed me a sample diagram he’ll sometimes give to a roomful of trucking executives. “I’ve got a load that has to be picked up. I’ve got a driver 60 miles away. All I have to do is dispatch him and it’s done. I’ve got another driver who, once he unloads, will be about 30 miles away—should be in around midafternoon.” Do you take the sure thing, even if it consumes more miles? Or do you take the risk on the shorter trip, which might get in later? When he asks for a show of hands on which choice is better, the response is typically mixed. “Driver B saves me miles. But he hasn’t arrived yet, and what if he doesn’t?” he says, his voice dropping to a conspiratorial whisper. “Different people will answer it differently. They’ll say, ‘I know that driver, he’ll make it.’ Welcome to real-world decision making.”

In the end, the best approach turns out to be building on what people are already doing. When Powell consulted with Schneider trucking a decade ago, curiously, he did not model some über-efficient idealized vision of what Schneider could be. He essentially modeled Schneider as it was. “Our objective wasn’t to get the best solution,” says Ted Gifford, a longtime operations research specialist at Schneider. “Our objective was to try to simulate what the real world planners were really doing.” When I suggest to Gifford that he’s trying to understand the real world, mathematically, he concurs, but adds: “The word ‘understand’ is too strong—we are happy to get positive outcomes.”

At UPS, too, allowing for human behavior in a quantitative way was key. “In the old business rule,” says UPS’s Winters, “we held the drivers accountable for how closely they follow the trace”—UPS speak for route. Even worse, he says, drivers, while not being given guidance, were being penalized—“coached and counseled”—for errors and delays, or “breaking trace.” This, he says, was a “dumb rule,” a hangover of legacy thinking inherited from the company’s origins as a ground-only delivery service. “The right thing to do,” Winters says, “is build your route with just your air work, and then look at expensive ground areas that you can do to eliminate the need to go back to that area. It turned a business problem on its head.”

Transport optimization, then, is hard, and understanding how to implement it can be harder still. “One of the hardest things to teach a math analytics group,” Santilli tells me, “is the difference between a feasible solution and an implementable solution. Feasible just means it meets all the math constraints. But implementable is something the human can carry out.”

But optimization has succeeded in coming a very long way. Modeling has become much more sophisticated, a development that Powell outlines in his life’s work, a book called Approximate Dynamic Programming: Solving the Curses of Dimensionality. “You went to the math literature, and it was all toy problems. It was literally about 20 years before I could go to a whiteboard and say, you know what, I can actually write down the problem.” Mapping has improved dramatically. A few decades ago, the mapping service UPS bought literally had people calling businesses to ask them if they actually were where the data UPS had suggested they were. Early GPS maps were also flawed. “When we got some bad results early on,” says Ranga Nuggehalli, UPS’s principal scientist, “we didn’t know whether it was because the algorithm was so bad, or our data was so bad.” It was the latter.

The data acquisition needed for tracking has since been developed. UPS’s “Delivery Information Acquisition Device,” or DIAD, the brown handheld computer upon which you have no doubt signed your name, creates a data architecture that underpins its optimization strategy.

And the end result has been one of slow but steady improvement. Powell and Santilli point to their own success stories. Yellow Freight used to have some 700 “end of lines,” Powell says, which are sorting terminals where cargo is transferred to its end customers. Powell developed a model that delivered a counterintuitive message: Trucks were traveling farther to get to the customer with so many terminals. Today, he says, Yellow Freight has 400 end of lines. “That was the right number,” he says. As for UPS, Santilli notes that a driver in Gettysburg, Pa. is now driving nearly 25 miles less per day, from an original route of more than 150 miles down to 126 miles—with the same number of stops.

We are getting smarter at moving things around. But the less obvious story is that we are getting a grasp on how to model the human element in transportation. Just as we are getting to know algorithms, algorithms are getting to know us.






---

Does life have a purpose?

4 July 2013

by Michael Ruse

One of my favourite dinosaurs is Stegosaurus, a monster from the late Jurassic (150 million years ago), noteworthy for the diamond-shaped plates running all the way down its back. Since this animal was discovered in Wyoming in the late 1870s, huge amounts of ink have been spilt trying to puzzle out the reason for the plates. The obvious explanation, that they were used for fighting or defence, simply cannot be true. The connection between the plates and the main body is far too fragile to function effectively in a battle to the death. Another explanation is that, like the stag’s antlers or the peacock’s tail, they played some sort of role in the mating game. Señor Stegosaurus with the best plates gets the harem and the other males have to do without. Unfortunately for this hypothesis, the females had the plates too, so that cannot be the explanation either. My favourite idea is that the plates were like the fins on the cooling towers of power stations: they were for heat transfer. In the cool of the morning, as the sun came up, they helped the animal to heat up quickly. In the middle of the day, especially when the vegetation consumed by the Stegosaurus was fermenting away in its belly, the plates would have helped to catch the wind and get rid of excess heat. A superb adaptation. (Sadly for me, this is no longer a favoured explanation, since the latest investigations suggest that the plates may have been a way for individuals to recognise each other as members of the same species.)

But this essay is not concerned with dinosaurs themselves so much as with the kind of thinking biologists use when they wonder how dinosaur bodies worked. They are asking: what was the purpose of the plates? What end did the plates serve? Were they for fighting? Were they for attracting mates? Were they for heat control? This kind of language is ‘teleological’ — from telos, the Greek for ‘end’. It is language about the purpose or goal of things, what Aristotle called their ‘final causes’, and it is something that the physical sciences have decisively rejected. There’s no sense for most scientists that a star is for anything, or that a molecule serves an end. But when we come to talk about living things, it seems very hard to shake off the idea that they have purposes and goals, which are served by the ways they have evolved.

As I have written about before in Aeon, the chemist James Lovelock got into very hot water with his fellow scientists when he wanted to talk about the Earth being an organism (the Gaia hypothesis) and its parts having purposes: that sea lagoons were for evaporating unneeded salt out of the ocean, for instance. And as Steven Poole wrote in his essay ‘Your point is?’ in Aeon earlier this year, the contemporary philosopher Thomas Nagel is also in hot water since he suggested in his book Mind and Cosmos (2012) that we need to use teleological understanding to explain the nature of life and its evolution.

Some have thought that this lingering teleological language is a sign that biology is not a real science at all, but just a collection of observations and facts. Others argue that the apparent purposefulness of nature leaves room for God. Immanuel Kant declared that you cannot do biology without thinking in terms of function, of final causes: ‘There will never be a Newton for a blade of grass,’ he claimed in Critique of Judgment (1790), meaning that living things are simply not determined by the laws of nature in the way that non-living things are, and we need the language of purpose in order to explain the organic world.

Why do we still talk about organisms and their features in this way? Is biology basically different from the other sciences because living things do have purposes and ends? Or has biology simply failed to get rid of some old-fashioned, unscientific thinking — thinking that even leaves the door ajar for those who want to sneak God back into science?

Biology’s entanglement with teleology reaches right back to the ancient Greek world. In Plato’s dialogue the Phaedo, Socrates describes himself as he sits awaiting his fate, and he asks whether this can be fully explained mechanically ‘because my body is made up of bones and muscles; and the bones… are hard and have joints which divide them, and the muscles are elastic, and they cover the bones’. All of this, says Socrates, is not ‘the true cause’ of why he sits where and how he does. The true cause is that ‘the Athenians have thought fit to condemn me and I have thought it better and more right to remain here and undergo my sentence’. Socrates describes this as a confusion of causes and conditions: he cannot sit without his bones and muscles being as they are, but this is no real explanation of why he sits thus. In the Timaeus Plato develops this further, describing a universe brought into being by a designer (what Plato called the Demiurge). An enquiry into the purpose of the bones and muscles was not only an enquiry into the ways of men, but ultimately an enquiry into the plans of the Demiurge.

Aristotle, Plato’s student, didn’t want God in the business of biology like this. He believed in a God, but not one that cared about the universe and its inhabitants. (Rather like some junior members of my family, this God spent Its time thinking mostly of Its own importance.) However, Aristotle was very interested in final causes, and argued that all living things contain forces that direct them towards their goal. These life forces operate in the here and now, yet in some sense they have the future in mind. They animate the acorn in order that it might turn into an oak, and likewise for other living things. Like Plato, Aristotle used the metaphor of design, but unlike Plato he wanted to keep any supervisory, conscious intelligence out of the game.

All of this came crashing down during the Scientific Revolution of the 16th and 17th centuries. For both Plato and Aristotle, the question of final causes had applied as much to physical phenomena (the stars, for example) as to biological ones. Both thought of objects as being rather like organisms. Why does the stone fall? Because, being made of the element earth, it wants to find its proper place, namely as close to the centre of the Earth as possible. It falls in order to achieve its right end: it wants to fall.

Now, however, the governing metaphors of nature changed. No longer did scientists think in terms of organisms: they thought in terms of machines. The world, the universe, is like a gigantic clock. As the 17th-century French philosopher-scientist René Descartes insisted, the human body is nothing but an intricate machine. The heart is like a pump, and the arms and legs are a system of levers and pulleys and so forth. The 17th-century English chemist and philosopher Robert Boyle realised that as soon as you start to think in the mechanical fashion, then talking about ends and purposes really isn’t very helpful. A planet goes round and round the Sun; you want to know the mechanism by which it happens, not to imagine some higher purpose for it. In the same way, when you look at a clock you want to know what makes the hands go round the dial — you want the proximate causes.

But surely machines have purposes just as much as organisms do? The clock exists in order to tell the time just as much as the eye exists in order to see. True, but as Boyle also saw, it is one thing to talk about intentions and purposes in a general, perhaps theological way, and quite another to do so as part of science. You can take the Platonic route and talk about God’s creative intentions for the universe; that’s fine. But, really, this is no longer part of science (if it ever was) and has little explanatory power. In the words of EJ Dijksterhuis, one of the great historians of the Scientific Revolution, God now became a ‘retired engineer’.

On the other hand, if you wanted to take the Aristotelian approach and explain the growth and development of individual organisms by special vital forces, that was still theoretically possible. But since no one, as Boyle pointed out, seemed to have the slightest clue about these vital forces or what they did, he and his fellow mechanists just wanted to drop the idea altogether and get on with the job of finding proximate causes for all natural phenomena. The organic metaphor did not lead to new predictions or the other things one wants from science, especially technological promise. The machine metaphor did.

Yet even Boyle realised that it is very hard to get rid of final-cause thinking when it comes to studying actual organisms, rather than merely using them as metaphors for the rest of the physical world. He was particularly interested in bats, and spent considerable time discussing their adaptations: how their wings were so well organised for flying, and so on. In fact, almost paradoxically, in the 18th century students of living things became more interested in teleology, even as the physical sciences were turning away from it.

The expansion of historical thinking played a key role here. History no longer seemed static and determined, and the belief that humans could make things better through their own unaided efforts meant that there was no longer a need to appeal to Providence for help. This secular ideal (or ideology) of progress put talk of ends and directional change very much in the air. If we as a society aim for certain ends, let us say an improved standard of living or education, could it be that history itself has ends too — ends that are not dictated so much by the Christian religion (judgment and salvation or condemnation) but that come as part of a general end-directed force or movement? Could life, and human history, be directed upward and forward from within?

In the 19th century, alongside philosophers and historians such as Hegel, natural historians began to speculate about organisms in proto-evolutionary ways, and to talk of goals (usually, one admits, goals involving the arrival of the best of all possible organisms, namely Homo sapiens). Here is a passage from ‘The Temple of Nature’ (1803) by Erasmus Darwin, Charles Darwin’s physician grandfather:
Organic Life beneath the shoreless waves
Was born and nurs’d in Ocean’s pearly caves;
First forms minute, unseen by spheric glass,
Move on the mud, or pierce the watery mass;
These, as successive generations bloom,
New powers acquire, and larger limbs assume;

The lordly Lion, monarch of the plain,
The Eagle soaring in the realms of air,
Whose eye undazzled drinks the solar glare,
Imperious man, who rules the bestial crowd,
Of language, reason, and reflection proud,
With brow erect who scorns this earthy sod,
And styles himself the image of his God;
Arose from rudiments of form and sense,
An embryon point, or microscopic ens!

In the writings of some of the early evolutionists, notably the French biologist Jean-Baptiste Lamarck, we get a strong odour of Aristotelian vital forces pushing life up the ladder to the preordained destination of humankind. No longer was teleological language confined to the purpose of individual organisms and organs such as the hand or the acorn, but now it seemed to explain a general direction for the development of life itself.

It was in this atmosphere of fascination with the history of life that Charles Darwin developed his theory of natural selection. Darwin’s On the Origin of Species (1859) was the watershed. He nailed the question of individual final causes by explaining why organisms are so well adapted to their environments. Teleological language was appropriate because such features as eyes and hands were not designed, but design-like. The eye is like a telescope; Stegosaur plates are like the fins you find on cooling towers. So we can ask about purposes. (Of course, questions about the dinosaur could not have been Darwin’s own: when the Origin was published, Stegosaurus still slumbered undiscovered in the rocks of the American West.)

Natural selection explained how design-like features could arise, without a designer or a purpose. There need not be any final cause. There is a struggle for existence among organisms, or more precisely a struggle for reproduction. Some will survive and reproduce, and others will not. Because there are variations in populations, and new variations are always arriving, on average those surviving will differ from those not surviving in ways that will have contributed to their greater success. Over time, this adds up to change in the direction of adaptation, of design-like features. No God is needed (even if he exists, he works at arm’s length), and neither are any vital forces. Just plain old laws working in a good mechanical fashion. The teleological metaphor was just a metaphor: underneath it lay quite simple mechanical explanations.
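For readers who like to see that logic spelt out, here is a minimal sketch in Python, offered purely as an illustration rather than as biology: the numerical ‘trait’, the fitness rule and the mutation size are invented assumptions, and the only point is that heritable variation plus differential reproduction shifts a population towards better-adapted forms without any goal being written in.

import random

# Toy illustration (invented assumptions): a heritable numeric trait, an
# arbitrary environmental optimum, and reproduction weighted by closeness
# to that optimum.

def fitness(trait, optimum=10.0):
    # Individuals whose trait lies nearer the optimum leave more offspring on average.
    return 1.0 / (1.0 + abs(trait - optimum))

def next_generation(population, size=100, mutation=0.5):
    # Parents are sampled in proportion to fitness; offspring inherit the
    # parental trait with a little random variation.
    weights = [fitness(t) for t in population]
    parents = random.choices(population, weights=weights, k=size)
    return [p + random.gauss(0, mutation) for p in parents]

population = [random.gauss(0, 1) for _ in range(100)]  # start far from the optimum
for _ in range(200):
    population = next_generation(population)

print(sum(population) / len(population))  # the mean trait has drifted towards ~10

Nothing in this sketch ‘aims’ at the optimum; the drift towards it simply falls out of variation and differential reproduction, which is the sense in which the teleological metaphor rests on a plain mechanical process.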

So this cracked one side of the teleology problem: that of why individual organisms were well adapted to their environments. But what about the other side, the question of whether life itself had some overall direction, some overall sense of progress? What about the process that led to the development of humans? Darwin did believe in some kind of progress of this nature — what the Victorians called ‘monad to man’ — but he wanted nothing at all to do with Germanic, Hegelian kinds of world spirits taking life ever upwards. That smacked too much of a kind of pseudo-Christian faith, which he did not share.

Characteristically, Darwin thrashed about on the matter of whether evolution had a direction. He agonised in his notebooks, and never really came up with a definitive answer. The closest he got was suggesting that improvement comes about naturally because each generation, on average, is going to be better than the previous one. Adaptations improve, and eventually brains appear, and get bigger and bigger. Hence humans. Darwin wrote: ‘If we look at the differentiation and specialisation of the several organs of each being when adult (and this will include the advancement of the brain for intellectual purposes) as the best standard of highness of organisation, natural selection clearly leads towards highness.’ What Darwin never really considered is the fact that brains are very expensive things to maintain, and big brains are not necessarily a one-way ticket to evolutionary success. In the immortal words of the late American paleontologist Jack Sepkoski: ‘I see intelligence as just one of a variety of adaptations among tetrapods for survival. Running fast in a herd while being as dumb as shit, I think, is a very good adaptation for survival.’

Darwin might have solved the teleological problem in biology once and for all, but his solution was not an immediate success. Most people really could not get their heads around natural selection, and frankly most were not troubled by the question of whether the evolution of life had an end point. Obviously humans were it, and were bound to appear. All sorts of neo-Platonists were happy to put a Christian interpretation on Darwin’s view of life: God set evolution going in order that it might ascend to Man. They could have Jesus and evolution too! In the words of Henry Ward Beecher, the charismatic preacher, prolific adulterer, and brother of Harriet Beecher Stowe: ‘Who designed this mighty machine, created matter, gave to it its laws, and impressed upon it that tendency which has brought forth the almost infinite results on the globe, and wrought them into a perfect system? Design by wholesale is grander than design by retail.’

While Christians could interpret evolution in a Platonic frame, as the working out of a Divine creator’s purpose, some biologists revived Aristotle’s idea of vital forces impelling living things towards their ends. At the turn of the 20th century, the German embryologist Hans Driesch posited such forces, which he called ‘entelechies’ and described as ‘mind-like’. In France, the philosopher Henri Bergson proposed an ‘élan vital’, a vital spirit that created adaptations and gave evolution its upward course. In England, the biologist Julian Huxley (grandson of Darwin’s great supporter Thomas Henry Huxley and older brother of the novelist Aldous Huxley) was always drawn to vitalism, seeing in evolution a kind of substitute for Christianity that provided people with a sense of meaning and direction: what he called ‘religion without revelation’. But even he could see that, scientifically, vitalism was a non-starter. The problem was not that no one could see these forces: no one could see electrons either. Rather, it was that they provided no new explanations or predictions. They seemed to do no real work in the physical world, and mainstream biology rejected them as a hangover from an earlier age.

So what of now? Today’s scientists are pretty certain that the problem of teleology at the individual organism level has been licked. Darwin really was right. Natural selection explains the design-like nature of organisms and their characteristics, without any need to talk about final causes. On the other hand, no natural selection lies behind mountains and rivers and whole planets. They are not design-like. That is why teleological talk is inappropriate, and why the Gaia hypothesis is so criticised. And overall that is why biology is just as good a science as physics and chemistry. It is dealing with different kinds of phenomena and so different kinds of explanation are appropriate. There was a Newton of the blade of grass and his name was Charles Darwin.

But historical teleology, the question of whether evolution itself takes a direction, in particular a progressive one, is a trickier problem, and I cannot say that there is yet a satisfactory answer, or any prospect of there ever being one. One popular way to explain the apparent progress in evolution is as a biological arms race (a metaphor coined by Julian Huxley, incidentally). Through natural selection, prey animals get faster and so, in tandem, do their predators. Perhaps, as in military arms races, electronics and computers eventually become ever more important, and the winners are those who do best in this respect. The British evolutionary biologist Richard Dawkins has argued that humans have the biggest on-board computers, and that this is just what we would expect natural selection to produce. But it is not obvious that arms races would result in humans, those physically feeble and mentally able omnivorous primates; nor is it obvious that lines of prey and predator evolve in tandem more generally.
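To see what the arms-race picture amounts to, here is an equally hypothetical sketch in the same spirit as the earlier one, with invented numbers throughout: prey and predator ‘speeds’ are each selected only against the current average of the other side, and nothing else.

import random

# Hypothetical arms-race sketch (invented parameters): each population is
# selected only relative to the other's current average speed; neither has a goal.

def evolve(own, other, size=100, mutation=0.3):
    threshold = sum(other) / len(other)
    # Individuals faster than the opposing average are favoured in reproduction.
    weights = [1.0 if speed > threshold else 0.2 for speed in own]
    parents = random.choices(own, weights=weights, k=size)
    return [p + random.gauss(0, mutation) for p in parents]

prey = [random.gauss(5, 1) for _ in range(100)]
predators = [random.gauss(5, 1) for _ in range(100)]
for _ in range(100):
    prey, predators = evolve(prey, predators), evolve(predators, prey)

print(sum(prey) / len(prey), sum(predators) / len(predators))  # both averages have escalated

The escalation comes entirely from the coupling between the two populations, which is why an arms race can mimic progress; whether real lineages of prey and predator are coupled this tightly is exactly what the paragraph above doubts.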

I’ll offer no final answers here, just one final question. Could a full-blown teleology, of the more scientific Aristotelian kind, reappear, complete with vital forces? There is no logical reason to say this is impossible, and that is why I think it is legitimate for Nagel to raise the possibility. Two hundred years ago, people would have laughed at the idea of quantum mechanics, with all its violations of common-sense thinking. But there is a big difference: quantum mechanics was invented because it filled a big explanatory gap, and, weird as it is, it works. This is Nagel’s big mistake: his argument for returning to the idea of purposes and goals in biology is based not on an extensive engagement with the science but on a philosophical skim across its surface. There is no comparable gap for final causes to fill, and nothing in the idea to encourage such wishful thinking.

So what’s a Stegosaur for? As good Darwinian scientists, we can ask what adaptive function the plates on its back served. But the beast itself? It’s not for anything; it just is, in all its decorative, mysterious, plant-munching glory.

