\n"; echo $styleSheet; ?>
include("http://www.corante.com/admin/header.html"); ?>

My brother Ben is now a respectable consultant for the Oxford English Dictionary, but when he was a kid, he was a puzzle freak, pure and simple. In fourth grade he'd spend hours paging through a big unabridged Webster's, looking for obscure words that he could use to create a fiendish rebus. Little did I know that one day one of his favorite puzzles--the doublet--would become useful to me in thinking about evolution.
The challenge of a doublet is to turn one word into another. You are allowed to change one letter at a time, but each change must produce a real word. Here's a doublet that suits a post on evolution: Change APE to MAN.
Give up?
APE
APT
OPT
OAT
MAT
MAN
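(For the programming-minded: the doublet rule maps neatly onto a shortest-path search through a dictionary. Here's a minimal, purely illustrative sketch in Python; the tiny word list is hypothetical, just enough to reproduce the chain above, and a real solver would load a full word list instead.)

```python
from collections import deque
from string import ascii_uppercase

def solve_doublet(start, goal, dictionary):
    """Find the shortest chain of valid words from start to goal,
    changing one letter at a time (the doublet rule)."""
    queue = deque([[start]])   # each queue entry is a path of words
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        # Try every one-letter change that yields a real word.
        for i in range(len(word)):
            for letter in ascii_uppercase:
                candidate = word[:i] + letter + word[i + 1:]
                if candidate in dictionary and candidate not in seen:
                    seen.add(candidate)
                    queue.append(path + [candidate])
    return None  # no chain of real words connects the two

# Toy dictionary, just enough to reproduce the chain above:
words = {"APE", "APT", "OPT", "OAT", "MAT", "MAN"}
print(solve_doublet("APE", "MAN", words))
# ['APE', 'APT', 'OPT', 'OAT', 'MAT', 'MAN']
```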
Now imagine that having solved the APE-to-MAN puzzle, you tell a friend about your triumph.
Your friend scoffs. "That's ridiculous," he says. "I don't believe you've found a missing link between APE and MAN. It doesn't exist."
You furrow your brow. "Wait," you say. "No, I think maybe you didn't hear how the puzzle works--"
"I mean, what comes in between?"
"Well, there's APT, and then--."
"APT? Please! That's nothing like MAN. They don't have a single letter in common. It's just a completely separate word on its own."
"But then there's OPT--"
"OPT? Are you kidding me? That's just as irrelevant. You can't just go from APE to MAN through OPT."
"But what about MAT? That's a lot like MAN."
"Sure," your friend says, rolling his eyes. "But what on Earth does it have to do with APE?"
Is he really not getting it, you might ask yourself, or is he just pretending not to understand what I'm saying? That's how I felt when someone sent me an email to tip me off about an attack at the creationist web site Answers in Genesis. It is based on either a misunderstanding or a misrepresentation of what evolution is all about. And doublets help to explain why.
The attack concerns an interview I gave recently to an Australian radio talk show. The Aussies called me up to talk about President Bush's endorsement of discussing Intelligent Design in schools. Along the way, I explained why creationism has failed to win support in the scientific community. For one thing, creationists often base their arguments on supposed gaps in evolution, such as "missing links" in the fossil record. I mentioned how creationists used to point to the absence of intermediate fossils that would show how whales had evolved from land mammals. But once paleontologists began to find walking whales, the creationists no longer made that argument, moving on to some other gap.
I guess the creationists in Australia were listening to me that day, because now Mark Looy of Answers in Genesis is here to tell you that in fact "creationists have been devoting many a printed (and web) page—and public lectures—to assertively debate the evolutionary whale claim."
Let's set aside the fact that scientific debates take place at conferences of scientific societies or in the pages of peer-reviewed biology journals. What exactly are the creationists offering in these pages and lectures? They claim that the fossils of early whales don't support the argument that whales evolved from land mammals, but their claims are unfounded for a number of reasons.
For one thing, Looy's article (and a book by Jonathan Sarfati that he links to as evidence that creationists are still on the whale evolution case) is simply riddled with factual errors. To choose just one example, Sarfati claims that the fossil of Ambulocetus, an alligator-like whale with big feet, is "(conveniently) missing" the pelvis and other parts that are supposedly crucial to establishing the transition from land to sea. I imagine here a paleontologist gasping at the sight of a pelvis that would disprove evolution and smashing it with his rock hammer. In fact, Hans Thewissen, the paleontologist who discovered Ambulocetus in Pakistan, has gone back year after year and has now found its pelvis and almost every other bone in this creature. And the complete skeleton supports his initial conclusion that this whale used its legs to kick through the water like an otter.
But there's a more fundamental problem with Looy and Sarfati's take on whales. They look at individual fossils of whales and declare that each one tells us nothing about how whales evolved into marine mammals. The oldest whale, the goat-like Pakicetus, had fully terrestrial legs, so it tells us nothing. Much later, the fully aquatic whale Basilosaurus retained tiny legs complete with ankles, but since it was completely marine, it also tells us nothing.
What they either don't know or don't want to explain is that scientists reconstruct evolutionary history by looking not at one species, but at as many species as they can. They draw evolutionary trees by analyzing fossils or DNA, and they look at the traits that are shared by species on different branches of the tree. Pakicetus does look to have been very terrestrial, but it also had peculiar structures in its skull that are only found in whales. Over time, whale legs appear to have changed as whales adapted to the water--first becoming otter-like in the case of Ambulocetus, and then more seal-like in the case of Rodhocetus. Basilosaurus was much further along in this evolution, with much reduced legs that offered no help in swimming at all. And today, whales carry vestiges of hips.
No one species bridged the entire transition from land mammal to marine whale, just as no single word bridges the transition from APE to MAN. What's more, many of these early whale fossils--while related to living whales--did not give rise to them directly. They're more like aunts and uncles to today's living whales. Some walking whales, such as a group called remingtonocetids, branched off in weird directions of their own, in some cases evolving bizarre heron-shaped heads. A couple of years ago Thewissen summarized all the information available on fossil and living whales with this tree--a tree that continues to support the evolution of whales from terrestrial ancestors. It may not be the full solution to the doublet LAND MAMMAL to MARINE WHALE, but it's a very good start.




"March of the Penguins," the conservative film critic and radio host Michael Medved said in an interview, is "the motion picture this summer that most passionately affirms traditional norms like monogamy, sacrifice and child rearing." --from an article describing how some religious leaders and conservative magazines are embracing the blockbuster documentary.
Well, it's 2010, and what a remarkable five years it's been. The blockbuster success of March of the Penguins in 2005 triggered a flood of wonderful documentaries about animal reproduction, all of which provide us with inspiring affirmation of the correct way to live our lives. Here are just a few of the movies that can guide you on your path...
Dinner of the Redback Spiders: This documentary follows the heartwarming romance between two spiders that ends with the male somersaulting onto the venomous fangs of his mate, his reproductive organs still delivering semen into the female as she devours him.
Toxic Love of the Fruit Flies: In this movie, male fruit flies demonstrate their ingenuity and resourcefulness by injecting poisonous substances during sex that make it less likely that other males will successfully fertilize the eggs of their mates. Sure, these toxins cut the lifespan of females short, but who said life was perfect?
Harem of the Elephant Seals: Meet Dad: a male northern elephant seal who spends his days in bloody battles with rivals who would challenge his right to copulate with a band of females--but doesn't lift a finger (or a flipper) to help raise their kids.
Step-fathers of the Serengeti: Guess who's moving in? It's a male lion taking over a pride of females. Watch him affirm traditional norms by killing their cubs so that he can father offspring of his own.
Funky Love of the Bonobos: The sexual shenanigans of some of our closest living ape relatives. Male-female, female-female, and on and on it goes. Warning: Definitely not suitable for children.


(Warning: this post contains some journalistic/blogging inside-baseball material.)
Back in the dark ages (otherwise known as the 1990s), writing about science felt a bit like putting messages in a bottle. I'd write an article, a few weeks or months later it would appear in a magazine, and a few weeks or months later I might get a response from a reader. In some cases, an expert might point out an error I made. In other cases, she or he might explain the real story which I had missed. The delay could make for some disconcerting experiences. The first time I met the late Stephen Jay Gould, to interview him for a book I was working on, I was still lowering myself into a chair when he began complaining about the cover headline to a story I had written about fossil birds over a year beforehand. I stared at him blankly for a while as I reached back into my memory banks to figure out what he was talking about.
It's much better these days, now that people can hammer me with emails seconds after my stories are published. Science is a murky, complex endeavor, and my job has never stopped feeling like an apprenticeship, as I learn from mistakes.
But this new arrangement comes with a downside. Some criticisms are unjustified, and instead of simply emailing me these complaints, people sometimes decide to publish them for all to see.
John Hawks, an anthropologist at the University of Wisconsin, has done just this. He has written a long complaint about an article I wrote for the latest issue of Discover. The issue celebrates the 25th anniversary of the magazine, and it contains a series of two-page spreads that take a look at different fields in science and where they're headed. The editors asked me to contribute a piece on human evolution. I included an interview with Tim White of Berkeley, an essay on the growing role of scanning in studies on hominid fossils, and a large graphic showing how scientists used CT-scans to reconstruct the skull of Sahelanthropus, the oldest known skull of a hominid.
Hawks makes a series of complaints about the piece, but rather than sticking to the article itself, he tends to focus on the "subtext," which he alone has the mysterious power to read. For example, the subtext apparently says that "anything high tech must be better." I never made such a claim, and it would have been silly for me to add a disclaimer to that effect: "Warning--not all things high tech are better." Healthy skepticism is certainly a virtue, but Hawks is ignoring the fact that the entire issue is dedicated to promising new scientific developments. (Here's an article on research on using lasers in art conservation. I suppose Hawks would complain that the article didn't mention that lasers can also kill people.)
When Hawks does actually deal with the article itself, he makes some serious mistakes. He mocks the conclusion of my piece, in which I describe some new applications of fossil scans--such as reconstructing wounds, simulating hominids walking, and making the scans available online to other researchers who can't see the originals. "So utopian," he sneers.
As evidence, he turns to my interview with Tim White, in which White talked about the importance of other kinds of technology to the study of human evolution--such as the global positioning system and advances in dating fossils. "No CT scans there," Hawks declares.
Hawks shouldn't argue from the absence of evidence. Actually, White talked to me at length about the promise of CT scans, including some of the applications I mentioned in the article. It would have been redundant to include his comments. Hawks may not be impressed by scans, but he shouldn't count White on his side.
So why isn't Hawks impressed with scans? For one thing, scientists can make mistakes with them, producing reconstructions as biased as any handmade reconstruction. "The principle of 'garbage in, garbage out' is everlasting," he says.
True, but so what? I remember the same argument being made in the 1990s, when some biologists were starting to reconstruct the tree of life by using computers to analyze DNA sequences and morphological features (a method known as cladistics), rather than relying on a more intuitive sense of what evolved from what. Critics warned that the cladists were just dumping bad data into their computers, and so their conclusions couldn't be trusted. In fact, the cladists were producing testable hypotheses with explicit assumptions that anyone could challenge. Of course there are cases in which this approach may face problems (in comparing populations of the same species, for example, or species that can swap genes). But that hasn't stopped cladistic trees from becoming the standard for the field. The garbage-in-garbage-out complaint is equally beside the point when it comes to predicting the importance of scans to the study of human evolution.
Descending again into my subtext, Hawks writes that "a read of the article gives the impression that every finding from this new advanced technology supports splitting hominids into several species." If I may indulge in a little subtext-divining myself, I think we're getting somewhere now. Hawks is a long-time proponent of the idea that too many hominid fossils have been designated as separate species. It just so happens that a couple of recently published scans--one of Neanderthal children and one of the "Hobbit" brain--have been interpreted by the authors of these studies as supporting the idea that these fossils do not belong to humans, but to other species. But instead of directing his wrath at these scientists, Hawks directs it at me. In order to do so, however, he has to ignore the fact that I write about many other applications of scans that don't support splitting hominids into several species.
Hawks is perfectly entitled to attack hominid-splitting (and on his blog he has done a great job of documenting new research that supports his attack). But I don't appreciate him distorting my own writing to serve that agenda. It's particularly unfair to do so when most people haven't had a chance to read the article for themselves, and have to rely instead on Hawks's misleading summary.
Update, Wed. 1pm: Another improvement on the dark ages: when I attack, the attackee can respond. John Hawks defends his post in the comments. I agree that CT scans of hominid fossils are not now being freely shared on the net. But I think there's reason to be optimistic--see, for example, the Digimorph Project, which is building up a big database of scans of bones from living and fossil animals. Would it have been utopian to predict Digimorph a decade ago?


I'm back from a computer-free vacation, and of course I have returned to mountains of emails and a long chain of fascinating new links. In place of any original thoughts of my own, let me just point you to a few things that look interesting (if you have any mental space not presently occupied by the horrors of Katrina).
1. Over the past couple years I've enjoyed watching Chris Mooney's blogging and articles evolve into a full-blown book, The Republican War on Science, which has just come out. Tonight he hits the big time on the Daily Show.
2. Mooney is actually just part of the opening night of a week-long evolution series on the Daily Show. A couple years ago my wife and I decided to give up cable because we feared we'd use up what little free time we had watching has-been celebrity biographies or movies about evil mechanical sharks. (I should point out that my wife is strangely immune to the lure of the mechanical sharks.) It's times like these, though, when I wish we still had just a little cable.
3. Human brains are evolving. Questionable Authority recaps. Will Bruce Lahn get the Nobel Prize someday?
4. Parasites are manipulating. Latest case: grasshoppers hurling themselves to their death on behalf of hairworms. Not a public health threat like malaria's sweet perfume, but very high on the science-fiction-meter.
5. Life on Titan? Astronomer David Grinspoon thinks all the raw ingredients are there.


Clint, the chimpanzee in this picture, died several months ago at the relatively young age of 24. But part of him lives on. Scientists chose him--or rather, his DNA--as the subject of their first attempt to sequence a complete chimpanzee genome. In the new issue of Nature, they've unveiled their first complete draft, and already Clint's legacy has offered some awesome insights into our own evolution.
The editors of Nature have dedicated a sprawling space in the journal to this scientific milestone. The main paper is 18 pages long, not to mention the supplementary information kept on Nature's web site. In addition, the journal has published three other papers that take a closer look at particularly interesting (and thorny) aspects of the chimpanzee genome, such as what it says about the different fates of the Y chromosome (the male sex chromosome) in chimpanzees and humans. Other scientists offer a series of commentaries on topics ranging from brain evolution to chimpanzee culture. The journal Science has also gotten in on the action, with a paper comparing the expression of chimp and human genes as well as comments on the importance of chimpanzee conservation and research. (Thankfully, some of this material is going to be made available online for free.)
Why all the attention to the chimpanzee genome? One important reason is that it can tell us what parts of the human genome make us uniquely human--in other words, which parts were produced by natural selection and other evolutionary processes over the past six million years or so, since our hominid ancestors diverged from the ancestors of our closest living relatives, chimpanzees. (Bonobos, sometimes known as pygmy chimpanzees, are also our first cousins, having split off from chimpanzees 2-5 million years ago.) Until now, scientists could only compare the human genome to the genomes of more distantly related species, such as mice, chickens, and fruit flies. They learned a lot from those comparisons, but it was impossible for them to say whether the differences between humans and the other species were unique to humans, or unique to apes, or to primates, or to some broader group. Now they can pin down the evolutionary sequence much more precisely. Until scientists rebuild the Neanderthal genome--if they ever do--this is going to be the best point of comparison we will ever get. (For more of the background on all this, please check out my new book on human evolution, which will be out in November.)
The analysis that's being published today is pretty rudimentary. It's akin to what you'd expect from a reporter who got to spend an hour flipping through 10,000 pages of declassified government documents. But it's still fascinating, and I'd wager that it serves as a flight plan for research on the evolution of the human genome for the next decade.
First off, scientists can get a more precise figure of how different human and chimpanzee DNA is. In places where you can line up stretches of DNA precisely, there are 35 million spots where a single "letter" of the code (a nucleotide) is different. That comes to about 1.2% of all the DNA. The scientists also found millions of other spots in the genomes where a stretch of DNA had been accidentally deleted, or copied and inserted elsewhere. This accounts for about a 3% difference. Finally, the scientists found many genes that had been duplicated after the split between humans and chimps, corresponding to 2.7% of the genome.
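(To make that 1.2% figure concrete, here's a toy sketch of how single-nucleotide differences are counted between two aligned stretches of DNA. The sequences below are invented purely for illustration; real comparisons run over millions of aligned bases and handle gaps and rearrangements separately.)

```python
def single_nucleotide_divergence(seq1, seq2):
    """Fraction of aligned positions where the two sequences differ.
    Assumes the sequences are already aligned and the same length."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    diffs = sum(1 for a, b in zip(seq1, seq2) if a != b)
    return diffs / len(seq1)

# Two made-up aligned stretches, 80 bases long, differing at one site:
human = "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT" * 2
chimp = human[:40] + "G" + human[41:]
print(f"{single_nucleotide_divergence(human, chimp):.3%}")  # 1.250%
```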
By comparing the two genomes, scientists have also gotten a better picture of the history of the genomic parasites that we carry with us. About half of the human genome consists of stretches of DNA that do not produce proteins useful to our well-being. All they do is make copies of themselves and reinsert those copies at other spots in the genome. Other animals have these virus-like pieces of DNA, including chimpanzees. Some of the genomic parasites we carry are also carried by chimpanzees, which means that we inherited them from our common ancestor. Many of these parasites have suffered mutations that make them unable to copy themselves any longer. But in some cases, these parasites have been replicating (and evolving) much faster in one lineage than the other. One kind of parasite, known as a SINE, has spread three times faster in humans than in chimps. Some 7,000 new copies of the parasite known as Alu have been inserted into the human genome since the split, compared to 2,300 in the chimp genome. While a lot of these parasites have no important effect on our genome, others do. They've helped delete 612 genes in humans, and they've combined pieces of some 200 other genes, producing new ones.
In some cases, the interesting evolution has occurred in the chimpanzee lineage, not in our own ancestry. Scientists have noted for a long time that the Y chromosome has been shrinking for hundreds of millions of years. Its decline has to do with how it is copied each generation. Out of the 23 pairs of our chromosomes, 22 have the same structure, and as a result they swap some genes as they are put into sperm or egg cells. Y chromosomes do not, because their counterpart, the X, is almost completely incompatible. My Y chromosome is thus a nearly perfect clone of my father's. Mutations can spread faster when genes are cloned than when they get mixed together during recombination. As a result, many pieces of the Y chromosome have disappeared over time, and many Y genes that once worked no longer do.
Scientists have discovered that Clint and his fellow chimpanzee males have taken a bigger hit on the Y than humans have. In the human lineage, males with mutations to genes on the Y chromosome have tended to produce fewer offspring than those without them, so those genes have remained intact. (This is a process known as purifying selection, because it strips out variations.) But the scientists found several broken versions of these same genes on the chimpanzee Y chromosome.
Why are chimpanzees suffering more genetic damage? The authors of the study suggest that it has to do with their sex life. A chimpanzee female may mate with several males when she is in oestrus, and so mutations that give one male's sperm an edge over other males are strongly favored by selection. If there are harmful mutations elsewhere on that male's Y chromosome, they may hitchhike along. We humans are not so promiscuous, and the evidence is in our Y chromosome.
As for the mutations that make us uniquely human, the researchers point out some suspects but make no arrests. The researchers found that a vast number of the differences between the genomes are inconsequential. In other words, these mutations didn't have any appreciable effect on the structure of proteins or on the general workings of the human cell. But the scientists did identify a number of regions of the genome, and even some individual genes, where natural selection seems to have had a major impact on our own lineage. A number of these candidates support earlier studies on smaller parts of the genome that I've blogged about here. Some of these genes appear to have helped in our own sexual arms race; others created defenses against malaria and other diseases.
When scientists first lobbied for the money (some twenty to thirty million dollars) for the chimp genome project, they argued that the effort would yield a lot of insight into human diseases. The early signs seem to be bearing them out. In their report on the draft sequence, they show some important genetic differences between humans and chimpanzees that might bear on questions such as why we get Alzheimer's disease and chimps don't, and why chimpanzees are more vulnerable to sleeping sickness than we are.
There is also a lot of variation within our own species when it comes to disease-related genes, and here too the chimpanzee genome project can shed light. The researchers show how some versions of these genes found in humans are the ancestral form also shared by chimpanzees. New mutations have arisen in humans and spread in the recent past, possibly favored by natural selection. The ancestral form of one gene called PRSS1, for example, causes pancreatitis, while the newer form does not.
But our genetic defenses and weaknesses to diseases aren't really what we'd like to think make us truly, uniquely human. The most profound difference between the bodies of humans and chimpanzees is the brain. Much of the evolution that's been going on in genes expressed in the brain has been purifying. There are a lot of ways to screw up a brain, in other words. But some genes appear to have undergone strong positive selection--in other words, new versions of these genes have been favored over older ones. It's possible that relatively few genes played essential roles in producing the human brain.
You can feel the excitement of discovery thrumming through these papers, but it comes with a certain sadness as well. It doesn't come just from the fact that the chimpanzee whose DNA made this all possible died before he became famous. Lots of chimpanzees are dying--so many, in fact, that conservationists worry that they may become extinct from hunting, disease, and habitat destruction. And once a species is gone, it takes a vast amount of information about evolutionary history with it.
I was reminded of this fact when I read another chimpanzee paper that appears in the same issue of Nature, reporting on the first fossil of a chimpanzee ever discovered. It may be hard to believe that no one had found a chimp fossil before. A big part of the problem, scientists thought, was that chimpanzees were restricted to rain forests and other places where fossils don't have good odds of surviving. The fossils that have now been discovered don't amount to much--just a few teeth--and they raise far more questions than they answer. They date back about 500,000 years, to open woodlands in Kenya where paleoanthropologists have also found fossils of tall, big-brained hominids that may have been the direct ancestors of Homo sapiens. So apparently chimpanzees once coexisted with hominids in the open woodlands that were once thought to be off-limits to them. More chimpanzee fossils will help address this puzzle, but they may never fully resolve it.
The chimpanzees of Kenya became extinct long ago, and now other populations teeter on the brink. To make sense of Clint's genome, scientists need to document the variations both within and between chimpanzee populations--not just genetic variations, but variations in how they eat, how they organize their societies, how they use tools, and all the other aspects of their lives. If they don't get that chance, the chimpanzee genome may become yet another puzzling fossil.



Our genes are arrayed along 23 pairs of chromosomes. On rare occasion, a mutation can change their order. If we picture the genes on a chromosome as
ABCDEFGHIJKLMNOPQRSTUVWXYZ
a mutation might flip a segment of the chromosome, so that it now reads
ABCDEFGHISRQPONMLKJTUVWXYZ
or it might move one segment somewhere else like this:
ABCDLMNOPQRSTUEFGHIJKVWXYZ
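(Here's a small illustrative sketch of those two kinds of mutation acting on the alphabet "chromosome" above. The code is hypothetical, written just for this post, not taken from any genome-analysis package.)

```python
def invert(chrom, start, end):
    """Flip the segment chrom[start:end], as an inversion mutation would."""
    return chrom[:start] + chrom[start:end][::-1] + chrom[end:]

def transpose(chrom, start, end, dest):
    """Cut out chrom[start:end] and reinsert it at position dest of what's left."""
    segment = chrom[start:end]
    remainder = chrom[:start] + chrom[end:]
    return remainder[:dest] + segment + remainder[dest:]

chromosome = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
print(invert(chromosome, 9, 19))         # ABCDEFGHISRQPONMLKJTUVWXYZ
print(transpose(chromosome, 11, 21, 4))  # ABCDLMNOPQRSTUEFGHIJKVWXYZ
```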
In some cases, these changes can spread into the genome of an entire species, and be passed down to its descendant species. By comparing the genomes of other mammals to our own, scientists have discovered how the order of our genes has been shuffled over the past 100 million years. In tomorrow's New York Times I have an article on some of the latest research on this puzzle, focusing mainly on two recent papers you can read here and here.
One of the most interesting features of our chromosomes, which I mention briefly in the article, is that we're one pair short. In other words, we humans have 23 pairs of chromosomes, while other apes have 24. Creationists bring this discrepancy up a lot. They claim that it represents a fatal blow to evolution. Here's one account, from Apologetics Press:
If the blueprint of DNA locked inside the chromosomes codes for only 46 chromosomes, then how can evolution account for the loss of two entire chromosomes? The task of DNA is to continually reproduce itself. If we infer that this change in chromosome number occurred through evolution, then we are asserting that the DNA locked in the original number of chromosomes did not do its job correctly or efficiently. Considering that each chromosome carries a number of genes, losing chromosomes does not make sense physiologically, and probably would prove deadly for new species. No respectable biologist would suggest that by removing one (or more) chromosomes, a new species likely would be produced. To remove even one chromosome would potentially remove the DNA codes for millions of vital body factors. Eldon Gardner summed it up as follows: “Chromosome number is probably more constant, however, than any other single morphological characteristic that is available for species identification” (1968, p. 211). To put it another way, humans always have had 46 chromosomes, whereas chimps always have had 48.
There's a lot that's wrong here, and it can be summed up with one number: 1968.
Why would someone quote from a 37-year-old genetics textbook in an article about the science of chromosomes? It's not as if scientists have been just sitting around their labs since then with their feet up on the benches. They've been working pretty hard, and they've learned a lot. And what they've learned doesn't agree with what Apologetics Press wants to claim.
The first big discovery came in 1982, when scientists looked at the patterns of bands on human and ape chromosomes. Chromosomes have a distinctive structure in their middle, called a centromere, and their tips are called telomeres. The scientists reported that the banding pattern surrounding the centromere on human chromosome 2 bore a striking resemblance to the telomeres at the ends of two separate chromosomes in chimpanzees and gorillas. They proposed that in the hominid lineage, the ancestral forms of those two chromosomes had fused together to produce one chromosome. The chromosomes weren't lost, just combined.
Other researchers followed up on this hypothesis with experiments of their own. In 1991, a team of scientists managed to sequence the genetic material in a small portion of the centromere region of chromosome 2. They found distinctive stretches of DNA that are common in telomeres, supporting the fusion hypothesis. Since then, scientists have been able to study the chromosome in far more detail, and everything they've found supports the idea that the chromosomes fused. In this 2002 paper, for example, scientists at the Fred Hutchinson Cancer Research Center reported discovering duplicates of DNA from around the fusion site in other chromosomes. Millions of years before chromosome 2 was born, portions of the ancestral chromosomes were accidentally duplicated and then relocated to other places in the genome of our ancestors. And this past April, scientists published the entire sequence of chromosome 2 and were able to pinpoint the vestiges of the centromeres of the ancestral chromosomes--which are similar, as predicted, to the centromeres of the corresponding chromosomes in chimpanzees.
Today geneticists sometimes encounter people with fused chromosomes, which are often associated with serious disorders like Down syndrome. But that doesn't mean that every fusion is harmful. Many perfectly healthy populations of house mice, for example, can be distinguished from other house mice by fused chromosomes. The fusion of chromosome 2 millions of years ago may not have caused any big change in hominid biology--except, perhaps, by making it difficult for populations of hominids with 23 pairs of chromosomes to mate with populations who still had 24. As a result, it may have helped produce a new species of hominid that would give rise to our own.
Just goes to show what 37 years of scientific research can turn up.
Update: Tuesday, 3:30: Thanks to Dr. Paul Havlak for pointing out that some people with fused chromosomes suffer no ill effects. This site at the University of Utah has more information.


Sometimes a picture can tell you a lot about evolution. This particular picture has a story to tell about how two species--in this case a fly and an orchid--can influence each other's evolution. But the story it tells may not be the one you think.
Coevolution, as this process is now called, was one of Darwin's most important insights. Today scientists document coevolution in all sorts of species, from mushroom-farming ants to the microbes in our own gut. But Darwin found inspiration from the insects and flowers he could observe around his own farm in England.
Darwin's thoughts about coevolution began with a simple question: how do flowers have sex? A typical flower grows both male and female sexual organs, but Darwin doubted that a single flower would fertilize itself very often. Flowers, like other organisms, display a lot of variation, and Darwin thought that the only way flowers could vary was if individuals mated, mixing their characters. (Sex turns out not to be essential for creating variation, but it does do a good job of it.) But plants face an obvious problem: they can't walk around to find a mate. Somehow the pollen of one flower has to get to another. Not just to any flower, moreover, but to a member of its own species.
The random wind might suffice for some plants. But Darwin also knew that bees visited many flowers to gather their nectar. He began to study what happened on those visits. He would watch bees land on scarlet kidney bean plants, for example, and climb up a petal to get to its nectar. The flower's pollen-bearing organs, Darwin found, were located in precisely the right spot to brush pollen onto the back of feeding bees. When the bees traveled to another scarlet kidney bean plant, they unloaded the pollen. The bees depended on the flowers for food, and the flowers depended on the bees for sex. Without each other, they could not survive.
In the Origin of Species, Darwin offered some thoughts on how this sort of partnership between bees and clover could have evolved. Imagine that the flowers are pollinated by other insects, but the insects go extinct in some region. Now all their nectar goes uneaten. Honeybees might visit the flowers sometimes, and variations that allowed them to reach the nectar more easily--a longer tongue-like proboscis, for example--might be favored by natural selection.
Meanwhile, the flowers would be experiencing intense natural selection of their own. Without their old pollinators, their chances of producing offspring plummeted. Any variation that would make it easier for honeybees to pollinate them would bring a huge increase in reproductive success. Gradually, the flowers' anatomy would come to match that of the honeybees, just as the honeybees were adapting to the flower.
"Thus I can understand," Darwin wrote, "how a flower and a bee might slowly become, either simultaneously or one after the other, modified and adapted in the most perfect manner to each other, by continued preservation of individuals presenting mutual and slightly favoruable deviations of structure."
Around the time that Darwin published the Origin of Species, he developed a fondness for orchids. He was not alone; at the time a rising orchid fever was seizing England's upper class. Aristocrats would dispatch explorers to the Amazon or to Madagascar, where they would strip entire hillsides of the rare plants. Some prized specimens sold for hundreds of pounds at auctions in London and Liverpool. If, as many people then believed, the only meaning of natural beauty was as a gift from God, orchids were the most exquisite gifts of all. They could have only one purpose: to please the eye of man.
Darwin had other ideas.
In orchids, he discovered the same evolutionary pressures at work as in other flowers, but the results were supremely baroque and bizarre. Despite the prices orchids might fetch at auction, their beauty did not exist for beauty's sake. It was, Darwin showed, an elaborate means for luring insects into their sex lives. He documented case after case of these adaptations. One species, for example, had its pollen loaded in a crossbow-like structure that bees triggered by walking across a petal.
Darwin described this and many other adaptations in The Various Contrivances by Which British and Foreign Orchids are Fertilized by Insects, and on the Good Effects of Intercrossing. Darwin guided the reader from orchid to orchid, showing how each flower's design was not simply beauty for beauty's sake, but some of nature's most elaborate forms of sex. He showed how orchids were simply highly evolved flowers. All the various parts of ordinary flowers had simply been stretched and twisted and otherwise transmogrified into new structures such as crossbows.
Darwin was so confident that orchids were adapted to their pollinators that he made a bold prediction in his book. He pointed out how many orchids produce their nectar at the bottom of long tubes called nectaries. The insects that feed on them are equipped with tongues that are almost the same length. Short-tongued insects visit flowers with shallow nectaries, and long-tongued insects visit deep nectaries. In every case, the insect has to press its head against the flower to reach the bottom of the tube. The orchid's pollen is invariably positioned in a place where it can stick to the insect's head while it drinks.
Darwin saw the evolution of these tubes and tongues as the result of a race between flower and insect. If an insect could drink nectar without pressing its head against the orchid, the orchid couldn't pass on its pollen. Natural selection would thus favor orchids with longer tubes. At the same time, an insect with a tongue that was too short for the tube wouldn't be able to drink all the nectar.
In some cases, this race between orchid and insect might drive each partner to absurd extremes. Darwin once received an orchid from Madagascar, called Angraecum sesquipedale, with a whip-shaped nectary over eleven inches long, with a drop of nectar tucked away at its very base. Only an animal with a suitably long tongue could drink it. Darwin predicted that somewhere in Madagascar there must live just such an insect.
The orchid's pollen, he declared, "would not be withdrawn until some huge moth, with a wonderfully long proboscis, tried to drain the last drop."
When Darwin died in 1882, the Madagascar orchid was still without a partner. But in 1903 entomologists discovered an extraordinary Madagascar hawkmoth. Normally its proboscis remained curled up like a watch spring. But when it approached orchids, it pumped fluid into the proboscis to straighten it out like a party balloon, and then inserted it into the flower, as carefully as a tailor threads a needle's eye.
Scientists have found many other orchids and other flowers with an equally intimate relationship with their pollinators. Steven Johnson, a South African biologist, has documented lots of them in his part of the world, as he described in an excellent article this spring in Natural History.
Now, in the August issue of the American Journal of Botany, Johnson and his colleagues have published a paper about a new orchid, shown in this picture. Disa nivea is a rare orchid found only in a few places in South Africa, and until Johnson came to study it, no one knew how it was pollinated. After a lot of patient orchid-watching, he and his colleagues discovered that it is visited exclusively by the fly shown in the picture. Its proboscis is well-matched to the length of the orchid, and the orchid grows its pollen packets in just the right place so that they get stuck to the fly. You can see them in this picture--the two dangling yellow packets on the fly's snout.
There's just one catch: when the fly manages to get its proboscis all the way down to the bottom of the orchid's nectary, it finds no nectar.
To explain this deceit, Johnson and his colleagues observe that the orchids are always found intermingled with a similar-looking plant related to foxgloves. These plants are also pollinated by the same fly, but unlike the orchid, they reward visiting flies with nectar. Johnson and his colleagues argue that the orchid has evolved to mimic the rewarding flower, luring the flies with the same cues but deceiving them in the end.
To test this hypothesis, the scientists looked at five populations of the rewarding flower, measuring their dimensions. They found that from one population to another, the orchids mimic their local models. In some places, the rewarding flower is twice as long as in other places; the same goes for the orchid. Where the rewarding flowers are wide, so are the orchids; where they are narrow, the orchids are as well. These patterns are evidence that the evolution of this deceit is not a thing in the past, but an ongoing process.
Darwin would not have believed that such a deceitful plant could exist. Botanists had reported nectarless orchids as early as 1798, but Darwin thought they had to be wrong. Insects were too smart to be fooled for long. They would learn how to recognize a deceitful plant and avoid it, and the deceivers would become extinct. That turns out to be quite wrong. Over 8,000 species of orchids are believed to practice deceit. Most, like Disa nivea, mimic a food-supplying plant in their shape and odor. Others lure flies with growths that look and smell like feces. Others produce sex pheromones to lure male insects and sometimes even produce shapes that look and feel like female insects--so much so that the males try to mate with them. (More on wasp-on-orchid kinkiness here.) Orchids can in fact outfox insects, but only by continually reshaping their deceptions. Scientists suspect that the main benefit of deceit is that insects tend to fly far away after getting fooled. As a result, they tend to fertilize more distant orchids, which gives the flowers a healthy supply of genetic variation.
It's fascinating to compare the story of Disa nivea to Angraecum sesquipedale. In one case, Darwin was right, and in the other he was wrong--at least in the details. His rough ideas about coevolution have developed over nearly 150 years into a huge body of knowledge about how partners shape one another over time. It just turns out that sometimes coevolution can push life in directions he couldn't imagine.
(Note: I adapted parts of the historical material in this post from my book Evolution.)
Update, Sunday 2 pm: For some reason the comments aren't going through for this post. We'll try to fix the bug today.
Update, Monday 11 am: Okay, comments are working again.


Well, Dr. Chopra has given us part two of his ruminations on evolution with a post that will make physicists cringe as much as biologists.
My favorite line: "Consciousness may exist in photons, which seem to be the carrier of all information in the universe."
Excuse me while I chat with my flashlight.


From an article on how John McCain may be positioning himself for a presidential run in The Arizona Star:
McCain told the Star that, like Bush, he believes "all points of view" should be available to students studying the origins of mankind.
"Available" is a wonderfully vague word.
Senator, Senator, a follow-up question please? Just a clarification? Do you mean that teachers just drop some pamphlets by the door that explain how we were designed by aliens? Or should that be on the final exam?


Scientists have been making some remarkable discoveries about viruses recently that may change the way we think about life. One place to start understanding what it all means is by looking at this picture.
You can't help but see a bright triangle with its three corners sitting on top of the black circles. But the triangle exists only in your mind. The illusion is known as a Kanizsa triangle, and psychologists have argued that it plays on your brain's short-cuts for recognizing objects. Your brain does not bother to interpret every point of light that hits your retina in order to tell what you're looking at. Instead, it pulls out some simple features quickly and makes a hypothesis about what sorts of objects they belong to. It's fast and pretty reliable, allowing you to make quick decisions. For getting us through our ordinary lives, it's good enough. But as a guide to objective reality, it is far from perfect. What's really weird about the Kanizsa triangle is that even when you accept that it doesn't exist (cover up the circles and watch it disappear) you still can't stop yourself from seeing it. You just have to accept that your brain's short-cuts are fooling you.
Scientists have documented lots of illusions that may expose many other mental short-cuts. And it's possible that one of them may interfere with the way we think about life. For most of the history of Western thought, natural philosophers tried to divide up living things into species and other groups on the belief that each group shared an underlying nature--an essence. Birds all have feathers, setting them off from other animals. People always give birth to people, rather than rabbits or trout. But recent psychological research suggests that essentialism is not something we come to after years of careful thought. We are essentialists from childhood. (For a nice summary of this research, see this recent article by University of Michigan psychologist Susan Gelman.) Children seem to put things into categories and come to believe that there are deep, non-obvious differences between the categories, even if they don't know what those differences are. The essence of these things is stable, children believe, and intrinsic--particularly when those things are species.
Why do we have this essence-perceiving faculty in our brains? One possibility--an adaptationist explanation--is that it helps us to predict how things will act, and allows us to come up with a reliable response. If you meet a lion, you don't need to sit down and get to know that individual lion to figure out how it will act. A lion is a lion, and you run. Of course, that particular lion might be blind or tame or a guy in a lion suit. But you're probably better off just letting the essence of lions be your guide.
Essences can act as a rough guide to organizing the world. A bird guide distinguishes different species by their unique colors and shapes. But our essentialist brains can also get us into trouble. In the 1700s naturalists could not draw clear lines between species of plants that could clearly hybridize. The discovery of the platypus in the early 1800s--an animal that nursed its young like mammals but laid eggs unlike any other mammal--posed an enormous headache. When Darwin and other scientists began arguing that humans shared a common ancestry with chimpanzees and gorillas, anatomists such as Richard Owen desperately tried to find traits in the human brain that would firmly set us apart--signs, as it were, of our unique essence. Owen failed, and today's research on the human genome helps to show what a futile effort he was making. Humans are different, just like each species is, but they are also linked to other species by common descent. They have no more of a special essence than the branches on a tree.
Which brings us to viruses. Viruses have traditionally been considered fundamentally different than "true" organisms, such as bacteria, animals, and plants. That's because all viruses that scientists studied were just simple bags of genes, made up of tiny bits of genetic material encased in protein shells. They were not truly alive, because their few genes could only be copied and turned into proteins with the help of a cell's biochemical machinery. Outside a cell, they were inert, lifeless packages drifting through the world, waiting to bump into a new host.
Last year this essence of viruses began to blur. Scientists discovered a gigantic virus capable of making 150 proteins, including enzymes for repairing DNA and for translating a gene's code into protein. Its entire genome is 1.2 million base pairs long--about twice as long as the smallest genomes of parasitic bacteria. These viruses are not rare flukes. Just a few days ago, scientists reported on how they plumbed a database of DNA gathered by Craig Venter from the Sargasso Sea and found signs that there are a lot of these giant viruses floating out in the oceans.
Today, viruses from another part of the world blurred their essence even more. Scientists reported in Nature the discovery of strange viruses from hot springs in Italy. The viruses reproduce inside microbes, and when they burst out of their host, they do not remain inert. Instead, they continue developing, growing tails made out of filament-shaped proteins that are encoded by their own genes. It's not clear from the report whether the viruses can make the proteins themselves, or if their hosts make them and then squirt them out into the surrounding water. But whichever the case, the scientists conclude that viruses "may be even more biologically sophisticated than previously recognized."
The discoverers of the "living" virus compared some of its genes to those of other organisms and argued that it has an ancient history, descending from organisms that lived four billion years ago, before the major branches of life had emerged. Some critics have argued that these viruses actually stole the genes from their hosts and incorporated them into their own genome, but the original team has rebutted them in a paper submitted to Virus Research. It is still possible that these viruses stole some of their genes from their hosts, because the evidence of viral gene theft is now overwhelming. On the other hand, viruses seem to have sometimes donated their genes to their hosts. Some researchers have even argued that many of the key components of our own cells--from DNA-copying enzymes to DNA itself--began as viruses.
So try to ignore that urge to see viruses as a separate kind from us, just as you try to ignore the triangle that isn't there. Despite what we may think, life is a wonderful blur.





The red blob in this picture is a human red blood cell, and the green blob in the middle of it is a pack of the malaria-causing parasite Plasmodium falciparum. Other species of the single-celled Plasmodium can give you malaria, but if you're looking for a real knock-down punch, P. falciparum is the parasite for you. It alone is responsible for almost all of the million-plus deaths due to malaria.
How did this scourge come to plague us? In a paper to be published this week in the Proceedings of the National Academy of Sciences, scientists have reconstructed a series of molecular events from three million years ago that allowed Plasmodium falciparum to make us its host. They argue that a change in the receptors on the cells of hominids was the key. Ironically, this same change of receptors may have also allowed our ancestors to evolve big brains. Malaria may simply be the price we pay for our gray matter.
To uncover this ancient history, the researchers compared the malaria humans get to the malaria of our closest relatives, chimpanzees. In 1917, scientists discovered Plasmodium parasites in chimpanzees that looked identical to human Plasmodium falciparum. But when some ethically challenged doctors tried to infect people with the chimpanzee parasites, the subjects didn't get sick. Likewise, chimpanzees have never been known to get sick with Plasmodium falciparum from humans. In the end, scientists recognized that chimpanzees carry a separate species of Plasmodium, known today as Plasmodium reichenowi. Studies on DNA show that Plasmodium reichenowi is the closest living relative to Plasmodium falciparum--just as chimpanzees are the closest living relatives of humans.
The authors of the new study set out to find the difference between these parasitic cousins. They focused on how each species of Plasmodium gets into red blood cells. Every Plasmodium species uses special molecular hooks on its surface to latch onto receptors on the cell, and then noses its way through the membrane to get inside. The parasite has a number of hooks, each of which is adapted to latch onto particular kinds of receptors. Among the most important receptors that Plasmodium needs to latch onto are sugars known as sialic acids, which are found on all mammal cells.
These sugars play a crucial but mysterious role in human evolution. As I've written here (and here), almost all mammals carry a form of the sugar called Neu5Ac on their cells, as well as a modified version of it, known as Neu5Gc. In most mammals, this modified form, Neu5Gc, is very common. In humans, it's nowhere to be found. That's because the enzyme that converts the precursor Neu5Ac into Neu5Gc doesn't work. We still carry the gene for the enzyme, but it became mutated about three million years ago and stopped working.
Since chimpanzees make Neu5Gc and we don't, the researchers hypothesized that the two Plasmodium species must use different strategies to latch onto red blood cells. To test their hypothesis, they genetically engineered cells to produce the molecular hooks used by human Plasmodium falciparum, and other cells to produce the chimp parasite hooks. The researchers then mixed the engineered cells with red blood cells from humans and chimpanzees to see how well they attached. In another set of experiments, they made human blood cells more chimpanzee-like by adding Neu5Gc sugars to them, to see if the change helped the chimpanzee parasites attack them, or if it impaired the attacks of human parasites.
Their results show that humans are uniquely vulnerable to Plasmodium falciparum because our ancestors lost the Neu5Gc sugar. Plasmodium falciparum prefers to bind to Neu5Ac, the sugar we still carry. At the same time, Neu5Gc--the sugar we lost--somehow blocks Plasmodium falciparum's hooks from latching onto Neu5Ac. That's why chimpanzees don't get sick with Plasmodium falciparum, despite carrying both kinds of sugars. On the other hand, we don't get sick with chimpanzee malaria, because Plasmodium reichenowi prefers attaching to Neu5Gc, the sugar we lost.
The scientists argue that some seven million years ago the common ancestor of chimpanzees and humans carried both kinds of sugars on its cells. This ancient ape would sometimes get sick with malaria, caused by the common ancestor of today's P. reichenowi and P. falciparum. This ancient parasite preferred to latch onto Neu5Gc to get into its host's blood cells.
Hominids then branched off from other apes, walking upright and moving out of the jungle into open woodlands. They still got sick with the old malaria, because they still produced both kinds of sugars. But then, about three million years ago, our ancestors lost the ability to make Neu5Gc. Initially this was a great relief, because the malaria parasites had a much harder time gaining entry into our cells.
But this relief did not last, the scientists argue. Sometimes mutant parasites emerged that did a better job of latching onto the one sugar hominids still made, Neu5Ac. They now could get into hominid red blood cells, while other Plasmodium parasites were still making do with the other apes. Over time these parasites evolved a better ability to infect hominids. But at the same time, they surrendered the ability to infect other apes, such as chimpanzees. Thus Plasmodium falciparum was born.
This new research is yet another example of how studying evolution yields new insights into medicine. (I've blogged before about similar examples with tuberculosis and HIV.) And it may also reveal something about the downside of our unique intelligence. Our ancestors lost Neu5Gc around the time that the hominid brain began to get significantly bigger than a chimp's.
In other animals, Neu5Gc is abundant on the cells of most organs, but exceedingly rare in the brain. It is very peculiar for a gene to be silenced in the brain alone, which suggests that Neu5Gc might have some sort of harmful effect there. Once a mutation knocked out the gene altogether, hominids didn't have to suffer with any Neu5Gc in the brain at all.
Perhaps Neu5Gc limited brain expansion in other mammals, but once it was gone from our ancestors, our brains exploded. Along with a big brain, however, came our very own form of malaria.


New branches on the tree of life have just turned up in Africa. Some are cuter than others.
In Madagascar, our primate family was enlarged by two adorable species of mouse lemurs. Meanwhile, other scientists made an uglier discovery in the small country of Djibouti, in the Horn of Africa. They found a surprising diversity of bacteria that cause tuberculosis. When most people think about the joys of biodiversity, they probably don't think about the hidden expanses of parasites waiting to be discovered. But in cases such as this one, they can have a fascinating story to tell--one that may prove to be important to the welfare of our own species.
Tuberculosis is, like malaria and HIV, an infectious disease so vast in its success that it's hard to fathom. Every second someone somewhere in the world gets infected with the bacteria Mycobacterium tuberculosis, and each year TB kills about 1.75 million people. Many scientists have wondered how long these bacteria have been attacking the lungs of our ancestors. Hippocrates described cases that appear to be tuberculosis, and ancient mummies show signs of the disease. For earlier chapters in the evolution of TB, scientists have begun to turn to the bacteria's DNA.
The first studies pointed to a relatively recent origin of the disease. The bacteria that scientists sampled turned out to have nearly identical DNA. If a long time had passed since the common ancestor of living strains of TB, scientists would have expected to find more mutations setting the strains off from one another. Instead, they estimated that a single successful ancestor gave rise to all current strains about 20,000 to 35,000 years ago.
But French researchers have found that people in Djibouti carry strains of TB that are significantly different from anything seen before. They have many more genetic differences than have been found in human TB strains from anywhere else in the world. Yet they are more closely related to other human TB strains than to the Mycobacterium species that infect cattle and other animals. The scientists then turned the mutations of the Djibouti strains into a molecular clock. They estimate that the ancestor of today's human TB existed some three million years ago. The results have just been published in the new open access journal PLoS Pathogens.
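For readers curious about how a molecular clock turns a tally of mutations into a date, here's a minimal sketch of the arithmetic in Python. Everything in it is hypothetical--the function name, the substitution rate, and the counts are placeholders chosen for illustration, not numbers from the PLoS Pathogens paper.

```python
# A minimal sketch of molecular-clock arithmetic (all numbers hypothetical,
# not taken from the paper discussed above).
def divergence_time(differences, sites, rate_per_site_per_year):
    """Estimate the time since two lineages shared a common ancestor.

    differences: number of nucleotide differences between the two strains
    sites: number of sites compared
    rate_per_site_per_year: assumed substitution rate
    """
    distance = differences / sites          # fraction of compared sites that differ
    # Both lineages have been accumulating mutations since the split,
    # so the elapsed time is the distance divided by twice the rate.
    return distance / (2 * rate_per_site_per_year)

# Example with made-up numbers: 300 differences across 3 million compared sites,
# at an assumed rate of 1.5e-8 substitutions per site per year.
print(divergence_time(300, 3_000_000, 1.5e-8))   # roughly 3.3 million years
```

The key assumption is a roughly constant substitution rate; if that rate is badly off, so is the date, which is why estimates like these always come with wide error bars.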
If tuberculosis was infecting our ancestors three million years ago, it was infecting early, small-brained hominids. All of the hominids known from that time lived in Africa, and hominids would not be found outside the continent for over a million years. Our own species is believed to have evolved much later in Africa, and to have spread to Asia and Europe roughly 50,000 years ago. So it's telling that all these ancient strains are found in Africa, not far from some of the richest lodes of hominid fossils in Ethiopia. The genetic diversity of these bacteria reflects the genetic diversity of living Africans.
Some diseases are new to our species, and some are old enemies. HIV probably made the jump from chimpanzee to human in just the past century. Like other emerging diseases, its evolution is a reflection of our times. It probably is the result of roads being pushed through African rain forests for logging, allowing hunters to kill chimpanzees and sell the meat to a growing, increasingly mobile society. Other diseases appear to have gotten their start thanks to earlier opportunities. Yersinia pestis, the cause of bubonic plague, rapidly emerged a couple thousand years ago, probably taking advantage of flea-infested rats that were thriving in cramped communities. Malaria appears to have emerged a few thousand years before that, when early African farmers spent their days clearing forests and creating lots of standing water in which mosquitoes could breed, only to go to bed nearby and become easy targets for the insects.
The new study suggests that tuberculosis came long before them. But it apparently has not been with us forever--or even for five or ten million years. For some reason it appeared three million years ago, and it's intriguing to think about why. The new paper doesn't hazard a guess, but I'm reminded of a similar study I came across while researching my book Parasite Rex. It has to do with tapeworms.
Today tapeworms have a life cycle that takes them between pigs or cows and humans, in whose intestines they can grow up to 60 feet long. In the 1940s, researchers proposed that the three tapeworm species that infect humans descend from ancestors which pioneered our guts when cattle and pigs were first domesticated some 10,000 years ago. But a close look at their DNA showed otherwise. Scientists found that the closest relatives of human tapeworms did not make relatives of cows or pigs their intermediate hosts. Instead, they lived inside East African herbivores such as antelopes, and made the lions and hyenas that kill them their final hosts. The researchers then looked at the amount of variation between the DNA from different species of tapeworms. According to the agricultural hypothesis, that variation should have pointed to a common ancestor 10,000 years ago. But the scientists concluded that this common ancestor could have lived as long as a million years ago.
The scientists proposed that tapeworms began adapting to our hominid ancestors when those ancestors began putting more meat in their diet. By scavenging or hunting on the East African savannas, our ancestors became an attractive new habitat for the tapeworms, and new species evolved that were specialized only to live inside us. Only hundreds of thousands of years later did they make cows and pigs their intermediate hosts.
Given TB's similar antiquity, I wonder if it may have made a similar leap. Many close relatives of Mycobacterium tuberculosis live in bovids--cows and their relatives--which hominids might have encountered as they began to scavenge meat. Could a sick wildebeest have been our patient zero?
Still, the question remains: why is so much TB diversity hiding out in Djibouti, while one branch seems to have exploded about 30,000 years ago and spread around the world, such that today it makes up the vast majority of TB cases? The paper's authors hazard that this lineage spread out of Africa with the migration of humans to other parts of the world. That makes sense up to a point. The bacteria that cause ulcers, Helicobacter pylori, spread this way--so faithfully, in fact, that they act as a marker for human migrations to different parts of the world. But the TB lineage that emerged 30,000 years ago was able to spread much more aggressively than the other strains, which apparently are still restricted to the region where they've been for millions of years. It's hard to understand what sort of social or ecological change could have created the conditions that would favor such a superior bug.
Nevertheless, it may be possible to pinpoint how this new lineage evolved into such a killer by comparing it to the older strains. If scientists can identify its special weapon, they might be able to figure out how to attack it with a drug. Here, then, is one potential benefit of exploring the diversity of parasites: you can learn how to fight the really nasty ones.


This article in the New York Times is a pretty useful overview of the political and financial support behind the Discovery Institute, the main anti-evolution think tank. It describes how the Institute has spent $3.6 million to support fellowships that include scientific research in areas such as "laboratory or field research in biology, paleontology or biophysics."
So what has that investment yielded, scientifically speaking? I'm not talking about the number of appearances on cable TV news or on the op-ed page, but about scientific achievement. I'm talking about how many papers have appeared in peer-reviewed biology journals, their quality, and their usefulness to other scientists. Peer review isn't perfect--some bad papers get through, and some good papers may get rejected--but every major idea in modern biology has met the challenge.
It's pretty easy to get a sense of this by perusing two of the biggest publicly available databases, PubMed (from the National Library of Medicine) and Science Direct (from the publishing giant Reed Elsevier). They don't cover the entire scientific literature, but between them, you can search thousands of journals covering everything from geochronology to genetic engineering. Look for the topics that have won people Nobel Prizes--the structure of DNA, the genes that govern animal development, and the like--and you quickly come up with hundreds or thousands of papers.
A search for "Intelligent Design" on PubMed yields 22 results--none of which were published by anyone from the Discovery Insittute. There are a few articles about the political controversy about teaching it in public schools, and some papers about constructing databases of proteins in a smart way. But nothing that actually uses intelligent design to reveal something new about nature. ScienceDirect offers the same picture. (I'm not clever enough with html to link to my search result lists, but try them yourself if you wish.)
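If you'd rather not click through the search forms, the same kind of tally can be scripted against NCBI's public E-utilities service. This is only a rough sketch under that assumption--the query strings are mine, and today's hit counts will of course differ from the 2005 figures I quote above.

```python
# A rough sketch of repeating a PubMed hit count programmatically, via NCBI's
# public E-utilities. The queries are illustrative, not an official benchmark.
import json
import urllib.parse
import urllib.request

def pubmed_hit_count(query):
    """Return the number of PubMed records matching a query string."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": 0,          # we only want the count, not the records themselves
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as response:
        result = json.loads(response.read().decode())
    return int(result["esearchresult"]["count"])

for query in ['"intelligent design"', '"gene duplication" AND evolution']:
    print(query, pubmed_hit_count(query))
```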
Here's another search: "Discovery Institute" and "Seattle" (where the institute is located). One result comes up: a paper by Jonathan Wells proposing that animal cells have turbine-like structures inside them. It describes no experiments, only a hypothesis.
Perhaps the other prominent fellows of the Discovery Institute (Michael Behe, Stephen Meyer, and William Dembski) have published scientific papers that have a bearing on intelligent design, without identifying their affiliation. Aside from a couple letters to the editor, the databases yielded only one paper, in which Behe offers a simple model of gene duplication and expresses doubt that new genes could evolve by this process. Given that other scientists have published 2266 papers exploring gene duplication's role in evolution, it's safe to say that his is not a view held by most experts.
PubMed has a very nice feature that lets you get a rough gauge of how influential a paper has been. If you select "Cited in PMC" from the display option list, you get a list of papers in PubMed Central that have cited the paper you're looking at. The 2001 paper revealing the rough draft of the human genome has already been cited 777 times in the past four years.
Try it on the Behe and Wells papers. Total citations? Zero.
Here's one more way to put these results in perspective: compare the two papers I turned up to the work of a single evolutionary biologist. From the thousands I could choose from, I'll pick Douglas Emlen, a young biologist at the University of Montana. He studies horns on beetles as an example of how embryonic development changes during evolution (a fascinating topic I blogged on a couple months back). I visited his publication web site and counted the papers that dealt directly with evolution (leaving out the book chapters and the papers on straight physiology and such). The total so far comes to 23. Over ten times the output I found from the entire Discovery Institute staff.
Someone's not getting their money's worth.
Update: Quallitative directs my attention to the Discovery Institute's list of peer-reviewed literature. The first item on the rather short list is a paper that has been retracted by the journal that published it, which stated that "contrary to typical editorial practices, the paper was published without review by an associate editor." The statement also added that "there is no credible scientific evidence supporting ID [Intelligent Design] as a testable hypothesis to explain the origin of organic diversity." I don't see much more that I could add.
Update, 8/23 11 pm: Steven Smith reports on his own search on another scientific database, BIOSIS. An independent test of my hypothesis, in true scientific spirit--and with the same results.


In today's New York Times I have an article about the quest to create a virtual organism—a sort of digital Frankenstein accurate down to every molecular detail. The creature that the scientists I write about want to reproduce is that familiar denizen of our gut, Escherichia coli.
There are two things about this enterprise I find particularly delicious. One is that this little microbe is just too complex for today's computers to handle. For now scientists are just laying the groundwork for a day that might come in 10 or 20 years when they have enough processing power to handle E. coli. Another delicious fact is that despite fifty years of intense research, scientists don't know what a lot of E. coli's genes are for. All told, this black box swallows up about a quarter of its genome.
The creationist frenzy of the past couple weeks gives these two facts special meaning. Creationists like to point out that life is very complex. They like to point out that despite years of work, scientists have yet to figure out the complete series of events by which much of that complexity evolved. This state of affairs does not represent unfinished business, according to the creationists, but an outright failure. And that failure is proof that life could not have evolved. Therefore, the argument goes, life must have been directly designed by some powerful being.
To see why this argument impresses so few scientists, consider E. coli. Scientists are confident that they can explain how this microbe works with a purely mechanistic account—in other words, with the interactions of atoms, molecules, modules made of genes and proteins, and the like. It's worked reasonably well so far, allowing them to create good hypotheses about how E. coli strings together proteins, builds cell walls, and so on.
But despite decades of intense research, much of E. coli remains unexplained. In their obsession with mechanistic explanations, scientists have failed to find a complete account for how E. coli works. If you buy the argument for design, you must conclude that microscopic supernatural beings dwell inside E. coli, operating it like a microbial submarine.
Of course, nobody who actually does research on E. coli says this. They're too busy trying to figure out how E. coli works. If you want to find examples of their work, go to scientific journals, or visit Thierry Emonet's site. If, on the other hand, you want to find people claiming that the yet-to-be-discovered is evidence of supernatural intervention, you'll have to look elsewhere. Op-ed pages are always a good place to start.


Mole rats are a pretty ugly, obscure bunch of creatures. They live underground in Africa, where they use their giant teeth to gnaw at roots. Those of you who know anything about mole rats most likely know about naked mole rats, which have evolved a remarkable society that is more insect than mammalian, complete with a queen mole rat ruling over her colony. But according to a paper in press at the Journal of Human Evolution, mole rats are important for another reason. Their evolution and our own show some striking parallels that may shed light on how our ancestors diverged from other apes.
The authors of the paper, Greg Laden of the University of Minnesota and Harvard's Richard Wrangham, believe that the rise of hominids was marked by a shift in food. Reviewing the evidence from fossils and living apes, they argue that the common ancestor of humans and our three closest relatives (chimpanzees, bonobos, and gorillas) dwelled in a rain forest. If this ancient ape was anything like living chimps and gorillas, it depended mainly on fruits. When it couldn't find fruits, it turned to other so-called "fallback foods" such as soft leaves and pith.
Judging from the fossils of plants and animals found alongside early hominid bones, it seems that hominids shifted from dense rain forests to woodlands, and much later to open, arid savannas. It would have been harder to survive on the diet of a gorilla or a chimpanzee in such places. Laden and Wrangham point out that in Gabon, gorillas that live in rainforests don't venture into the surrounding savannas, despite the fact that the savannas get a lot of rain. The problem is that outside of rainforests, there just aren't enough of their fallback foods to sustain them.
So how did hominids survive? Laden and Wrangham argue that they began to rely on a new fallback food: roots, tubers, and other "underground storage units." (To me this term sounds too much as if it came from a subterranean Ikea catalog, so I'll just use the word tubers.) The idea was first proposed in 1980 by other scientists who observed that one important difference between hominids and other apes is their teeth. Chimpanzees and gorillas have shearing edges on their teeth that help them slice up leaves. Hominids had teeth that resembled those of pigs and bears, which can chew tough, fiber-rich food. Pigs dig up tubers with their snouts, bears with their claws. Fossil discoveries suggest that hominids might have used sticks or horns. But they all chewed the tubers in much the same way.
In the new paper (posted by Laden here), Laden and Wrangham explore this idea in much more detail. They point to evidence that tubers are more diverse in savannas than in rain forests, and grow at densities that can be hundreds of times higher. This makes intuitive sense when you consider that tubers are probably adaptations to dry, unpredictable climates where plants need to store away energy underground. In the stable dampness of a rain forest, there isn't much use for a tuber. Laden and Wrangham also point out that human foragers who live where lots of tubers grow take advantage of them. They prefer other food, like ripe fruits, but in tough times they dig up their meals.
Laden and Wrangham then turn from the present to the past. If their hypothesis is right, hominids must have lived in places where they might have eaten tubers. That's a tricky prediction to test directly for most sites where hominid fossils have been found, because scientists haven't found enough plant fossils associated with them.
Enter the mole rats.
Mole rats love tubers, and where you find mole rats, you generally find a lot of tubers for them to gnaw on. What's more, mole rats and humans have a taste for many of the same species that produce underground storage units. Mole rats have left a long fossil record in Africa since they first appeared some 20 million years ago--not coincidentally when tuber-rich habitats may have begun to spread through Africa.
Laden and Wrangham predicted that hominids and mole rats should tend to have left fossils in the same habitats. They looked at fossil sites from six million years ago to half a million years ago in eastern and southern Africa, where hominids lived. They then picked out sites where either hominids or mole rats had been found, or both. Of the 21 sites that had mole rats, 17 also had hominids. Less than a fifth of the sites without mole rats had hominid fossils. The pattern suggests that mole rats and hominids both evolved to take advantage of the rich supply of tubers in African savannas. They came at the tubers from below, we from above.
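To get a feel for how strong that association is, here's a back-of-the-envelope calculation in Python. The 21-site and 17-site figures are the ones described above; the tally of sites without mole rats is a made-up placeholder (the paper only says fewer than a fifth of them had hominids), so treat the p-value as illustrative rather than as the authors' own statistic.

```python
# A rough contingency-table look at the fossil-site association.
# The "without mole rats" numbers are hypothetical placeholders.
from scipy.stats import fisher_exact

sites_with_molerats = 21
hominids_with_molerats = 17
sites_without_molerats = 30          # hypothetical total
hominids_without_molerats = 5        # hypothetical, just under a fifth

table = [
    [hominids_with_molerats, sites_with_molerats - hominids_with_molerats],
    [hominids_without_molerats, sites_without_molerats - hominids_without_molerats],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```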
Dribs and drabs of this hypothesis have trickled out over the past six years. In a 1999 paper in the journal Current Anthropology, Laden and Wrangham and their colleagues suggested that tubers were important to hominids and then became really important about 1.9 million years ago. At that time, hominids began emerging who were much taller and bigger-brained than their ancestors, and who also had smaller teeth. Laden and Wrangham argued that hominids at this time must have discovered fire, which would have allowed them to cook down tubers, liberating much of the nutrition in them. In this 2002 article Natalie Angier offers a nice summary of their thinking at the time—along with the skeptical reaction it drew from some experts. One big problem is that the oldest good evidence for fire is only a few hundred thousand years old, not almost two million.
The new paper doesn't address the skepticism about this later part of their scenario. Instead, it looks back at the first four million years of our life with tubers. Laden and Wrangham propose testing their hypothesis by looking at the trace elements and isotopes in tubers to see if the patterns are reflected in the composition of hominid fossils. I also wonder about how they got hold of the tubers. Were the earliest hominids able to fashion digging sticks, or were they merely using their hands, the way savanna baboons do today? How exactly, I wonder, did we get to be the upright mole rats?
(Update: 8/15 10 am: Thanks to Hoopman for pointing out some new findings that may show evidence of fire 1.5 million years ago. Here's a BBC article with some details. As far as I can tell, though, the results have only been presented at a conference. They haven't been published in a journal.)


It's bad enough to see basic scientific misinformation about evolution getting tossed around these days. USA Today apparently has no qualms about publishing an op-ed by a state senator from Utah (who wants to have students be taught about something called "divine design") claiming there is no empirical evidence in the fossil record that humans evolved from apes. I'm not sure what we're supposed to do with the twenty or so species of hominids that existed over the past six million years. Perhaps just file them away under "divine false starts."
But history takes a hit as well as science. Creationists try whenever they can to claim that Darwin was directly responsible for Hitler. The reality is that Hitler and some other like-minded thinkers in the early twentieth century had a warped view of evolution that bore little resemblance to what Darwin wrote, and even less to what biologists today understand about evolution. The fact that someone claims that a scientific theory justifies a political ideology does not support or weaken the scientific theory. It's irrelevant. Nazis also embraced Newton's theory of gravity, which they used to rain V-2 rockets on England. Does that mean Newton was a Nazi, or that his theory is therefore wrong?
Creationists are by no means the only people who are getting history wrong these days. Yesterday in Slate, Jacob Weisberg wrote an essay in which he claimed that evolution and religion are incompatible. He claims to find support for his argument in Darwin's own life.
That evolution erodes religious belief seems almost too obvious to require argument. It destroyed the faith of Darwin himself, who moved from Christianity to agnosticism as a result of his discoveries and was immediately recognized as a huge threat by his reverent contemporaries.
I get the feeling that Weisberg has yet to read either of the two excellent modern biographies of Darwin, one by Janet Browne and the other by Adrian Desmond and James Moore. I hope he does soon. Darwin's life as he actually lived it does not boil down to the sort of shorthands that people like Weisberg toss around.
Darwin wrestled with his spirituality for most of his adult life. When he boarded the Beagle at age 22 and began his voyage around the world, he was a devout Anglican and a parson in the making. As he studied the slow work of geology in South America, he began to doubt the literal truth of the Old Testament. And as he matured as a scientist on the journey, he grew skeptical of miracles. Nevertheless, Darwin still attended the weekly services held on the Beagle. On shore he sought churches whenever he could find them. While in South Africa, Darwin and FitzRoy wrote a letter together in which they praised the role of Christian missions in the Pacific. When Darwin returned to England, he was no longer a parson in the making, but he certainly was no atheist.
In the notebooks Darwin began keeping on his return, he explored every implication of evolution by natural selection, no matter how heretical. If eyes and wings could evolve without help from a designer, then why couldn't behavior? And wasn't religion just another type of behavior? All societies had some type of religion, and their similarities were often striking. Perhaps religion had evolved in our ancestors. As a definition of religion, Darwin jotted down, "Belief allied to instinct."
Yet these were little more than thought experiments, a few speculations that distracted Darwin every now and then from his main work: discovering how evolution could produce the natural world. Darwin did experience an intense spiritual crisis during those years, but science was not the cause.
At age 39, Darwin watched his father Robert slowly die over the course of months. His father had confided his private doubts about religion to Darwin, and Darwin wondered what those doubts would mean to Robert in the afterlife. At the time Darwin happened to be reading Coleridge's The Friend and Aids to Reflection, on the nature of Christianity. Nonbelievers, Coleridge declared, should be left to suffer the wrath of God.
Robert Darwin died in November, 1848. Throughout Charles's life, his father had shown him unfailing love, financial support, and practical advice. And now was Darwin supposed to believe that his father was going to be cast into eternal suffering in hell? If that were so, then many other nonbelievers, including Darwin's brother Erasmus and many of his best friends, would follow him as well. If that was the essence of Christianity, Darwin wondered why anyone would want such a cruel doctrine to be true.
Shortly after his father's death, Darwin's health turned for the worse. He vomited frequently and his bowels filled with gas. He turned to hydropathy, a Victorian medical fashion in which a patient is given cold showers, steam baths, and wrappings in wet sheets. He would be scrubbed until he looked "very like a lobster," he wrote to his wife Emma. His health improved, and his spirits rose even more when Emma discovered that she was pregnant again. In November 1850 she gave birth to their eighth child, Leonard. But within a few months death would return to Down House.
In 1849 three of the Darwin girls, Henrietta, Elizabeth, and Anne, suffered bouts of scarlet fever. While Henrietta and Elizabeth recovered, nine-year-old Anne remained weak. She was Darwin's favorite, always throwing her arms around his neck and kissing him. Through 1850 Anne's health still did not rebound. She would vomit sometimes, making Darwin worry that "she inherits I fear with grief, my wretched digestion." The heredity that Darwin saw shaping all of nature was now claiming his own daughter.
In the spring of 1851 Anne came down with the flu, and Darwin decided to take her to Malvern, the town where he had gotten his own water-cure. He left her there with the family nurse and his doctor. But soon after, she developed a fever and Darwin rushed back to Malvern alone. Emma could not come because she was pregnant again and just a few weeks away from giving birth to a ninth child.
When Darwin arrived in Anne's room in Malvern, he collapsed on a couch. The sight of his ill daughter was awful enough, but the camphor and ammonia in the air reminded him of his nightmarish medical school days in Edinburgh, when he watched children operated on without anesthesia. For a week--Easter week, no less--he watched her fail, vomiting green fluids. He wrote agonizing letters to Emma. "Sometimes Dr. G. exclaims she will get through the struggle; then, I see, he doubts.--Oh my own it is very bitter indeed."
Anne died on April 23, 1851. "God bless her," Charles wrote to Emma. "We must be more & more to each other my dear wife."
When Darwin's father had died, he had felt a numb absence. Now, when he came back to Down House, he mourned in a different way: with a bitter, rageful, Job-like grief. "We have lost the joy of our household, and the solace of our old age," he wrote. He called Anne a "little angel," but the words gave him no comfort. He could no longer believe that Anne's soul was in heaven, that her soul had survived beyond her unjustifiable death.
It was then, 13 years after Darwin discovered natural selection, that he gave up Christianity. Many years later, when he put together an autobiographical essay for his grandchildren, he wrote, "I think that generally (and more and more as I grow older), but not always, that an agnostic would be the most correct description of my state of mind."
Darwin did not trumpet his agnosticism. Only by poring over his private autobiography and his letters have scholars been able to piece together the nature of his faith after Anne's death. Darwin wrote a letter of endorsement, for example, to an American magazine called the Index, which championed what it called "Free Religion," a humanistic spirituality in which the magazine claimed "lies the only hope of the spiritual perfection of the individual and the spiritual unity of the race."
Yet when the Index asked Darwin to write a paper for them, he declined. "I do not feel that I have thought deeply enough [about religion] to justify any publicity," he wrote to them. He knew that he was no longer a traditional Christian, but he had not sorted out his spiritual views. In an 1860 letter to Asa Gray—a Harvard botanist, the leading promoter of Darwin in America, and an evangelical Christian--he wrote, "I am inclined to look at everything as resulting from designed laws, with the details, whether good or bad, left to the working out of what we may call chance. Not that this notion at all satisfies me. I feel most deeply that the whole subject is too profound for human intellect. A dog might as well speculate on the mind of Newton."
In private Darwin complained about social Darwinism, which was being used to justify laissez-faire capitalism. In a letter to the geologist Charles Lyell, he wrote sarcastically, "I have received in a Manchester newspaper rather a good squib, showing that I have proved 'might is right' and therefore that Napoleon is right, and every cheating tradesman is also right." But Darwin decided not to write his own spiritual manifesto. He was too private a man for that.
Despite his silence, Darwin was often pestered in his later years for his thoughts on religion. "Half the fools throughout Europe write to ask me the stupidest questions," he groused. The inquiring letters not only tracked him down to Down House but reached deep into his most private anguish. To strangers, his responses were much briefer than the one he had sent to Gray. To one correspondent, he simply said that when he had written the Origin of Species, his own beliefs were as strong as a prelate's. To another, he wrote that a person could undoubtedly be "an ardent theist and an evolutionist," and pointed to Asa Gray as an example.
Yet to the end of his life, Darwin never published anything about religion. Other scientists might declare that evolution and Christianity were perfectly in harmony, and others such as Thomas Huxley might taunt bishops with agnosticism. But Darwin would not be drawn out. What he actually believed or didn't, he said, was of "no consequence to any one but myself."
Darwin and his wife Emma rarely spoke about his faith after Anne's death, but he came to rely on her more with every passing year, both to nurse him through his illnesses and to keep his spirits up. At age 71, a few weeks before his death, he looked over the letter she had written to him just after they married. At the time she was beginning to become worried about his faith and urged him to remember what Jesus had done for him. On the bottom he wrote, "When I am dead, know that many times, I have kissed & cryed over this."
It is a disservice to Darwin, and to history, to turn his tortured, complex life into a talking point in a culture war.
(Much of this post is adapted from the last chapter of my book, Evolution.)


I'll close the week with an open letter to President Bush just released by the American Astronomical Society's president, Prof. Robert Kirshner, to express disappointment with his comments on bringing intelligent design into the classroom. Astronomers may not deal with natural selection or fossils, but as a general principle, they don't like seeing non-science and science getting confused.
Washington, DC. The American Astronomical Society is releasing the text of a letter concerning "intelligent design" and education that was sent earlier today to President George W. Bush by the President of the Society, Dr. Robert P. Kirshner.
August 5, 2005
The President
The White House
1600 Pennsylvania Ave, NW
Washington, DC 20500
Dear Mr. President,
As President of the American Astronomical Society, I was very disappointed by the comments attributed to you in an article in the August 2nd, 2005 Washington Post regarding intelligent design. While we agree that “part of education is to expose people to different schools of thought”, intelligent design has neither scientific evidence to support it nor an educational basis for teaching it as science. Your science adviser, John H Marburger III correctly commented that “intelligent design is not a scientific concept.”
Scientific theories are coherent, are based on careful experiments and observations of nature that are repeatedly tested and verified. They aren’t just opinions or guesses. Gravity, relativity, plate tectonics and evolution are all theories that explain the physical universe in which we live. What makes scientific theories so powerful is that they account for the facts we know and make new predictions that we can test. The most exciting thing for a scientist is to find new evidence that shows old ideas are wrong. That’s how science progresses. It is the opposite of a dogma that can’t be shown wrong. “Intelligent design” is not so bold as to make predictions or subject itself to a test. There’s no way to find out if it is right or wrong. It isn’t part of science.
We agree with you that “scientific critiques of any theory should be a normal part of the science curriculum,” but intelligent design has no place in science classes because it is not a “scientific critique.” It is a philosophical statement that some things about the physical world are beyond scientific understanding. Most scientists are quite optimistic that our understanding will grow, and things that seem mysterious today will still be wonderful when they are within our understanding tomorrow. Scientists see gaps in our present knowledge as opportunities for research, not as a cause to give up searching for an answer by invoking the intervention of a God-like intelligent designer.
The schools of our nation have a tough job—and there is no part of their task that is more important than science education. It doesn’t help to mix in religious ideas like “intelligent design” with the job of understanding what the world is and how it works. It’s hard enough to keep straight how Newton’s Laws work in the Solar System or to understand the mechanisms of human heredity without adding in this confusing and non-scientific agenda. It would be a lot more helpful if you would advocate good science teaching and the importance of scientific understanding for a strong and thriving America. “Intelligent design” isn’t even part of science – it is a religious idea that doesn’t have a place in the science curriculum.
Sincerely,
Robert P. Kirshner
President, American Astronomical Society
Harvard College Professor and Clowes Professor of Science at Harvard University


A statement from the National Science Teachers Association on Bush's remarks about Intelligent Design:
NSTA Disappointed About Intelligent Design Comments Made by President Bush
2005-08-03 - NSTA
The National Science Teachers Association (NSTA), the world's largest organization of science educators, is stunned and disappointed that President Bush is endorsing the teaching of intelligent design - effectively opening the door for nonscientific ideas to be taught in the nation's K-12 science classrooms.
"We stand with the nation's leading scientific organizations and scientists, including Dr. John Marburger, the president's top science advisor, in stating that intelligent design is not science. Intelligent design has no place in the science classroom," said Gerry Wheeler, NSTA Executive Director.
Monday, Knight Ridder news service reported that the President favors the teaching of intelligent design "so people can understand what the debate is about."
"It is simply not fair to present pseudoscience to students in the science classroom," said NSTA President Mike Padilla. "Nonscientific viewpoints have little value in increasing students' knowledge of the natural world."
NSTA strongly supports the premise that evolution is a major unifying concept in science and should be included in the K-12 education frameworks and curricula. This position is consistent with that of the National Academies, the American Association for the Advancement of Science, and many other scientific and educational organizations.


The American Geophysical Union just issued a press release in response to Bush's comments about intelligent design. It's not online at their web site yet, so I've posted it here. (Update: It's on line now.) This is not the first time that the 43,000 members of the AGU have spoken out against creationism. They protested the sale of a creationist account of the Grand Canyon in National Park Service stores, and condemned the airing of a creationist movie about cosmology at the Smithsonian Institution. But this is the first time they've taken on the President.
American Geophysical Union 2 August 2005 AGU Release No. 05-28 For Immediate Release
AGU: President Confuses Science and Belief, Puts Schoolchildren at Risk
WASHINGTON - "President Bush, in advocating that the concept of ?intelligent design' be taught alongside the theory of evolution, puts America's schoolchildren at risk," says Fred Spilhaus, Executive Director of the American Geophysical Union. "Americans will need basic understanding of science in order to participate effectively in the 21st century world. It is essential that students on every level learn what science is and how scientific knowledge progresses."
In comments to journalists on August 1, the President said that "both sides ought to be properly taught." "If he meant that intelligent design should be given equal standing with the theory of evolution in the nation's science classrooms, then he is undermining efforts to increase the understanding of science," Spilhaus said in a statement. "'Intelligent design' is not a scientific theory." Advocates of intelligent design believe that life on Earth is too complex to have evolved on its own and must therefore be the work of a designer. That is an untestable belief and, therefore, cannot qualify as a scientific theory.
"Scientific theories, like evolution, relativity and plate tectonics, are based on hypotheses that have survived extensive testing and repeated verification," Spilhaus says. "The President has unfortunately confused the difference between science and belief. It is essential that students understand that a scientific theory is not a belief, hunch, or untested hypothesis."
"Ideas that are based on faith, including ?intelligent design,' operate in a different sphere and should not be confused with science. Outside the sphere of their laboratories and science classrooms, scientists and students alike may believe what they choose about the origins of life, but inside that sphere, they are bound by the scientific method," Spilhaus said.
AGU is a scientific society, comprising 43,000 Earth and space scientists. It publishes a dozen peer reviewed journal series and holds meetings at which current research is presented to the scientific community and the public.


After a day-long road trip from Ohio, I finally had the chance to read the news that President Bush thinks that schools should discuss Intelligent Design alongside evolution, so that students can "understand what the debate is about."
As Bush himself said, this is pretty much the same attitude he had towards creationism when he was a governor. His statements back in Texas didn't actually lead to any changes in Texas schools, and I doubt that these new remarks will have much direct effect, either. But, like Chris Mooney, I'm a journalist, and like him I would have loved to have been in the crowd of reporters when Bush made these remarks.
Mooney would have asked Bush how he squares his comments with those of his own science advisor, John Marburger, who dismisses Intelligent Design out of hand. I would follow up on his question by expanding it to a much bigger scale.
Mr. President, I would ask, how do you reconcile your statement that Intelligent Design should be taught alongside evolution with the fact that your administration, like both Republican and Democratic administrations before it, has supported research in evolution by our country's leading scientists, while failing to support a single study that is explicitly based on Intelligent Design? The National Institutes of Health, the National Science Foundation, and even the Department of Energy have all decided that evolution is a cornerstone to advances in our understanding of diseases, the environment, and even biotechnology. They have found no such value in Intelligent Design. Are they wrong? Can you tell us why?
For plenty of other comments, you can follow the links at Pharyngula.
Update 8/2 7:45 pm: I might also ask the President to respond to 43,000 scientists who think he's putting schoolchildren at risk.
Update 8/3 5:30 pm: Or 55,000 science teachers who are shocked and disappointed by his remarks.
Update 8/6 9:30 am: Or the nation's astronomers, who think his remarks are bad for all science.





I've been fascinated by this picture since I first saw it over the weekend. It's a hint of how we may be visualizing life in years to come.
As Darwin was trying to figure out how new species could evolve from old species, he began to think of evolution as a tree. He scribbled some simple branches in a notebook, and then published a more elaborate one in The Origin of Species. Darwin didn't actually put any animals or plants on the branches of these trees; he was just thinking about the process itself. Today, though, evolutionary trees are a common sight in scientific journals, whether scientists are reconstructing the origin of a new strain of HIV or are trying to figure out how animals evolved from single-celled ancestors.
But scientists have also realized that drawing trees is harder than it once seemed. Evolution, at its heart, is about changes to DNA. For some organisms, like ourselves, DNA changes almost entirely as the result of mutations when parents bequeath their genes to their offspring. But it is possible for genes from one organism to hop to another. This happens most often in microbes. A bacterium may jam a needle into another bacterium and inject some genes. In other cases, viruses may pick up the genes of one host and bring them to a new host. Once a gene makes this jump, it may then get carried down through normal heredity to the receiver's descendants, spreading into new species that evolve from it.
Scientists are trying to figure out how important this kind of species hopping has been over the history of life. In a sense, scientists are asking: what is the shape of the tree of life? Is it for the most part an ordinary tree as Darwin pictured it, with a few vines representing jumping genes? Or are the vines so dense that they obscure parts of the tree altogether? This debate is not a huge issue when it comes to the evolution of animals (although viruses have shaped our genome). Most of the evidence for these vines comes from bacteria and other microbes, which are very promiscuous with their genes. Most of the diversity of life is microbial, and microbes were the only game in town for the first couple billion years of the history of life. So the stakes of this debate are big.
This picture is a splendid representation of this debate. Scientists at the European Bioinformatics Institute created it by comparing 184 microbes. The scientists first identified genes that the microbes all inherited from a common ancestor that they then passed down in conventional parent-to-offspring fashion. By comparing their different sequences, the scientists were able to draw a conventional tree of the sort Darwin had in mind. Next, they scanned the genomes of these microbes for jumping genes. They drew the jumpers as vines from one branch to the next. They then produced this three-dimensional picture.
As you can see, the branches rise from a common ancestor, but they are enmeshed in vines. What's particularly fascinating about it is the way in which the vines connect the branches. It is not a random mesh. Instead, a few species are like hubs, with spokes radiating out to the other species. This is the same pattern that turns up in many networks in life, from the genes that interact in a cell to the nodes of the Internet. These hubs can bring a vast number of nodes into close contact. It's why you can play Six Degrees of Kevin Bacon. In the microbial world, this network allows genes to move quickly through the tree of life, whether those genes provide resistance to antibiotics or allow microbes to cope with some other change in the environment. The Kevin Bacons of the microbial world, at least in the current study, seem to be species that live in habitats where they may come in intimate contact with other species, such as in plant roots. They then act as gene banks from which other species can make withdrawals.
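A toy version of such a network makes the hub idea concrete. The sketch below is not a reconstruction of the 184-genome analysis--the species names and gene-sharing edges are invented--but it shows how simply counting each species' gene-swapping partners picks out the Kevin Bacons.

```python
# A toy illustration of a hub-dominated gene-transfer network.
# Species names and edges are invented for illustration only.
import networkx as nx

transfers = nx.Graph()
# Each edge is a "vine": a gene shared by horizontal transfer between two species.
transfers.add_edges_from([
    ("soil_microbe_A", "soil_microbe_B"),
    ("soil_microbe_A", "root_symbiont"),
    ("root_symbiont", "soil_microbe_B"),
    ("root_symbiont", "gut_microbe_C"),
    ("root_symbiont", "marine_microbe_D"),
    ("root_symbiont", "hot_spring_microbe_E"),
    ("gut_microbe_C", "gut_microbe_F"),
])

# Rank species by how many partners they exchange genes with; in a hub-dominated
# network a few species have far more connections than the rest.
for species, degree in sorted(transfers.degree, key=lambda pair: -pair[1]):
    print(species, degree)
```

In this made-up example the "root_symbiont" node comes out on top, which is the same qualitative pattern the study reports for species living in gene-swapping hotspots such as plant roots.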
Of course, 184 species of microbes represent a vanishingly small sample of the diversity of life on Earth. It remains to be seen if the Kevin-Bacon structure survives as more branches and vines get added to this picture. But this is an important step forward in how we envision life. Perhaps in the future, this tangled tree will take its place alongside Darwin's notebook scribbles.


Science Magazine is celebrating its 125th anniversary with 125 big questions that scientists will face in the next 25 years. You can read them all for free here. For the 25 biggest questions, the editors commissioned short essays. I addressed the minor matter of how and where life began.
Fortunately, I get to ask the question. I don't have to provide the final answer. A science writer's prerogative.


This week a few more tantalizing clues about the origin of language popped up.
I blogged here and here about a fierce debate over the evolution of language. No other species communicates quite the way humans do, with a system of sounds, words, and grammar that allows us to convey an infinite number of ideas. While particular languages are the products of different cultures, the basic capacity for language appears to be built into our species. Some scientists argue that language is primarily the product of natural selection working within the hominid lineage over the past few million years. Others suggest (argue might be too strong a word) that a lot of the components of language may have already been in place before our ancestors parted evolutionary ways with other apes. That would leave natural selection with a relatively small role in giving rise to human language.
Debates in evolutionary biology can be fierce and sometimes even ugly, and as a result they can give the misleading impression that the two sides are as different as black and white. Usually, however, the debate is over how much natural selection was responsible for shaping a feature in its current form and function. Take the evolution of bird feathers. Birds use them for flight, and they are exquisitely adapted for flight in their subtlest details. But fossils suggest that dinosaurs had feathers long before birds flew. So natural selection for flight did not produce feathers. As the ancestors of today's birds started flying, however, natural selection probably then sculpted their feathers for better performance in the air.
In the case of the language debate, both sides agree that hominids inherited a set of capacities that may now play a role in language. Both sides agree that at least some natural selection helped shape those capacities. The question is where between the two extremes the balance actually lies. The best way to settle that question is to find more clues about the origin of language.
The first clue comes from squeaky mice.
In 2001 scientists identified a gene involved in spoken language. They found it by studying a Pakistani family in which half the members suffered from a disorder that interfered with their ability to understand grammar and to speak. The scientists tracked the disorder back to a single mutation to a single gene, which is now known as FOXP2.
FOXP2 belongs to a family of genes found in animals and fungi. They all produce proteins that regulate other genes, giving them a powerful role in the development of the body. FOXP2 in particular exists in other mammals, in slightly different forms. In mice, for example, the part of the gene that actually encodes a protein is 93.5% identical to human FOXP2.
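In case the percent-identity figure seems abstract, it boils down to very simple counting over an alignment. Here's a minimal sketch with two invented protein fragments standing in for the real human and mouse FOXP2 sequences.

```python
# A minimal percent-identity calculation. The two fragments below are invented
# stand-ins, not real FOXP2 sequence.
def percent_identity(seq1, seq2):
    """Fraction of aligned positions at which two equal-length sequences match."""
    assert len(seq1) == len(seq2), "sequences must already be aligned"
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

human_fragment = "MKLQRSTANDEVWGHIPFCY"   # invented
mouse_fragment = "MKLQRSTANEEVWGHIPFCY"   # invented, one substitution
print(f"{percent_identity(human_fragment, mouse_fragment):.1f}% identical")  # 95.0%
```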
The following year another group of scientists compared the human version of FOXP2 to the sequence in our close primate relatives. They found that chimpanzees have a version of the gene that's hardly different from the gene in mice. But in our own lineage, FOXP2 underwent some fierce natural selection. By comparing the minor differences in FOXP2 carried by different people, the scientists were able to estimate when that natural selection took place--roughly 100,000 years ago. That's about the time when archaeological evidence suggests that humans began using language. (For a good review of all this work, go here.)
What exactly was FOXP2's role in the evolution of language? A group of scientists decided to see what sort of role it played in other animals. They genetically engineered mice lacking FOXP2 (some had one copy, others had none). Then they watched the mice develop. As the scientists reported this week, the mice experienced many changes, but the most tantalizing one was in their squeaks. Mice communicate with one another a great deal with ultrasonic sounds, and their squeaks can convey a lot of information. They are particularly important for pups, so that they can get help from their mothers. Pups missing FOXP2 had serious trouble squeaking for Mom when the scientists removed them from their nests. The trouble did not lie anywhere in their vocal tract, which developed normally. The scientists found instead that the neurons in a region of the brain at the back of the head known as the cerebellum hadn't developed properly. The cerebellum is known to play a vital role in motor control, so perhaps the mice couldn't manipulate their throats properly.
Before this study, scientists already knew that FOXP2 was important to the development of other animals, but now the evidence suggests that the gene was already playing a role in communication in the common ancestor of mice and humans, perhaps 80 million years ago. Given that the FOXP2 gene in chimpanzees and mice is barely different, it seems to have evolved little in our ancestry from 80 million years ago to 6 million years ago. That's interesting when you consider that primates have some pretty elaborate communication systems, and that a clever chimpanzee can be taught a simple language by humans. Perhaps FOXP2 continued to play a role in the brain's control of the voice anatomy, while other genes were evolving to handle other aspects of communication. And if FOXP2 in fact only underwent significant evolution in our lineage after we split from other apes, the new research may give a clue as to what happened during that evolution. People with mutations to FOXP2 have trouble controlling their mouths, and they also have trouble with grammar. Perhaps the gene took on this second role in the past 100,000 years.
As I blogged here, scientists have looked for more clues to the function of FOXP2 with brain scans. They compared activity in the brains of people with mutations to FOXP2 to people with normal versions of the gene as both sets of people did different language tasks, such as thinking of verbs that go with nouns. The scientists found that a change to FOXP2 changes the way the brain handles language. Specifically, in people with mutant copies of the gene, a language processing area of the brain called Broca's area is far less active than in people with normal FOXP2.
Broca's area has a long history in neuroscience. In 1861 the French physician Pierre Broca treated a man who had suffered a stroke that robbed him of his ability to say anything except the word "Tan." (He said it so much that he was nicknamed Tan.) Despite this devastating blow to his faculty of language, he could still understand the speech of other people. After Tan's death, Broca autopsied his brain to find exactly what part of the brain had been damaged. It turned out that the stroke had destroyed part of Tan's left frontal lobe. Broca looked at other patients with the same condition (known as aphasia), and found that they too suffered damage in the same area--what came to be known as Broca's area.
Scientists are still trying to figure out what Broca's area actually does in language. It's possible that it does several things at once, or that it's actually a collection of smaller regions that have different jobs. While Broca's area may help us control our mouths, that's not its only role. In the recent scanning experiment I mentioned, it became active even when people just thought about words.
The second clue this week comes from Broca's area—or at least the corresponding part of a monkey's brain.
Monkey brains and human brains are similar enough that scientists can find some of the regions in one species in the other, albeit in a different size and shape. There's been a lot of debate, however, about whether a counterpart to Broca's area exists in monkeys. If it didn't, that would suggest that it must have emerged in our own lineage after the split with the ancestors of living monkeys.
This week in Nature, scientists report that monkeys do have Broca's area. They show that the neurons in a patch on the left side of a monkey's brain are organized in the same way as in Broca's area. The patch also borders regions that correspond to the ones bordering Broca's area in the human brain. The scientists then put microelectrodes in the brain region and ran small currents through it to see what would happen. The monkeys moved their jaws and tongues. So Broca's area was already controlling the mouth 30 million years ago. At some point later, apparently, it became more adapted for speech in our lineage. Exactly how FOXP2 got involved in Broca's area remains a mystery.
These two clues don't show exactly where to set the balance between "pre-adaptation" and natural selection when it comes to language. But they do help reveal the building blocks that were put to use at some point.


Last year I went to a fascinating symposium in honor of the great evolutionary biologist George Williams. The March issue of the Quarterly Review of Biology ran a series of papers written by the speakers at the meeting that offered much more detail on how Williams had influenced them in their various fields. Randolph Nesse of the University of Michigan gave one of the most interesting talks at the meeting on maladaptation and what it means to human medicine. You can download the pdf from his web site.
To whet your appetite, here's a nice passage on the eye:
"It works well when it works, but often it does not. Nearly a third of us have hereditary nearsightedness, and almost no one over 55 can read a phone book unassisted (except for those who have been nearsighted for decades!). The lovely mechanism that regulates intraocular pressure often fails, causing glaucoma. Then there is the blind spot, a manifestation of the abject design failure of nerves and vessels that penetrate the eyeball in a bundle and spread out along the interior surface instead of penetrating from the outside as in the betterdesigned cephalopod eye. Octopi not only have a full field of vision, but they need not worry about retinal detachment. They also need neither the tiny jiggle of nystagmus that minimizes the shadows cast by vessels and nerves on the vertebrate retina nor the brain processing mechanisms that extract the visual signal from the nystagmus noise. In short, the vertebrate eye is a masterpiece not of design, but of jury-rigged compensations for a fundamentally defective architecture."


In the comments, Doug gets exasperated with some recent posts of mine:
“Isn't it amazing how everything seems to provide evidence for evolution? The brain shrinks in some form of pygmy homo erectus. Thats evolution! Ancient genes survive millions of years unchanged. That's evolution?! Women have orgasms. That's evolution! Although not all women have orgasms and they still manage to reproduce hmm luckily with the right spin...That's evolution! We live in a civil society with people working for cooperative goals. That's evolution! Unfortunately some people murder and rape. Just an unfortunate side-effect, but that's evolution.
“Not only is everything evidence for evolution but evolution explains everything! No its not circular reasoning its Evolution! Thank goodness we don't need to resort to God to explain the world around. Now we have Evolution! Its the all-encompassing answer to the ultimate question (I always thought it was 42). The evolutionist has reached the omniscient nirvana. maybe we should start meeting at the biology lab on Sunday mornings. We can sing some Evolution Hymns. Do they exist? Don't worry they'll evolve. I'll just start selectively pressing some keys on the organ and type a few letters while blindfolded. Okay I'm getting a little carried away...chalk it up to evolution.” [sic]
I find that in situations like this, it helps to step back for a moment from evolution and look at the other major scientific theories of the past couple centuries that explain a lot about the natural world. You could translate Doug's complaints about evolution into complaints about any of them.
Take the theory of plate tectonics. According to this theory, the Earth is covered in plates of crust. Each plate grows along one margin with molten rock that rises from the Earth's interior. The margin on the other side of the plate is cold and sinks down into the interior, where it is remelted and mixed up with the rock down there. Continents ride on top of these plates. In some cases they crash into each other, such as India and Asia, forming mountains. In other cases, a new rift splits a plate apart, pushing continents apart, as with Africa and South America.
From the 1920s to 1960s, geologists put together this theory as a way to explain patterns on the Earth. They couldn't actually see the continents crash into each other like bumper cars, because the process takes millions of years. Instead, they had to develop hypotheses that they could then test by looking at the Earth. For example, they calculated the age of rocks around mid-ocean ridges. The rocks closer to the ridges were younger than the ones further away. Years of studies both in the field and in the lab have strengthened the theory, but they've also led scientists to expand it from its original form. The original theory didn't account for what was driving hot rock up from the interior in the first place, for example. Yet new ideas for these sorts of things do not invalidate the realization that the continents move.
Now imagine a blog about plate tectonics (I wish there was one). The blog is dedicated to new research into how all the dizzying variety of landscapes on the planet, from jagged cliffs to undersea volcanoes, are produced by the Earth's geological engine. It could even have a few posts about how plate tectonics helps explain some things you might never expect geology to explain, such as why it is that some animals in Africa and South America are surprisingly similar. Answer: their common ancestors lived at a time when the two continents were still joined together.
Imagine the sort of exasperated comments such a blog would get:
"Isn't it amazing how everything seems to provide evidence for plate tectonics? Continents split apart. That's plate tectonics! Continents crash into each other. That's plate tectonics?! Plates sink under other plates. That's plate tectonics. Although some plates actually slide past each other. That's plate tectonics. Not only is everything evidence for plate tectonics, but plate tectonics explains everything! No it's not circular reasoning, it's plate tectonics! Thank goodness we don't need to resort to God to explain the world. Now we have plate tectonics!"
Any theory that would explain the Earth's landscape has to be able to account for a huge variety of features. The same goes for any theory that would explain the Earth's biological diversity. Just consider fish. There are fish with eyes and fish without. Most fish only swim, but some fish can fly and some can crawl on dry land. A theory that could only shed light on one kind of fish wouldn't be much of a theory at all.
The theory of evolution explains this variety, but not in an arbitrary way. Fish descend from a common ancestor, and along the way they have been modified, primarily through natural selection, into different forms. Flying fish do not have wings made out of balsa wood. Their wings are actually modified fins. The fins that some fish use to crawl on land are also clearly modified from the fins other fish use to swim. Fish without eyes still retain the genes required to form eyes, but they have been modified so that the eyes never fully develop. If these fish really did evolve from a common ancestor, you'd expect that their DNA would reflect this common kinship. And it does. If these fish really did evolve from a common ancestor, you'd expect that the fossil record would be consistent with their descent. And it is.
As a result, the specific examples that Doug brings up are not circular, but rather are particular cases of well-studied patterns in evolution.
Dwarfing is not an idea that someone came up with when Homo floresiensis was discovered. It's been documented in many animals. Is there a compelling explanation for how full-sized elephants come to islands and then become the size of cows other than evolution? Let's hear it.
The genes Doug refers to are the ones found in jellyfish and humans. As animals, we descend from a common ancestor. We have lots of genes in common with jellyfish—genes for building cells, proteins, and DNA, for example. Now it turns out that some body-building genes are also conserved in humans and jellyfish. But these genes are not carbon copies of one another. They have been modified in each lineage, just as you'd expect if life did indeed evolve.
Doug's example of female orgasms raises another important point: an overarching theory about the history of life or the Earth does not automatically give you all the answers about that history. How did the Andes Mountains form? If a geologist simply says, plate tectonics, that's not a very satisfying answer. Yes, plate tectonics were involved, but how? It turns out that the best explanation geologists have is a staggeringly complex interplay of continental collision, flowing rivers, and climate change. But the issue is still very much in debate. Orgasms are also an open question, as are the precise evolutionary origins of many things in nature. Natural selection may well turn out not to have much to do with human female orgasms. We'll see.
If a scientific theory can explain an aspect of the natural world, withstand scrutiny, and lead to important new insights into how the world works, we really shouldn't hold its success against it. No one's asking for evolution hymns—certainly no more than they're asking for gravity hymns or hymns to the periodic table of the elements.


Back in 1986 a biologist named Cindy Lee Van Dover was poking around the innards of shrimp from the bottom of the sea. They came from a hydrothermal vent in the Atlantic, where boiling, mineral-rich water came spewing up from cracks in the Earth’s crust and supported rich ecosystems of tube-worms, microbes, crabs, and other creatures. The animals that lived around these vents were generally blind, which wasn’t surprising considering that no sunlight could reach them. But Van Dover noticed that they had two flaps of tissue running along their backs that connected to nerves. Closer inspection revealed that the tissue was actually made of light-sensitive pigments and photoreceptors. What, Van Dover wondered, could these shrimp possibly be looking at? Dives to mid-ocean ridges later revealed that the vents produce a faint light of their own.
In 1996 I wrote a story for Discover about Van Dover’s obsession with deep-ocean light. At the time she was fascinated by the possibility that vents might make enough light to support photosynthesis. The sunlight that reaches the Earth’s surface is a million times brighter than vent light, but scientists have found microbes 240 feet down in the Black Sea that can survive on an equally scanty supply of photons. But at the time it was just speculation.
It is very cool to see that nine years later Van Dover hasn’t lost the obsession. In a paper just published in the Proceedings of the National Academy of Sciences, she and her colleagues report their discovery of photosynthetic bacteria living around deep-sea vents. On a cruise to the East Pacific Rise, the scientists bottled vent water and then took it to a lab to culture the microbes they had trapped. One species was able to grow only in the presence of light, which it absorbed with photosynthetic pigments. The researchers doubt that the microbes drifted into the bottles from somewhere else because they seem well-adapted to the vents. They feed on sulfur compounds that are spewed up through the vents. The water around the vents is poor in oxygen thanks to chemical reactions there, and the bacteria thrive in the absence of oxygen. The nearest place where the bacteria might enjoy these features is thousands of miles away from the vents.
Over at Cosmic Log, Alan Boyle discusses what this discovery means for the search for life elsewhere in the universe. Short answer: photosynthetic organisms might be dwelling in the dark on other planets. I found myself thinking about another implication of the discovery, which I discussed in my 1996 article. Photosynthetic life may have existed on Earth 3.7 billion years ago, and scientists would like to know how the necessary chemistry for harnessing sunlight first evolved. Van Dover and her colleagues suggested that it might have gotten its start around hydrothermal vents. It could have evolved from a means for simply detecting light. Like shrimp, microbes don’t want to get too close to the water coming directly out of a vent because they’ll get fried. Over time, these bacteria might have evolved the ability to harness the energy of the light as well. Later, some of these deep-sea photosynthesizers might have been carried to shallower vents, where they might also be able to catch light from the sun. From these migrants came the sunlight-harnessing molecules that allow bacteria to consume trillions of tons of carbon. Some algae acquired this machinery as well, probably by eating photosynthetic bacteria, and they in turn gave rise to land plants. In other words, our forests and lawns got their start at the bottom of the sea.
These newly discovered bacteria don’t clinch this argument by any means. They are not living fossils unchanged for four billion years. But they at least show that a key part of this evolutionary scenario is plausible: that photosynthetic organisms can survive around deep-sea vents. That’s certainly an idea that nobody thought of before Van Dover began poking around in dead shrimp.



I’ve got an article in today’s New York Times about jellyfish and their kin—known as cnidarians. Cnidarians look pretty simple, which helped earn them a reputation as simple and primitive compared to vertebrates like us, as well as insects, squid, and other creatures with heads and tails, eyes, and so on (known as bilaterians). But it turns out that a lot of the genes that map our complex anatomy are lurking in cnidarians, too. Scientists are now pondering what all that genetic complexity does for the cnidarians. They’re also using these findings to get a better idea of how the major groups of animals evolved between 600 and 500 million years ago.
For those interested in some of the gory details, check out PZ Myers’s take. Be sure to follow the links to earlier comments on some of the key papers on this research, plus diagrams.
In addition, curious readers can check out:
The timing of the evolution of cnidarians and bilaterians (full text)
The evolution of diploblasty (development from two embryonic layers)
Update, 4:20 pm: PZ Myers link fixed (and spelling corrected!).





It’s strange enough that beetles grow horns. But it’s especially strange that beetles grow so many kinds of horns. This picture, which was published in the latest issue of the journal Evolution, shows a tiny sampling of this diversity. The species shown here all belong to the genus Onthophagus, a group of dung beetles. The colors in this picture are false colors; they show which parts of the beetle body the horns grow from. Blue horns grow from the back of the head, red from the middle of the head, and purple from the front of the head. Green horns grow from the center of the body plate directly behind the head (the pronotum), and orange horns grow from the side of the pronotum. These beetles can grow horns big and small, single or multiple, shaped like stags or like rhinos. And, as these colors show, the beetles can take very different developmental paths to get to their finished product. The biologist JBS Haldane was supposedly asked once if he could say anything about God from his study of nature. Haldane replied that He must have “an inordinate fondness for beetles.” Add to that a fondness for putting horns on those beetles.
A century before Haldane, Charles Darwin was fascinated by the horns of these beetles. He proposed that they were produced through sexual selection. Natural selection was based on how traits helped an organism survive and have offspring—staying warm in winter, fighting off diseases, and so on. Sexual selection was based on the struggle to have sex. If females preferred to mate with males with certain traits (big tails in peacocks, for example), the males would gradually evolve more and more elaborate versions of that trait. Males might also fight with other males to get access to females, and here too their struggle could lead to baroque anatomy—such as beetle horns.
Modern evolutionary biologists have followed up on Darwin’s suggestion, and have made a close study of beetle horns. There are thousands upon thousands of species with horns to compare, and scientists can observe how the horns develop and are used by the males. A lot of fascinating work has been published on beetle horns, such as the work I described in this post. The picture I’ve shown here comes from a paper that represents a big step forward in understanding this explosion of diversity. Douglas Emlen of the University of Montana and his colleagues have, for the first time, reconstructed some of the evolutionary history that produced this embarrassment of horns. (You can download the paper for free here on Emlen’s web site.)
Emlen and his colleagues focused their attention on Onthophagus dung beetles. These beetles, which are found all over the world, search out dung and then dig a tunnel underneath it. The female beetles then make a ball of the dung and lay eggs in it. The male beetles will guard the opening of these tunnels and fight off any males that try to get in and mate with the female inside. It’s here that the horns come in handy, helping a guarding male make it impossible for other males to get past him.
The scientists extracted DNA from 48 different species of Onthophagus and used the sequence to figure out their evolutionary relationships. They then reconstructed the changes that evolved in the horns as new species arose. And finally, the researchers looked at the natural history of the animals—where they lived, how they lived, and the like.
It turns out that beetle horns have changed a lot. Judging from the species that sit on the oldest branches of the tree, the scientists concluded that the common ancestor of these 48 beetle species had a single horn growing from the base of the head (the second from the top on the left hand side of the photo may bear a resemblance). As new species arose, they tended to grow bigger horns, and they also tended to grow horns from new parts of their bodies. On the other hand, sometimes a lineage with elaborate horns gave rise to species with much smaller ones. Sometimes one horn became two which became one. This chart shows the tortuous paths that evolution has taken in these beetles. (The thickness of the arrows shows how often these transformations took place in different lineages.) Given that the researchers analyzed only 48 out of the 2000 Onthophagus species, the true scale of change is probably far greater.
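For readers curious about the nuts and bolts of that reconstruction step, here is a minimal sketch of one classic way to count character changes on a phylogeny—Fitch parsimony—written in Python. This is not Emlen's actual method (comparative studies like this typically use model-based ancestral-state reconstruction), and the toy tree, the horn-state labels, and the function name are all invented for illustration.

```python
# A minimal Fitch-parsimony sketch: count the fewest horn-state changes
# needed to explain the states seen at the tips of a (made-up) tree.

def fitch(tree):
    """Return (possible ancestral states, minimum number of changes)."""
    if isinstance(tree, str):                # a tip: the observed horn state
        return {tree}, 0
    left, right = tree
    lstates, lchanges = fitch(left)
    rstates, rchanges = fitch(right)
    shared = lstates & rstates
    if shared:                               # children can agree: no extra change
        return shared, lchanges + rchanges
    return lstates | rstates, lchanges + rchanges + 1   # one more change needed

# A hypothetical four-species tree, each tip scored by where its horn grows.
toy_tree = (("head_base", "head_base"), ("pronotum", ("pronotum", "none")))
states, changes = fitch(toy_tree)
print(states, changes)   # candidate root states and the minimum change count
```

On this toy tree the answer is two changes at minimum—the same kind of minimum-change bookkeeping, scaled up to 48 species and many horn traits, that lies behind the arrows in the chart.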

Emlen and his colleagues argue that sexual selection has driven the horns of these beetles to outrageous lengths. If you’re a male dung beetle and you want to pry another male out of his tunnel, it helps to have a longer horn. If you’re that male in the tunnel, your own chances of victory depend on the horn too. So it’s the males with bigger horns that are most likely to win. And yet, as this beetle flow chart shows, these insects have lost their weaponry in some lineages. What is the countervailing force in beetle evolution?
Horns, the scientists point out, are expensive. It takes a lot of energy and the dedication of large swaths of a beetle’s body to grow horns. When you’re talking about horns that can get longer than a beetle’s entire body, the costs can be huge. In fact, growing bigger horns means that beetles have to reduce the size of other organs. Experiments have shown that the growth of horns can reduce the size of beetle eyes by 30%.
The researchers proposed that growing horns would force a trade-off with other important parts of the body, such as eyes and antennae. And the beetle tree supports their proposal. It is harder for beetles to detect the odor of dung with their antennae in a pasture than in a forest, because the odor plumes last longer in the woods. Four out of the five gains of new horns took place in forests—perhaps because beetles could afford to grow smaller antennae in a place where smelling wasn’t so hard. On the flip side, in seven of the nine cases in which horns were lost, the beetles became nocturnal. Beetles that fly at night need larger eyes, and so they can’t afford to shunt resources to big horns any more. The pressure to evolve bigger horns still exists in these lineages, but it’s been offset by other demands.
Emlen's study is a nice reminder that we don't have to stand back, slack-jawed, at nature's diversity. When you look at a line-up of beetle horns like the one at the beginning of this post, they can seem like an impenetrable mystery. But an understanding of how these beetles live, and how they evolved from a common ancestor, makes them less mysterious. But no less marvelous.


Today in Science, scientists reported a potentially big advance in creating embryos that can be used for stem cell transplants. Briefly put, they figured out how to take skin cells from patients, inject them into donated eggs emptied of their own DNA, and nurture them along until they had divided into a few cells. The cells were able to develop into a wide range of cell types, their chromosomes were normal, and they were so similar to the cells of the individual patients that they would not be rejected as foreign tissue. The research stopped there, but the dream behind this work is to heal your failing liver or heart or dopamine-producing neurons by clipping off a little skin and farming new cells that could regenerate those organs.
This research was designed in part to overcome a problem with stem cells that is part of the evolutionary baggage we carry--a problem I blogged about in January. Traditionally, embryos have been nurtured by "feeder cells" from mice and calf serum. This turned out to make these embryos--and any stem cells derived from them--useless due to contamination. Roughly two million years ago, our ancestors lost a gene that produced a sugar on the surface of our cells. Other mammals still produce it. The earliest hominids probably produced it too. But new species of hominids that emerged after two million years ago, such as our own and Neanderthals, didn't have it.
It turns out that if you feed an embryo with cells or serum from other mammals, its cells will absorb the sugar and stick it on their surface. To the human immune system, they look foreign. In other words, human evolution can shed light on current stem cell research. The scientists who did the new research figured out how to avoid rejection by coming up with a way to nurture the embryos with human feeder cells, so that the stem cells never pick up the sugar our own cells stopped making long ago.
Reading about this advance, I felt a grim sense of irony. As I wrote in my original post, President Bush stopped federal funding for research on stem cells using new lines derived from embryos, despite the fact that most of the already existing lines were contaminated by this lost sugar. American scientists have been making some progress with stem cells with private money and state initiatives, but guess where scientists finally figured out how to solve this evolutionary problem with cell sugars? South Korea.
Reading about this research, I was also reminded of an article I read last week during the Kansas "trial" over evolution and creationism.
Leonard Krishtalka, the director of the Kansas University Natural History Museum, was quoted pointing out that Kansas is raising $500 million to foster a bioscience and biotech industry in the state. It was ironic, he said, that the state's board of education was simultaneously "trying to remove and water down the basic fundamental concept of evolution that underlies all of biology."
Case in point: try to imagine a stem cell therapy company deciding where to set up shop. I doubt they'd be excited about a state that doesn't make sure its high school students understand mutations, natural selection, the origin of species, the fossil record, and all the other elements of evolutionary biology--a state that thinks it's fine to claim that the broken sugar gene in our genome was just stuck there for reasons unknown by some mysterious designer.


Judging from fossils and studies on DNA, the common ancestor of humans, chimpanzees, and bonobos lived roughly six million years ago. Hominids inherited the genome of that ancestor, and over time it evolved into the human genome. A major force driving that change was natural selection: a mutant gene that allowed hominids to produce more descendants than other versions of the gene became more common over time. Now that scientists can compare the genomes of humans, chimpanzees, mice, and other animals, they can pinpoint some of the genes that underwent particularly strong natural selection since the dawn of hominids. You might think that at the top of the list the scientists would put genes involved in the things that set us apart most obviously from other animals, such as our oversized brains or our upright posture. But according to the latest scan of some 13,000 human genes, that's not the case. Natural selection has been focused on other things--less obvious ones, but no less important. While the results of this scan are all fascinating, one stands out in particular. The authors of the study argue that much of our evolution is the result of a war we are waging against our own cells.
It's possible to reconstruct the history of natural selection thanks to a quirk in DNA. Genes carry the code for making proteins, but it's possible to change the code without changing the resulting protein. Consider for example how cells stick new amino acids at the end of growing proteins. The nucleotides in a gene can't have a one-to-one correspondence to amino acids, since there are only four nucleotides in DNA and twenty amino acids. Instead, the cell reads three nucleotides at a time from a gene, and then chooses the matching amino acid. The triplet CUU makes the amino acid leucine, for example. But so do CUC and CUA. In many cases, the last nucleotide in a triplet is irrelevant.
If a hominid's genes mutated such that CUC became CUA, the mutation would have no effect, for good or bad, on that hominid, because the mutation wouldn't change its proteins. Scientists have found that these "silent" mutations can slowly spread through an entire species thanks merely to chance. If you compare a particular gene from a human and a chimpanzee and a gorilla, you'll see that each species has picked up some silent mutations since it split off from the common ancestor of all three species.
But mutations that actually change a protein's structure are a different kettle of fish. Many of them turn out to be outright disasters, leading to diseases, spontaneous abortions, and so on. These mutations tend to be weeded out by natural selection. On the other hand, mutations that change proteins in an adaptive way can spread quickly. And if a protein is under intense natural selection, a whole series of mutations to coding DNA may build up in its gene.
One sign that a gene has undergone intense natural selection in the past is the ratio of protein-changing mutations to silent mutations in its sequence. If the protein-changing mutations significantly outnumber the silent ones, it's a safe bet that the skew is the result of intense natural selection. There are other methods for detecting natural selection, but what I've described here is the basic idea behind the new PLOS Biology paper. Expanding on an earlier scan, the researchers looked for genes that showed signs of significant natural selection by comparing their sequences in humans and chimps. They then sorted these genes according to the organs in which they are most active, and made a "Top-50" list of the genes that have undergone the most intense natural selection.
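To make that ratio concrete, here is a minimal sketch in Python of the basic bookkeeping, assuming DNA codons (T where the RNA spelling above uses U). The tiny codon table, the two sequences, and the function name are invented for illustration; a real scan like the one in the paper compares whole genes with statistical codon models, not a raw count.

```python
# A toy illustration of counting protein-changing vs. silent codon differences.
# The codon table is deliberately tiny and the sequences are invented.

CODON_TO_AA = {"CTT": "Leu", "CTC": "Leu", "CTA": "Leu",   # leucine synonyms
               "ATG": "Met", "AAA": "Lys", "AGA": "Arg"}

def classify_differences(seq_a, seq_b):
    """Count (protein-changing, silent) codon differences between two genes."""
    changing = silent = 0
    for i in range(0, len(seq_a), 3):
        codon_a, codon_b = seq_a[i:i + 3], seq_b[i:i + 3]
        if codon_a == codon_b:
            continue
        if CODON_TO_AA[codon_a] == CODON_TO_AA[codon_b]:
            silent += 1          # same amino acid: invisible to selection
        else:
            changing += 1        # different amino acid: selection can act on it
    return changing, silent

human = "CTTATGAAA"
chimp = "CTCATGAGA"              # one silent change, one protein-changing one
print(classify_differences(human, chimp))   # -> (1, 1)
```

Scaled up over a whole gene, a count of protein-changing differences that towers over the silent ones is the signature the researchers were hunting for.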
The human brain, remarkably enough, shows no sign of harboring a lot of fast-evolving genes compared to other organs. "In fact," the authors write, "genes expressed in the brain seem to be among the most conserved genes with the least evidence for positive selection." Instead, they suggest, our unparalleled brains may have evolved through adaptive changes in relatively few genes, or perhaps by borrowing existing genes that were active elsewhere in the body (I've blogged about this gene-borrowing here).
So where did all the intense selection take place? Some of it turns up in the immune system, which must battle a rapidly evolving army of parasites. Some of it turns up in the nose, possibly in order to sniff out dangerous foods or possibly to recognize suitable mates. Some of it seems to be involved in how sperm and egg recognize one another. But the most fascinating set of fast-evolving genes do something else altogether: they control the way cells kill themselves.
Suicide is essential for a healthy body. Cells kill themselves for many good reasons--to protect other cells if they are infected with a dangerous pathogen, for example, or to stop the growth of an organ once it reaches the right size. Our hands would look like webbed duck feet if the cells between our fingers didn't commit suicide.
Sperm turn out to be a particularly suicidal bunch. Three-quarters of potential sperm cells kill themselves. Some researchers have suggested that they are so prone to suicide because their population needs to be kept in balance with the other cells in the testes that nourish them. The death of the individual sperm benefits the entire population--and thus the man who carries them.
On an evolutionary level, this creates a conflict between sperm and man. If one of the cells should mutate in such a way that it could escape suicide, it could reproduce madly while other sperm cells dutifully destroyed themselves. These mutant sperm would then be more likely to reach an egg, and as a result the mutant suicide gene would become more common.
While this kind of mutation may favor an individual sperm, it may do harm to its owner. His overall sperm production might suffer as a result of this mutiny down below, for example. It might even increase his risk of cancer. After all, one of the hallmarks of cancer is the mutation of suicide genes, allowing cancer cells to grow rapidly into tumors. Once a sperm fertilized an egg, its suicide-escaping genes would wind up in every cell of the resulting person, raising their chance of turning cancerous. (See this post for more on the intersection of evolution and cancer.)
The authors of the study point out that many of the genes that end up near the top of their list have long been known to be involved in cancer. Perhaps, they suggest, many cases of cancer are the result of this pressure on sperm to escape suicide. And if their hypothesis is right, then you'd expect that a mutation that can stop these renegade sperm from wreaking havoc might be favored by natural selection. There are a number of genes that are crucial for suppressing tumors, and--as predicted--they are also among the fastest-evolving genes. In fact, some of these fast-evolving tumor suppressing genes are only active in the testes, where they may be keeping sperm in check.
This sort of two-level evolution may seem bizarre, but biologists are documenting a growing number of cases of it. It was particularly important, for example, in the evolution of multicellular animals from single-celled protists some 700 million years ago. But it's hardly ancient history, this new study suggests. Every time cancer strikes, it makes its presence known.
Update, 7pm: PZ Myers offers a detailed tour of the Top 50.


On Thursday I predicted that pundits would make the rediscovery of the Ivory-billed woodpecker an opportunity to criticize predictions that humans are causing mass extinctions--while conveniently ignoring evidence that goes against their claims. Today I came across the first case I know of, which appears in a short Week-in-Review piece about the woodpeckers in the New York Times. (You have to scroll down a bit to the article.)
First, a conservation biologist is quoted saying that most things that scientists think are extinct are extinct. The article then ends with this:
But Stephen Budiansky, the author of several books on natural history, said the discovery points out how uncertain the business of predicting extinctions of species great and small - mostly small - can be.
"All of the big numbers we have heard, of tens of thousands of extinctions worldwide, are not based on field observations," Mr. Budiansky said. "They're based on very simplistic mathematical models. But there's a huge gap between those predictions and the numbers of species we can actually confirm are extinct."
Budiansky's name may be familiar to you, especially if you followed the link to a paper by Stuart Pimm I provided in the last post. In the early 1990s, Budiansky was one of the first people to float the idea that North American birds demolish estimates of the current extinction rate based on habitat loss. Budiansky didn't actually make these claims in a scientific paper, but first in an article for U.S. News and World Report, and then later in a book, Nature's Keepers. In the paper I linked to, Pimm explicitly cites Budiansky's claims, and then proceeds to show that they are wrong. Fast forward some ten years. In that time Budiansky has, as far as I can tell from my search, never responded to Pimm's paper in a scientific journal or magazine. Nevertheless, he's still ready to hold forth about extinctions. I suppose he's trying to be controversial, but from his quote, you'd think that conservation biologists made these mathematical models in some smoke-filled back room and kept them a sworn secret. But you just need to look at Pimm's paper, or any other in this area, to see that they've always been upfront about using mathematical modeling to make predictions--just as a chemist uses mathematical models to make predictions about a chemical reaction, or a meteorologist uses models to predict the weather next week. But there's also been a long tradition of fieldwork (and experiments) to test the assumptions of the model. As for the huge gap between predictions and numbers of species we can actually confirm are extinct, if Budiansky wants to bankroll the millions of field biologists who would be needed to track the fate of all the millions of species on Earth over the next 200 years, I'm sure no conservation biologist would complain. But until then, our knowledge will have to remain imperfect.
(For those who want more: 1. Stuart Pimm gave an interesting talk on all this a couple years ago, and the transcript and audio file are available here. 2. For a similar case of flimsy "skepticism" about extinctions, see this post.)


From time to time, scientists discover that a species that was once thought to have become extinct is actually surviving in some remote place. If the species is a salamander or a lemur, it gets a quick headline and then promptly goes back to its obscure, tenuous existence. But here's one rediscovered creature that I suspect will get some major press: the Ivory-billed woodpecker is back. Science is publishing a paper in which scientists report several sightings and a video of the magnificent bird, which hadn't been seen in the United States since 1944. Here is a report from the AP.
The challenge of studying extinctions is that it can be hard to know when a species is finally gone for good. If a species of flower lives only on a single bare island the size of a hot-dog stand, you can be pretty sure that if you don't see any of the flowers for a few years, it's gone. But if, as is the case for the Ivory-billed woodpecker, a species exists in remote forests and at low density, the failure to see it may just mean scientists haven't looked everywhere. Eventually, most scientists will just give up and presume the animal extinct. As a result, ornithologists and amateur birders have been wondering for decades whether the woodpecker is actually still alive. Incredibly, it is--in some remote woods in Arkansas.
So what does it mean that today the Ivory-billed woodpecker seems to be alive? Is it proof that environmentalists have been crying wolf about the dangers of extinction? Do we not need to worry? Is wildlife taking care of itself?
A couple of maps can help put the discovery into perspective. This first map shows the original range of the ivory-billed woodpecker. It thrived in mature forests in the southeastern United States, particularly along the coasts and up the Mississippi. The second map shows its range between 1900 and 1930. The striped regions are habitat that the woodpecker lost between 1900 and 1930. The orange spots were all that was left of its range in 1930.


The reports today do not mean that the woodpeckers are actually living in their former range. They don't even show that the bird exists in its 1930 range. The sightings were all made in the Arkansas patch--a tiny portion of the area in which the woodpecker once lived. The researchers say in their paper that the sightings were made in some 200,000 hectares of Arkansas forest that all might be well-suited to the woodpeckers. Is that cause for optimism? It depends on the biology and ecology of the birds. Will they be able to sustain a healthy population in a relatively small remnant of their original range? That's an open question. It is possible that the woodpecker may also be lurking in other parts of its former range, but that doesn't necessarily boost the species's odds of survival. Such a hypothetical population might well be isolated from the Arkansas population, like two islands separated by hundreds of miles of ocean. If one population disappears due to inbreeding, disease outbreaks, or some other disaster, its numbers won't be boosted by immigrants from the other population.
This gets to the heart of the extinction process. Conservation biologists have argued for a long time that as habitats get fragmented, the chances that the species they are home to will become extinct go up. Given the rate at which forests have been cleared, wetlands drained, and so on, they've warned that we face a massive pulse of extinctions. (Of course, pollution, hunting, invasive species, and other assaults don't help, either.)
Some skeptics such as Bjorn Lomborg have claimed that this is just fear-mongering. They pointed out that of the 200-some species of birds in eastern North America when Europeans arrived with their axes, only 4 were considered extinct--including, at the time, the Ivory-billed woodpecker. Given that the European settlers cleared vast swaths of forests, some simple calculations would suggest that 26 species should have become extinct.
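For a sense of where a number like 26 comes from, here is a rough sketch of the species-area arithmetic such calculations rely on, assuming the standard power law S = cA^z. The habitat-loss fraction and the exponent z used below are illustrative guesses, not the figures behind the original estimate.

```python
# A rough sketch of species-area bookkeeping, assuming S = c * A^z.
# The exponent z (often quoted between roughly 0.15 and 0.35) and the
# fraction of habitat left are illustrative assumptions only.

def predicted_extinctions(n_species, fraction_of_habitat_left, z=0.25):
    """Species expected to be lost when habitat area shrinks."""
    surviving = n_species * fraction_of_habitat_left ** z
    return n_species - surviving

print(round(predicted_extinctions(200, 0.5)))          # ~32 if half the forest goes
print(round(predicted_extinctions(200, 0.5, z=0.15)))  # ~20 with a shallower curve
```

As the next paragraph explains, the problem with the skeptics' version of this arithmetic is not the power law itself but how carelessly it gets applied to real ranges and real habitat loss.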
Ten years ago Stuart Pimm, now at Duke, demonstrated that this argument was meritless. In a paper in the Proceedings of the National Academy of Sciences, he pointed out that predictions of extinction based on habitat loss have to take into consideration the range of the species and the extent of the habitat loss. Most of the birds of eastern North America lived across vast expanses. When farmers were cutting down trees in New England, those birds might be living happily in Pennsylvania and Ohio. When the settlers moved to Pennsylvania and Ohio, the birds could still live in Kentucky or Arkansas--and might even start recolonizing the forests that returned to the farmed-out regions of New England. In fact, many species of birds that live in the eastern United States can be found far north in Canada. If you consider only the birds that live in the forests of the eastern United States (between 11 and 28, depending on how strict you make the rules for membership in this club), the rate of extinction has actually been a bit higher than conservation biologists would predict.
I won't be at all surprised if various bloggers and pundits try to turn the rediscovery of the Ivory-billed woodpecker into a refutation of the idea that fragmentation leads to extinction. (I'll post links to them if I come across them this week.) But I will be surprised if these pseudoskeptics actually address Pimm's paper. The paper also makes an important point that Pimm has followed up on with more recent research: a lot of the world's biodiversity is very different from the robins and crows and other birds that I see out my window here in Connecticut. A lot of biodiversity is made up of species with relatively small ranges, living in the tropics where forests are currently being wiped out at a rapid rate. These species may be able to hang on for a few decades in relatively large fragments, Pimm argues, but they're waiting out a death sentence. While extinction rates among birds in North America may be relatively low, the same process appears to be causing a catastrophe in the tropics.
It is wonderful that so many people--scientists, government officials, environmental groups, private land owners, and obsessed birders--have helped rediscover the Ivory-billed woodpecker and may be able to help it thrive in one corner of its former range. But this good news shouldn't be misused to distort the big picture.


This morning the New York Times reported that the National Geographic Society has launched the Genographic Project, which will collect DNA in order to reconstruct the past 100,000 years of human history.
I proceeded to shoot a good hour nosing around the site. The single best thing about it is an interactive map that allows you to trace the spread of humans across the world, based on studies on genetic markers. I'm working on a book about human evolution (more details to come), and I've gotten a blinding headache trying to keep studies on Y-chromosome markers in Ethiopian populations and mitochondrial DNA markers on the Andaman islands and all the rest of the studies out there straight in my head. Thank goodness somebody put them all in one place.
Of course, the project is much more than a pretty map: it's an ambitious piece of research. It's basically the brain child of Spencer Wells, a geneticist who wrote the excellent Journey of Man a few years ago. As of now, only about 10,000 people's DNA has been analyzed in studies on human migrations. Wells wants to crank that number up to 100,000. He's going to gather DNA from indigenous populations, and he's also inviting the public to get involved. You can buy a DNA kit, and when you send it back to the Genographic Project, you'll get a report on "your genetic journey" and the information will get added to Wells's database.
When Wells's book came out, I reviewed it for the New York Times Book Review. I gave it a thumbs-up for the most part, although I felt that he had glided over the difficult ethical issues involved in these studies. The biotech industry is very interested in them, because they may point the way to new--and potentially profitable--medicines. An isolated population may have a pattern of genetic variation that sheds light on how a disease works its harm, or may have evolved a unique defense against a pathogen. When I wrote my review, Wells was a consultant to Genomics Collaborative, a private Massachusetts outfit that manages a medical collection of DNA and tissue samples from thousands of people around the world. It appears that he is no longer associated with them.
There's nothing wrong with this interest per se, but the fact is that it has led to some serious conflicts. Critics have wondered why companies should be able to potentially reap great reward from the DNA of indigenous people, particularly when so many of these groups face cultural extinction. DNA collections have in some cases ground to a halt because of these concerns. Wells didn't deal with these tricky issues in The Journey of Man, which I thought was a mistake. That sort of omission, I think, only makes people unnecessarily suspicious.
The Genographic Project poses these sorts of ethical challenges once again, and it's good to see that Wells and his colleagues have confronted them head on. They have posted a long FAQ answering some of the big questions. No pharmaceutical companies are paying for the research. Instead, the Waitt Family Foundation has ponied up the cash for the fieldwork (to a total of $40 million), and IBM is supplying technology and PR. Net proceeds from the sale of kits will go to education and conservation projects directed towards the indigenous peoples Wells will be working with. The identities of the DNA donors will remain confidential, but the database will not. Instead, it will be made free and public, along the lines of the Human Genome Project, so that any scientist can use it to study disease (or any other relevant question).
I'll bet that in a few years Wells will have another book to write from this experience. I hope that there's room in it this time for the ethics and the politics he's dealing with. That would help show just how relevant the wanderings of our ancestors 50,000 years ago are to our lives today.
Update 4 pm: Bad link fixed.


I have a weakness common to many bloggers--I like to check my site meter to see who's coming to my blog, and from where. Often I wind up discovering intriguing sites run by people whose interests run along the same lines as mine, such as evolutionary biology. Today, however, I was surprised to see a lot of traffic coming from Answers in Genesis, a creationist web site.
First off, greetings to all visitors who come through the link. I hope you find some interesting things here.
I decided to investigate the source of the link, and the results were interesting. It turns out that today Answers in Genesis put a new page up in which a writer attacks a recent post of mine about HIV. I explained how recent research on a virulent new strain of the virus relied on evolutionary biology to investigate its origins, and how understanding natural selection helps scientists put together strategies for vaccines, antiviral treatments, and other ways to fight the disease. And I pointed out that creationism appears nowhere in this research, providing no help in understanding this particularly nasty aspect of the natural world.
Answers in Genesis takes pity on me for not having come to them for enlightenment. "Had Zimmer checked this website first, he would have known that far from creationists ducking for cover at this blinding new evidence (as his article, especially its title, implies), we wrote an article years ago Has AIDS evolved which, in principle, raised and dealt with the points his piece makes."
It's important to address some of the erroneous claims raised in the piece, but it's not easy because they are mixed together with non sequiturs and other distractions. "Blinding new evidence"--quote unquote? Do those words appear in my blog? No. Does the writer attribute them elsewhere in his piece to someone else? No. He's just putting quotation marks up arbitrarily.
And then there's the claim that the piece he refers to raised and dealt with my points "in principle." The HIV research I'm discussing was published in 2005. The piece in Answers in Genesis came out in 1990. Did the folks at Answers in Genesis know then that this paper on HIV would be coming out in fifteen years? Could they foretell its contents so well that they could explain how creationism would actually guide the research? Again, no.
What Answers in Genesis actually said in 1990 was this: when scientists observe evolutionary change in viruses such as HIV, they have not found proof that viruses evolved into people. "Viruses can have no evolutionary relationship to any other form, and so whatever may have happened to say, the AIDS virus, has no relevance to the supposed history of truly living organisms in any case," Answers in Genesis claims.
To those who find this claim impressive, I would point out a couple things.
First of all, it evades the actual point of my post, which was that scientists who are working on HIV and other pathogens do not base any of their work on creationism of any flavor, including intelligent design. You can look in medical journals all you want, but it's just not there. Mutation, natural selection, genetic drift, and the adaptation to new host species are what's there. (See my follow-up post for some research on the deep history of HIV.)
Second of all, it's just flat-out wrong to say that "viruses have no evolutionary relationship to any other form." Scientists have documented many cases in which the DNA in viruses and the DNA in a bacterium, an animal, or some other organism show an evolutionary link. In some cases, viruses have permanently patched themselves into host genomes, including our own. In other cases, viruses appear to have evolved from a segment of DNA from some organism, having acquired mutations that allow them to break free and infect other hosts. In still other cases, the viruses have grabbed host genes along the way, turning into a veritable genetic mosaic. Viruses appear to have been present since the earliest stages of life on Earth and may have given rise to some of our most important cellular machinery. A quick search of the scientific literature brings up a wealth of papers addressing the intimate role of viruses in our evolution--here are just a few gems:
Viruses as the source of new genes in bacteria
A caterpillar virus that evolved from the wasps that parasitize caterpillars.
An analysis that indicates that some of the most essential enzymes in our cells come from viruses.
I heartily suggest that people read the Answers in Genesis piece on viruses--not for any scientific enlightenment, but as an example of the bait-and-switch tactics and omission of evidence that's necessary to create the impression that there has to be some "blinding" line dividing small and large scale evolutionary change. (Quotation marks mine!)


Two of the most important stages in hominid evolution were the origin of the entire hominid branch some six to seven million years ago and the first movement of hominids out of their African birthplace. This week we get a new look at both.
On the cover of Nature, the editors splashed the first reconstruction of Sahelanthropus, the oldest known hominid. The scientists who made the reconstruction used new material they found in the Sahara, adding to the material they described in their first report in 2002. There had been some argument over whether Sahelanthropus was an early hominid that looked a lot like other apes, or an ape that had a passing resemblance to hominids. The authors argue the former. They also claim that their new reconstruction provides new evidence that Sahelanthropus may have been bipedal. MSNBC reports that other scientists would prefer to see a nice pelvis or femur before accepting that claim.
Meanwhile, via John Hawks, National Geographic has a lovely display of some of the oldest hominid fossils found outside of Africa. Found in Georgia, they were initially assigned to Homo erectus, which is known to have spread all the way to Indonesia by 1.8 million years ago. But Homo erectus was a tall hominid with a big brain and a relatively flat face. The Georgia hominids, as you can see in NG's new reconstructions, were tiny and reminiscent of earlier hominids back in Africa. That raises the possibility, which I've discussed before, that the "hobbits" recently found in Indonesia (Homo floresiensis) might have been the relicts of a pre-Homo erectus migration of little folks out of Africa. (NG also has an article on the hobbits this month, by the discoverers.)
Unfortunately, there's also bad news about hominids these days--the hobbit bones, which were "borrowed" last fall, are a mess.
UPDATE: Minutes later...Man, Nature is hominid crazy this week. I totally missed another paper in this issue on a new skull from the Georgia hominids. What's most interesting about this individual is that it was old and toothless. It somehow survived for a long time after losing its teeth, which suggests it got a lot of help from its fellow hominids. Old age and extended family bonds are usually considered to have evolved later in hominid evolution, but this old gum-sucker suggests otherwise.

I'm guessing it's only a matter of time before this guy gets a show on cable. Bryan Fry is a biologist at the University of Melbourne in Australia, and he spends a lot of his time doing this sort of thing--messing with animals you really really shouldn't mess with. In addition to being telegenic, he rattles off those delicious Australian phrases, like, "No drama, mate." (Translation: No problem.)
While Fry is comfortable milking a king cobra in a jungle, he also has a lab-jockey side, using genomic technology to dredge up vast numbers of new snake venom genes. In tomorrow's issue of the New York Times, I have an article about Fry's latest research. He has offered a rough draft of the history of venoms--a 60 million year tale of gene recruitment and gene duplications and high-speed evolution. Understanding this history is a crucial part of Fry's long-term goal of turning venoms into new drugs--a tradition that has already given rise to billions of dollars of sales each year and many lives saved. That may put him off-limits for IMAX movies, but television seems inevitable.


Spring is finally slinking into the northeast, and the backyard wildlife here is shaking off the winter torpor. Our oldest daughter, Charlotte, is now old enough to be curious about this biological exuberance. She likes to tell stories about little subterranean families of earthworm mommies and grub daddies, cram grapes in her cheeks in imitation of the chipmunks, and ask again and again about where the birds spend Christmas. This is, of course, hog heaven for a geeky science-writer father like myself, but there is one subject that I hope she doesn't ask me about: how the garden snails have babies. Because then I would have to explain about the love darts.
Garden snails, and many other related species of snails, are hermaphrodites, equipped both with a penis that can deliver sperm to other snails and with eggs that can be fertilized by the sperm of others. Two hermaphroditic snails can fertilize each other, or just play the role of male or female. Snail mating is a slow, languorous process, but it also involves some heavy weaponry. Before delivering their sperm, many species (including garden snails) fire nasty-looking darts made of calcium carbonate into the flesh of their mate. In the 1970s, scientists suggested that this was a gift to help the recipient raise its fertilized eggs. But it turns out that snails don't incorporate the calcium in the dart into their bodies. Instead, love darts turn out to deliver hormones that manipulate a snail's reproductive organs.
Evolutionary biologists have hypothesized that this love dart evolved due to a sexual arms race. When a snail receives some sperm, it can gain some evolutionary advantage if it can choose whether to use it or not. By choosing the best sperm, a snail can produce the best offspring. But it might be in the evolutionary interest of sperm-delivering snails to rob their mates of their ability to choose. And love darts appear to do just that. Their hormones prevent a snail from destroying sperm with digestive enzymes, so that firing a love dart leads to more eggs being fertilized.
Recently Joris Koene of Vrije University in the Netherlands and Hinrich Schulenberg of Tuebingen University in Germany set out to see how this evolutionary arms race has played out over millions of years. They analyzed DNA from 51 different snail species that produce love darts, which allowed them to work out how the snails are related to one another. They then compared the darts produced by each species, along with other aspects of their reproduction, such as how fast the sperm could swim and the shape of the pocket that receives the sperm.
Koene and Schulenberg found that love darts are indeed part of a grand sexual arms race. Love darts have evolved many times, initially as simple cones but then turning into elaborate harpoons in some lineages. (The picture at the end of this post shows eight love darts, in side view and cross section.) In the same species in which these ornate weapons have evolved, snails have also evolved more powerful tactics for delivering their sperm, including increasingly complex glands where the darts and hormones are produced. These aggressive tactics have evolved, it seems, in response to the evolution of female choice. Species with elaborate love darts also have spermatophore-receiving organs that have long, maze-like tunnels through which the sperm have to travel. By forcing sperm to travel farther, the snails can offset the boost in sperm survival that the dart-delivered hormones provide.
Sexual conflict has been proposed as a driving force in the evolution of many species, and this new research (which is published free online today at BMC Evolutionary Biology) supports the idea that hermaphrodites are not immune to it. What's particularly cool about the paper is that all these attacks and counter-attacks co-vary. That is, species with more blades on their love darts tend to have longer reproductive tracts and more elaborate hormone-producing glands and so on. Only by comparing dozens of species were the researchers able to find this sort of relationship.
My wife always tells me that as a science writer, I ought to be well-prepared to give our children the talk about the birds and the bees. But I'm not sure the love darts would send quite the right message.



I'll be a guest tonight at 7 PM EST on NPR's talk show On Point, talking about the new wave of dinosaur science. Jack Horner will be on as well, delivering the dirt about his mind-blowing discovery of soft tissue from a T. rex. Should be interesting.
Update, 3/29/05 9:30 am: The show is now archived here. The links to the real player and windows media feeds are at the top of the page.


Today Gregor Mendel is a towering hero of biology, and yet during his own lifetime his ideas about heredity were greeted with deafening silence. In hindsight, it's easy to blame his obscurity on his peers, and to say that they were simply unable to grasp his discoveries. But that's not entirely true. Mendel got his ideas about heredity by experimenting on pea plants. If he crossed a plant with wrinkled peas with one with smooth peas, for example, the next generation produced only smooth peas. But when Mendel bred the hybrids, some of the following generation produced wrinkled peas again. Mendel argued that each parent must pass down factors to its offspring which didn't merge with the factors from the other parent. For some reason, a plant only produced wrinkled peas if it inherited two wrinkle-factors.
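Mendel's factors are easy to simulate. Here is a minimal sketch in Python of the cross he described, assuming a single gene with a dominant smooth allele (S) and a recessive wrinkled allele (s); the notation and the code are mine, purely for illustration, not Mendel's.

```python
# A toy monohybrid cross: S = smooth (dominant), s = wrinkled (recessive).
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Count offspring genotypes from every egg/pollen combination."""
    return Counter("".join(sorted(egg + pollen))
                   for egg, pollen in product(parent1, parent2))

f1 = cross("SS", "ss")           # pure smooth x pure wrinkled -> all Ss hybrids
f2 = cross("Ss", "Ss")           # hybrid x hybrid
smooth = sum(n for genotype, n in f2.items() if "S" in genotype)
print(f1)                        # all offspring carry one S: every pea is smooth
print(f2, smooth, f2["ss"])      # 1 SS : 2 Ss : 1 ss -> 3 smooth : 1 wrinkled
```

The hybrid-by-hybrid cross recovers the familiar 1 SS : 2 Ss : 1 ss genotypes, three smooth plants for every wrinkled one, which is exactly the reappearance of the wrinkled trait that Mendel had to explain.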
Hoping to draw some attention to his research, Mendel wrote to Karl von Nageli, a prominent German botanist. Von Nageli was slow to respond, and when he did, he suggested that Mendel try to get the same results from hawkweed (Hieracium), the plant that von Nageli had studied for decades. Mendel tried and failed. It's impossible to say whether von Nageli would have helped spread the word about Mendel's work if the hawkweed experiments had worked out, but their failure couldn't have helped.
After Mendel's death, a new generation of biologists discovered his work and, with the insights they had gathered from their own work, they realized he had actually been onto something. Pea plants really do pass on factors--genes--to their offspring, and sometimes the genes affect the appearance of the plants and sometimes they don't. Mendelian heredity, as it came to be known, was instrumental in the rise of the new science of genetics, and today practically every high school biology class features charts showing how dominant and recessive alleles are passed down from one generation to the next. Mendelian heredity also helped explain how new mutations could spread through a population--the first step in evolutionary change.
But what about that hawkweed? It turns out that Hieracium usually reproduces very differently from peas. A mature Hieracium does not need to mate with another plant. It does not even need to fertilize itself. Instead, it simply produces clones of itself. If von Nageli had happened to study a plant that reproduced like peas, Mendel would have had more luck.
Hawkweed raises an important question--one that is particularly important this morning. Does it tell us that Mendel was wrong? Should teachers throw their Mendelian charts into the fire? No. Mendel found a pattern that is widespread in nature, but not a universal law. Most animals are pretty obedient to Mendel's rule, as are many plants. Many algae and other protists also have Mendelian heredity, although many don't. Many clone themselves. And among bacteria and archaea, which make up most of the diversity of life, Mendelian heredity is missing altogether. Bacteria and archaea often clone themselves, trade genes, and in some cases the microbes even merge together into a giant mass of DNA that then gives rise to spores.
Today in Nature, scientists report another exception to Mendelian heredity. They studied a plant called Arabidopsis (also known as cress) much as Mendel did, tracing genes from one generation to the next. They crossed two lines of cress, and then allowed the hybrids to self-fertilize for two more generations. Some versions of the genes disappeared from the genomes of the plants over the generations, as you'd expect. But then something weird happened: in a new generation of plants, some of the vanished genes reappeared. The authors think that the vanished genes must have been hiding somewhere--perhaps encoded as RNA--and were then transformed back into DNA.
Is cress the tip of a genetic iceberg (to mix my metaphors hideously)? Only more experiments will tell. If it is more than just a fluke, it may turn out to play an important part in evolution, joining some other weird mechanisms, such as "adaptive mutation," in which bacteria crank up their mutation rate when they undergo stress. But hold onto those Mendelian charts. These cress plants are wonderfully weird--but no more wonderfully weird than hawkweed.


Panda's Thumb has an update on the ongoing drama over teaching creationism in public schools taking place in York, Pennsylvania. Last year a group of residents donated 58 copies of a creationist book called Of Pandas and People to the local school. The board of education reviewed them and gave them the green light. The books are now available in the school library.
Now someone has donated 23 science books, many of which deal with evolutionary biology, to see how the board deals with them. So far, the board has said it will review them as to their "educational appropriateness," and has left it at that.
It's an honor for my book Evolution: The Triumph of an Idea to be on a list that includes work by luminaries such as Stephen Hawking and Ernst Mayr. But if the donor wants to make his point--that evolution is well-established science--even more clearly, I'd suggest adding a few extra items: some of the leading college textbooks in biology, botany, microbiology, genetics, zoology, and developmental biology. Open any of them up and you're likely to find evolution acting as the backbone for all of the knowledge they have to offer. Would the board balk at them? If they did, you'd have to wonder whether they actually want their students to succeed in college.


Readers were busy this weekend, posting over fifty comments to my last post about HIV. Much of the discussion was sparked by the comments of a young-Earth creationist who claims that the evolutionary tree I presented was merely an example of microevolution, which--apparently--creationists have no trouble with. This claim, which has been around for a long time, holds that God created different "kinds" of plants and animals (and viruses, I guess), and since then these kinds have undergone minor changes, but have never become another "kind."
Some readers expressed frustration that the comments were getting side-tracked into arguments about creationism. I take a pretty relaxed attitude to what goes on in the comment threads, though. Part of that attitude, I'll admit, comes from the fact that I don't have the time to hover over the comments all day. But I also don't relish the thought of shutting down discussion, except of course when comments come from pornography-peddling bots.
I myself find that objections to evolution frequently turn into good opportunities to discuss interesting scientific research. For example, let's take the claim that an evolutionary tree of HIV merely documents microevolution.

Here's the tree from my last post, published in The Lancet. It compares a new aggressive, resistant strain of HIV to strains taken from other patients. These viruses all descend from a common ancestor. The descendants mutated, many mutants died, and some mutants thrived, thanks to their ability to evade the immune systems of their hosts. Strains that share a closer common ancestor fall on closer branches.
This new strain belongs to a group of strains known collectively as HIV-1. What happens if you compare HIV-1 to viruses found in animals? Is it impossible to link these viruses together on a single tree? Were they all created separately, each to plague its own host? That's what one might expect if the "microevolution yes, macroevolution no" idea were true. After all, viruses that infect different animals are generally different from one another. They can only survive if they have biological equipment suited to their host species, and different species offer different challenges to a virus.
It turns out that the same approach used to compare HIV strains found in individual people works on this larger scale. Scientists can draw a tree.
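The underlying logic is simple enough to sketch in a few lines of Python. The sequences below are invented for illustration--nothing like real HIV data--but they show the raw material a tree-building program works with: strains that diverged recently differ at fewer positions, and the program joins the closest pairs onto neighboring branches.

from itertools import combinations

# Toy gene fragments standing in for viral sequences (invented, not real data).
strains = {
    "index case": "ATGGCACGTTTAACG",
    "patient A":  "ATGGCACGTTTAACC",
    "patient B":  "ATGGCTCGTTTAACC",
    "chimp SIV":  "TTGACACCTCTAGCG",
}

def distance(a, b):
    # Fraction of aligned positions at which two sequences differ.
    return sum(x != y for x, y in zip(a, b)) / len(a)

for (name1, seq1), (name2, seq2) in combinations(strains.items(), 2):
    print(f"{name1} vs {name2}: {distance(seq1, seq2):.2f}")

# Tree-building methods (neighbor joining, maximum likelihood, and so on) start
# from a matrix of distances like this and group the most similar strains first.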
Here is the most up-to-date version of the tree, which appears in the latest issue of the Journal of Virology. The different branches of HIV-1 are marked in black. The red branches are viruses known as Simian Immunodeficiency Virus (SIV) found in certain populations of chimpanzees. The blue branches also represent chimp SIVs, but these are more distantly related to HIV-1. (A side note: the Lancet paper doesn't specify exactly which HIV-1 group the nasty new strain belongs to. That's a matter of ongoing research.)
It appears, then, that HIV-1 evolved into a human scourge not once but several times from chimp SIV ancestors. One likely route is the increasing trade in chimpanzee meat in western Africa. Hunters who get chimpanzee blood in their own wounds can become infected, and certain strains that manage to survive in our species can then evolve into better-adapted forms.
Of course, tracing back HIV-1 evolution this far leads to the question, where did the ancestors of HIV-1 come from? The authors of the review in the Journal of Virology take another step back, comparing chimpanzee SIV to SIVs from other monkeys. Does this enterprise now finally collapse? Does "microevolution" finally hit the wall, unable to explain "macroevolution"?
Nope. Here's what they find. The tree on the left is based on studies of one HIV/SIV gene called Pol, and the one on the right is based on another called Env. SIVcpz refers to chimp SIV, and the other abbreviations refer to SIVs found in various monkeys.
It turns out that different genes in chimp SIV have different evolutionary histories. This is no big surprise. Virologists have known for a long time that a single animal can get infected by two different viruses, which--on rare occasion--may combine their genetic material into a single package. The scientists hypothesize that chimp SIV evolved from SIV found in red-capped mangabeys as well as SIV that infects greater spot-nosed, mustached, and mona monkeys. Just as humans hunt chimpanzees, chimpanzees hunt and eat monkeys. So they may have been infected in this manner.
You can take the same walk back in time with any virus that's been studied carefully--or any species of animal or plant. Take us. Scientists publish evolutionary trees all the time in which they compare the DNA of individual people. They also use the same methods to demonstrate that chimpanzees are our closest living relatives, that primates descend from small shrew-like mammal ancestors, that mammals and other land vertebrates descend from fish, and so on. (I don't have time this morning to grab examples of these trees, but if I have time tonight I will.) Certainly there are parts of these trees that are still difficult to make out. DNA sometimes evolves so much that a gene can wind up obscuring its own history, for example. But scientists have never hit the wall that creationists claim exists.


You may have heard last month's news about an aggressive form of HIV that had public health officials in New York scared out of their professional gourds. They isolated the virus from a single man, and reported that it was resistant to anti-HIV drugs and drove its victim into full-blown AIDS in a matter of months, rather than the normal period of a few years. Skeptics wondered whether all the hoopla was necessary or useful. The virus might not turn out to be all that unusual, some said; perhaps the man's immune system had some peculiar twist that gave the course of his disease such a devastating arc. But everyone did agree that the final judgment would have to wait until the scientists started publishing their research.
Today the first data came out in the Lancet. One of the figures jumped out at me, and I've reproduced it here. The scientists drew the evolutionary tree of this new strain. Its branch is marked here as "index case." The researchers compared the sequence of one of its genes to sequences from other HIV strains, looking to see how closely related it was to them. The length of the branches shows how different the genetic sequences are from one another. The tree shows that this is not a case of contamination from some other well-known strain. Instead, this new strain sticks way out on its own. The researchers say that they're now working their way through a major database of HIV strains maintained at Los Alamos to find a closer relative.
This tree is a road map for future research on this new strain. It will allow scientists to pinpoint the evolutionary changes caused by natural selection or other factors that made this strain so resistant to anti-HIV drugs. Scientists will also be able to rely on evolutionary studies of other viruses. Often drug-resistant pathogens have to pay a reproductive cost for their ability to withstand attack from our medicines. Under normal conditions, they reproduce more slowly than non-resistant strains. But scientists have also found that pathogens can then undergo new mutations that compensate for this handicap and make them as nasty as their non-resistant counterparts. It's possible that the new strain has undergone compensatory mutations, which might make it such a threat.
So here we have evolutionary trees and natural selection at the very core of a vitally important area of medical research. Yet we are told again and again by op-ed columnists and certain members of boards of education that evolution is nothing but an evil religion and that creationism of one flavor or another is the future of science. You'd expect then that Intelligent Design or some other form of creationism would help reveal something new about this HIV. But it has not. That should count for something.
Update: 4/12/05 Greetings, visitors from Answers in Genesis. You may be interested in this new post.


I can't remember the first time I saw the dinosaur fossils at the American Museum of Natural History, but they've been good friends for over thirty years. We've all changed a lot over that time. I've grown up and gotten a bit gray, while they've hiked up their tails, gotten a spring in their step, and even sprouted feathers.
I plan to take my daughters to see the new exhibit at AMNH, Dinosaurs: Ancient Fossils, New Discoveries, this spring, and it will be strange to watch them get to know these dinosaurs all over again. In January I got a chance to slink around the exhibit while it was still under construction when I paid a visit to Mark Norell, the museum's top dinosaur guru. I asked Norell what he sees as the biggest questions about dinosaurs today, and we spoke about everything from the evolution of birds to just how wrong Jurassic Park turned out to be. Conversations with him and several other leading dinosaur experts led to my cover story in the new issue of Discover.


Last week my editor at the New York Times asked me to write an article about the evolution of crying, to accompany an article by Sandra Blakeslee on colic. Both articles (mine and Blakeslee's) are coming out tomorrow. As I've written here before, human babies are by no means the only young animals that cry, and there's evidence that natural selection has shaped their signals, whether they have feathers or hair. Among animals, there's a lot of evidence that infants can benefit from manipulating their signals to get more from their parents. On the other hand, evolution may sometimes favor "honest advertisements" that prevent offspring from deceiving their parents. Human crying may be the product of the same conflict of evolutionary interests between parents and children.
This was a tricky article to write, because on the one hand there are some very interesting ideas to examine, but on the other hand, they're only hypotheses that haven't been put to much of a test in humans. I've come across two big papers in the past couple years, this one by Jonathan C.K. Wells in the Quarterly Review of Biology in 2003 and another by Joseph Soltis in the latest issue of Behavioral and Brain Sciences. They offer and evaluate a number of hypotheses for human crying. They even give some thought to colic, that maddening far end of the crying spectrum where perfectly healthy babies cry for hours, turning their parents into shambling wrecks. According to one hypothesis, colic is just a case of deceptive signals from child to mother, carried to an absurd extreme.
These are just preliminary hypotheses, though, and they face a lot of tough tests. As I mention in the article, chimpanzees show no sign of colic, which makes you wonder how deep the evolutionary roots of colic could go if it is not found among our closest living primate relatives. What I didn't have room to mention in the article were some comments published in response to Soltis's paper in Behavioral and Brain Sciences by Hillary Fouts of NIH and her colleagues. They study foraging societies in Africa, and in their years of observing how these people raise kids, they haven't seen any colic either.
One way to account for this pattern is the possibility that colic is a disease of affluence--an adaptation turned maladaptive in the modern age, like a taste for sweets that was once satisfied by fruits and can now be drowned in a sea of high-fructose corn syrup. Wells even suggests that the modern Western food supply may have cut down the cost of crying, making it easier for kids to cry more. In foraging societies, mothers nurse their children up to four times an hour, while mothers in farming and industrial societies nurse their babies far less. Babies also cry to be held (perhaps for warmth and protection from attack), and while foragers hold their babies constantly, Westerners keep their babies separated from them much of the time in cribs, carriages, and car seats. Wells suggests that when a colicky baby sends its cranked-up signal and doesn't get the right response, it cranks up even more.
Again, this is only a hypothesis--a starting point for investigation. Hillary Fouts and her colleagues show what this sort of investigation can look like. In the latest issue of Current Anthropology, they report on a study about the end of crying, comparing how babies respond to weaning in two cultures. Both cultures are found in the same rain forests of the Central African Republic. One group lives as foragers, and the other as farmers. The foragers nurse their children many times a day and wean them by gradually tapering off nursing. The farmers, on the other hand, cut off their children abruptly--in part because the women need to get back to working in their fields.
Fouts and her colleagues found that the farmer children fussed and cried a lot around the time of weaning, while the forager children didn't show much difference. But the researchers kept following the children and found something interesting: the farmer children stopped fussing before long and then cried a lot less in general. The forager children, on the other hand, kept crying more than the farmer children long after they had been weaned.
Fouts and her colleagues see a subtle strategy at work here. The farmer children may cry in response to weaning because it represents the end of a reliable milk supply and perhaps even because weaning raises the odds that their mothers will get pregnant with another child that will compete for the mother's investment. But once the farmer children are weaned and it is clear that their cries will not do them any more good, they don't waste any further effort on the tears.
The forager children, on the other hand, don't get that clear signal of an impending cut-off, and so they don't fuss and wail more in response. But it's also important to bear in mind that in the foraging community, the children are always around some relative who will be quick to pick up a child. So even after weaning, crying still has some value as a signal, and so the children keep it up.
What I find particularly interesting about this study is that it suggests that we shouldn't use evolution to manufacture a false sense of nostalgia. Just because our ancestors lived in a particular way doesn't mean that the way we live now is automatically bad. Our evolutionary heritage is not completely fossilized; it can in some respects alter itself in response to the conditions in which we grow up. If colic follows this pattern, it is not a cause for collective Western guilt that we don't live as foragers. Instead, it's a call to understand the evolutionary roots of the behavior of our children--both for their well-being and our own sanity.


I've got an article in today's New York Times about animal personalities.
Update: I'm not ashamed to admit I'm a regular visitor to the gossip site Gawker. But I have to say I was surprised to see the personality article turn up there. Will hordes of New York hipsters discover the strange joys of evolution, of comparative psychology? We can only hope.


In my last post, I traced a debate over the evolution of language. On one side, we have Steven Pinker and his colleagues, who argue that human language is, like the eye, a complex adaptation produced over millions of years through natural selection, favoring communication between hominids. On the other side, we have Noam Chomsky, Tecumseh Fitch, and Marc Hauser, who think scientists should explore some alternative ideas about language, including one hypothesis in which practically all the building blocks of human language were already in place long before our ancestors could speak, having evolved for other functions. In the current issue of Cognition, Pinker and Ray Jackendoff of Brandeis responded to Chomsky, Fitch, and Hauser with a long, detailed counterattack. They worked their way through many features of language, from words to syntax to speech, that they argued show signs of adaptation in humans specifically for language. The idea that almost all of the language faculty was already in place is, they argue, a weak one.
Chomsky, Fitch, and Hauser have something to say in response, and their response has just been accepted by Cognition for a future issue. You can get a copy here. Chomsky, Fitch, and Hauser argue that Pinker and Jackendoff did not understand their initial paper, created a straw man in its place, and then destroyed it with arguments that are irrelevant to what Chomsky, Fitch, and Hauser actually said.
It was exactly this sort of confusion about language that Chomsky, Fitch, and Hauser believe has dogged research on its evolution. The first step to resolving this confusion, they argue, is to categorize the components of language. They suggest that scientists should focus on two categories, which they call the Faculty of Language Broad (FLB), and the Faculty of Language Narrow (FLN). FLN includes those things that are unique and essential to human language. FLB includes those things that are essential to human language but are not unique. They might be found in other animals, for example, or in other functions of the human mind.
Chomsky, Fitch, and Hauser argue that we don't actually know yet what belongs in FLN. The only way to find out is to explore the human mind and the minds of animals. But they argue that the road to an understanding of how language evolved must start here. Simply calling all of language an adaptation is a vague and fruitless statement, and one that leaves biologists and linguists unable to work together.
In their effort to portray language as a monolithic whole utterly unique to humans, Pinker and Jackendoff offer up evidence that Chomsky, Fitch, and Hauser consider beside the point. Consider the fact that the human brain shows a different response to speech than to other sounds. Chomsky, Fitch, and Hauser argue that you can't use the circuitry of the human brain as a simple guide to the evolution of its abilities. After all, some people who suffer brain injuries can lose the ability to read while retaining the ability to write. It would be silly to say that this is evidence that natural selection has altered the human brain because reading provides some reproductive advantage. Animals, Chomsky, Fitch, and Hauser argue, are a lot better at understanding the features of speech sounds than Pinker and Jackendoff give them credit for. In fact, they claim that Pinker and Jackendoff are behind the curve, relying on research that's years out of date. Given all that's been discovered about animal minds, Chomsky, Fitch, and Hauser argue that we should assume that any feature of language can be found in some animal until someone shows that it is indeed unique to humans.
There's a lot that's fascinating in all of the papers I've described in these two posts, but I find them frustrating. Pinker and Jackendoff may have erected a straw man to attack, but I think they can to some extent be forgiven. The 2002 paper by Chomsky, Fitch, and Hauser was murky, and their new paper, which is supposed to clarify it, is a bit of a maze as well. Consider the "almost-there" hypothesis, which they offered up in their 2002 paper. It's conceivable that FLN contains only one ingredient--a process called recursion, which I describe in my first post. If that's true, the evolution of recursion may have brought modern language into existence. On the one hand, Chomsky, Fitch, and Hauser claim to be noncommittal about the almost-there hypothesis, saying that we don't yet know what FLN actually is. On the other hand, they claim there is no data that refutes it. Doesn't sound very noncommittal to me.
I'm also not sure how meaningful the categories of FLB and FLN are. Consider the case of FOXP2, a gene associated with human language. Chomsky, Fitch, and Hauser point out that other animals have the gene, and that in humans its effects are not limited to language (it's important in embryo development, too). So it belongs in FLB, because it's not unique enough to qualify for FLN.
It is true that other animals have FOXP2, but in humans, it has undergone strong natural selection and is significantly different from the versions found in other animals. And just because it acts on the human body in other ways doesn't mean that natural selection couldn't have favored its effect on human language. Chomsky, Fitch, and Hauser grant that features of language that belong to FLB may have also evolved significantly in humans. But if that's true, then deciding exactly what's FLN and what's not doesn't seem to have much to offer in the quest to understand the evolution of human language.
For now, the main effect these papers will have will probably be to guide scientists in different kinds of research on language. Some scientists will follow Pinker and Jackendoff, and try to reverse-engineer language. Others will focus instead on animals, and will probably find a lot of new surprises about what they're capable of. But until they come to a better agreement on what adaptations are, and the best way to study them, I don't think the debate will end any time soon.


Earlier this month I wrote two posts about the evolution of the eye, a classic example of complexity in nature. (Parts one and two.) I'd like to write now about another case study in complexity that has fascinated me for some time now, and one that has sparked a fascinating debate that has been playing out for over fifteen years. The subject is language, and how it evolved.
In 1990, Steven Pinker (now at Harvard) and Paul Bloom (now at Yale) published a paper called "Natural Language and Natural Selection." They laid out a powerful argument for language as being an adaptation produced by natural selection. In the 1980s some pretty prominent scientists, such as Stephen Jay Gould, had claimed that the opposite was the case--namely, that language was merely a side effect of other evolutionary forces, such as an increase in brain size. Pinker and Bloom argued that the features of language show that Gould must be wrong.
Instead, they maintained, language shows all the classic hallmarks of an adaptation produced by natural selection. Despite the superficial diversity of languages, they all share a basic underlying structure, which had first been identified by Noam Chomsky of MIT in the 1960s. Babies have no trouble developing this structure, which you'd expect if it were an inborn capacity rather than a cultural artifact.
This faculty of language could not simply be a side-effect of brain evolution, because it is so complex. Pinker and Bloom compared language to the eye. No physical process other than natural selection acting on genetic variation could have produced a set of parts that interacted so closely to make vision possible. And you can recognize this adaptiveness by its similarity--in some ways--to man-made cameras. Likewise, language is made up of syntax, the anatomy for producing complex speech, and many other features. Pinker and Bloom argued that natural selection favored the rise of language as a way for hominids to exchange information--whether that information was about how to dig up a tuber with a stick, or about how a neighboring band was planning a sneak attack. There was nothing unusual about the evolution of language in humans; the same biological concepts that explain the evolution of echolocation in bats could explain it.
Pinker and Bloom went on to publish a number of papers exploring this idea, as well as some popular books (The Language Instinct and How the Mind Works from Pinker, and Descartes's Baby from Bloom.) But they by no means spoke for the entire community of linguists. And in 2002, one particularly important linguist weighed in: Noam Chomsky.
It was the first time Chomsky tackled the evolution of language in a serious way, which is surprising when you consider how influential he had been on the likes of Pinker and Bloom. He had offered some vague musings in the past, but now he offered a long review in the journal Science, which he coauthored with two other scientists. One was Marc Hauser of Harvard, who has carried out a staggering amount of research on the mental life of primates, and the other was Tecumseh Fitch of St. Andrews University in Scotland, who studies the production of sound by humans and other animals. (You can read more about Fitch's work in an essay I wrote for Natural History.)
The Hauser et al paper is not an easy read, but it has its rewards. The researchers argue that the only way to answer the question of how language emerged is to consider the parts that make it up. They see it as consisting of three systems. Our ability to perceive the sounds of speech and to produce speech ourselves is one (the input-output hardware, as it were). Another is a system for understanding concepts. And the final ingredient of language is the computation that allows the brain to map sounds to concepts.
Hauser et al see three possible explanations for how this three-part system evolved. One possibility is that all three parts had already evolved before our ancestors diverged from other apes. They introduce this hypothesis and then immediately abandon it like a junked car. The second possibility they introduce could be called the uniquely-human hypothesis: the language faculty, including all its components, has undergone intense natural selection in the human lineage. Pinker and Bloom's argument fits this description. The final hypothesis Hauser et al consider is that almost everything essential to human language can also be found in other animals. Perhaps a minor addition to the mental toolkit was all that was necessary to produce full-blown language.
The authors point out that a lot of the data that would let them choose between the three have yet to be gathered. Nevertheless, they devote most of their attention to the almost-everything hypothesis, and it's clearly the one they favor.
They argue that studies on animals already show that they have a lot of the ingredients required for language. Monkeys, for example, can comprehend some surprisingly abstract concepts. They can understand number and color, for example. As for the input-output hardware for human language, it's not all that special either. Monkeys are so good at recognizing human speech sounds that they can tell the difference between two sentences spoken in different languages. And as for speech production, the researchers argue that the essential anatomy is not unique to humans, either.
Humans, for example, depend on a low larynx to give them the range of sounds necessary for speech. But did the larynx drop down as an adaptation for speech? In an earlier paper, Tecumseh Fitch showed that other species have lowered larynxes, including deer. What purpose does it serve for these nonhumans? Fitch suggests that it began as a way to deceive other animals, by making an individual sound larger than it really is. Human ancestors might have evolved a lower larynx for this function, and only later did this anatomy get co-opted for producing speech.
Hauser et al make a bold suggestion: perhaps only one thing makes human language unique. They call this special ingredient recursion. Roughly speaking, it's a process by which small units--such as words--can be combined into larger units--such as clauses--which can be combined into larger units still--sentences. Because units can be arranged in an infinite number of ways, they can form an infinite number of larger units. But because this construction follows certain rules, the larger units can be easily understood. With recursion, it's possible to organize simple concepts into much more complex ones, which can then be expressed with the speech-producing machinery of the mouth and throat.
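Recursion is easier to see than to define, so here is a toy illustration in Python--a made-up three-word grammar, not drawn from any real linguistic model. A sentence can contain a clause that is itself a whole sentence, so a handful of units can nest into an unbounded variety of larger ones:

import random

def noun():
    return random.choice(["the hunter", "the whale", "the child"])

def verb():
    return random.choice(["sees", "believes", "says"])

def sentence(depth=0):
    # The self-reference is the recursion: a sentence may embed another sentence.
    if depth < 2 and random.random() < 0.6:
        return f"{noun()} {verb()} that {sentence(depth + 1)}"
    return f"{noun()} {verb()} {noun()}"

print(sentence())
# e.g. "the child says that the hunter believes that the whale sees the child"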
According to the almost-everything hypothesis, the components of language may not have all gradually evolved together as an adaptation. Instead, much of the faculty was already in place when recursion evolved. It's even possible, they suggest, that recursion didn't evolve as part of language at all, but for another function, such as navigation. By happenstance, it also fit together with the other elements of language and voila, we speak.
The Hauser et al paper got a lot of attention when it first came out, such as this long article in the New York Times. Steven Pinker offered a few cryptic comments about how Chomsky's huge reputation didn't leave much room for those who accepted some of his ideas but dismissed others.
But he would not be content with a couple bite-size quotes. Working with Ray Jackendoff of Brandeis University, he began work on a long reply. It has only now appeared, over two years later, in the March issue of Cognition. (But you can grab it here, on Pinker's web site.) This 36-page retort is remarkable in the sustained force with which it blasts Hauser et al. It's not just a regurgitation of 15-year-old ideas; Pinker and Jackendoff marshal a lot of evidence that has only been gathered recently.
While Hauser et al may claim that speech perception is not all that special, Pinker and Jackendoff beg to differ. They point out that we use different brain circuits to perceive speech sounds and nonspeech, and that certain kinds of brain damage can cause "word deafness," which robs people of the ability to perceive speech but not other sounds. Babies also prefer speech to non-speech at an early age, and when they show this preference, language-related parts of their brain become active.
What about speech production? Again, Pinker and Jackendoff argue that humans show signs of adaptation specifically to produce speech. Humans learn to speak by imitation, and are astonishingly good at it. But humans are not good at imitating just any sound. A parrot, on the other hand, can do just as good a job at saying Polly as at doing an impression of a slamming door. As for Fitch's ideas about the lowering of the larynx, even if they were true, Pinker and Jackendoff don't think they go against their hypothesis. Even if the larynx had an earlier function, that doesn't mean that natural selection couldn't have acted on it in the human lineage. Bird wings got their start as forelimbs that reptiles used for walking on the ground, but these limbs obviously underwent intense natural selection for flight.
Pinker and Jackendoff then explore some other aspects of language that Hauser et al didn't address at all. The first is the fact that language is built from a limited set of sounds, or phonemes. Phonemes are crucial to the infinite capacity of language, because they can be combined in so many ways. But they also require us to understand rules about how to pronounce them. Pinker and Jackendoff illustrate this with the suffix -ed: in the words walked, jogged, and patted it has the same meaning but has three different pronunciations. As far as Pinker and Jackendoff can tell, primates have no capacity that can be compared to our ability to use phonemes. As for why phonemes might have evolved in the ancestors of humans, they point to some fascinating models produced by Martin Nowak of Harvard. Nowak argues that natural selection would favor just a few phonemes because they would be easy to distinguish from one another. Human language lets us say thousands of words without having to understand thousands of individual speech sounds.
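The combinatorial payoff is easy to put in numbers. The figures below are purely illustrative--they are not Nowak's actual parameters--but they make the point: a small, reusable inventory of speech sounds opens up an enormous space of distinguishable words.

phonemes = 30      # a rough, English-sized inventory of speech sounds (assumed)
word_length = 5    # phonemes per word (assumed)
print(phonemes ** word_length)   # 24,300,000 possible five-sound words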
Research on language genes is also consistent with a uniquely-human hypothesis, according to Pinker and Jackendoff. A gene called FOXP2, for example, is essential for language, and any mutation to it causes difficulties across the board, from articulating words to comprehending grammar. What's more, comparisons of the human FOXP2 gene with its counterparts in other animals show that it has been the target of strong natural selection perhaps as recently as 100,000 years ago. If the only new feature of language to evolve in humans was recursion, then you would not expect FOXP2 mutations to do anything except interfere with recursion. They also point out that broad comparisons of the genes in humans, chimps, and mice suggest that some genes involved in hearing may have undergone intense natural selection in our lineage. It's possible that these genes are involved in speech perception.
Pinker and Jackendoff even take issue with the one part of language that Hauser et al granted as being unique to humans: recursion. Recursion is just a basic logical operation, which you can find not just in human language but in computer programs and mathematical notation. But all humans have a capacity for one special sort of recursion: the syntax of human language. Pinker and Jackendoff declare that the case for the almost-everything hypothesis is "extremely weak."
At this point, I might have expected their rebuttal to come to a close. But instead, it takes a sudden turn. Pinker and Jackendoff find it puzzling that Chomsky would offer the almost-everything hypothesis when the facts go against it and when Chomsky himself had laid the groundwork for the uniquely-human hypothesis. For an answer, they burrow into Chomsky's head. They offer a survey of Chomsky's last decade of research, which has been dedicated to finding the indispensable core of language. As Pinker and Jackendoff describe it, Chomsky's search has led him to a single operation that combines items, which I'll nickname "Merge."
I won't go into all the details of their critique here, but the upshot is that Pinker and Jackendoff aren't buying it. By reducing the essence of language to repeated rounds of Merge, Chomsky has to push aside all the things about language that linguists have been spending decades trying to figure out, such as phonemes and the order of words in sentences. The reason that they bring up Chomsky's recent work (which Chomsky calls the Minimalist Program) is that they think it is the source of his views on the evolution of language. Our pre-language ancestors may have simply been missing one thing: the Merge operation.
Pinker and Jackendoff are appalled by this. In fact, they hint that some of Chomsky's ideas about language have a creationist ring to them. Chomsky has said in the past that in order for language to be useful at all, it has to be practically perfect. How then, he wonders, could it have evolved from simpler precursors? Chomsky even likens language to a bird's wing, writing that "a rudimentary wing, for example, is not "useful" for motion but is more of an impediment. Why then should the organ develop in the early stages of evolution?"
"What good is half a wing?" is a question often asked by those who reject evolution. But it is a question with several possible answers, which scientists are currently investigating. Feathers may have evolved initially as insulation. Even stumpy wings could have helped feathered dinosaurs race up into trees, as it helps birds today. Likewise, Pinker and Jackendoff argue that language evolved gradually to its most elaborate form. In fact, imperfect languages still exist. Pidgins allow people to communicate but lack fixed word order, case, or subordinate clauses. Pinker and Jackendoff argue that modern language may have emerged from an ancient pidgin through evolutionary fine-tuning.
In sum, Pinker and Jackendoff conclude, their ideas about the origin of language fit with the evidence from both linguistics and biology, and those offered by Chomsky, Fitch, and Hauser don't.
Now what? Do we have to wait another two years to see whether Chomsky, Fitch, and Hauser crumble under this attack or have something to say in response?
As I'll explain in my next post, the answer, fortunately, is no.


In my last post, I went back in time, from the well-adapted eyes we are born with, to the ancient photoreceptors used by microbes billions of years ago. Now I'm going to reverse direction, moving forward through time, from animals that had fully functioning eyes to their descendants, which today can't see a thing.
This may seem like a ridiculous mismatch to my previous post. We start out with the rise of eyes, a complex story with all sorts of twists and turns, with gene stealing, gene borrowing, gene copying; and then we turn to a simple tale of loss, of degeneration, of a few genes mutating the wrong way and--poof!--billions of years of evolution undone.
In fact, loss is never such a simple matter. I can illustrate this fact with two disparate beasts: fleas and cavefish.
Cavefish were familiar to Darwin, as were the many other blind cave dwellers, such as salamanders and insects. Darwin saw cavefish as yet another example of an animal carrying around the vestiges of its ancestors, just as we carry around the stump of a tail. As for how cavefish lost their eyes, he set natural selection aside. Darwin could not imagine how a fish in a cave would get any benefit from eyes that did a worse job than its ancestors' eyes. "I attribute their loss solely to disuse," he wrote. In invoking disuse, Darwin may well have been thinking along the lines of his precursor, Lamarck. As fish stopped relying on their eyes in the dark, somehow their eyes degenerated, and that degeneration was passed down to the next generation of fish.
Once scientists began to decipher the molecules of heredity, such an explanation became obsolete. Instead, some scientists translated the notion of "disuse" into the language of mutations. Like any animal, a cavefish has a small but real chance of undergoing a mutation to its DNA. In some cases, these mutations can impair the fish's eyes. In a population of surface-dwelling fish, this sort of mutation would probably make it hard for a fish to find food, and might even make it an easy target for predators. The chances of the fish passing down that mutant gene to a new generation of fish would be pretty slim. But in a cave, such a mutation would have no effect on the reproductive fortunes of a fish. Over time, the population of cavefish would accumulate lots of eye mutations, until their eyes were rendered useless.
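Here is a minimal simulation of that idea, with invented numbers (the mutation rate, population size, and fitness cost are arbitrary assumptions, not measured values): when blindness carries a cost, broken copies of an eye gene stay rare; when it carries none, they pile up over the generations.

import random

def simulate(generations=500, pop=200, cost_of_blindness=0.0, mutation_rate=0.005):
    genes = [0] * pop                      # 0 = working eye gene, 1 = broken
    for _ in range(generations):
        # Reproduction: fish carrying broken eye genes pay the fitness cost, if any.
        weights = [1.0 - cost_of_blindness if g else 1.0 for g in genes]
        genes = random.choices(genes, weights=weights, k=pop)
        # Mutation: a small chance each generation of breaking a working copy.
        genes = [1 if random.random() < mutation_rate else g for g in genes]
    return sum(genes) / pop                # fraction of broken copies

print("cave (no cost):   ", simulate(cost_of_blindness=0.0))
print("surface (costly): ", simulate(cost_of_blindness=0.5))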
But this "neutral mutation" hypothesis isn't the only possibility. Scientists have also proposed an "energy conservation" hypothesis. Mutations that prevent cave fish from developing eyes let them save energy, boosting their odds of survival.
Scientists have tested this hypothesis in recent years by studying the fish Astyanax mexicanus. You can find perfectly normal populations of this fish in surface waters in the U.S., but if you go into caves, you can also find some 30 populations that are blind. This transformation has happened overnight, biologically speaking: scientists estimate that it was only 10,000 years ago that populations of Astyanax moved into the caves. One vivid demonstration of just how recent this move was is the fact that a cave fish and a surface fish can mate and produce healthy hybrids. The lion's share of research on Astyanax has been carried out in the laboratory of William Jeffery at the University of Maryland, and he offers an excellent summary in a paper in press in the Journal of Heredity.
Much of Jeffery's work has gone into tracking the development of the fish from eggs. The most startling thing he has found is that cavefish grow eyes for quite a long time. Just as in surface fish, the brains of cave fish embryos bulge out to the sides, stretching into stalks that end in cups. A simple retina and lens begin to form, and growing nerves begin to link the retina to the visual centers of the fish brain. After about a day, however, the cavefish eye and surface fish eye begin to take different paths. The cave fish eye fails to develop an iris or a cornea, for example. Still, many parts of the cave fish eye continue to grow as their cells multiply.
These findings alone call into question both the neutral mutation hypothesis and the energy conservation hypothesis. If mutations were building up in the cave fish genome, you wouldn't expect that the fish could advance so far in the development of their eyes. And if energy conservation was the sole advantage driving the evolution of blindness, you wouldn't expect the fish to keep producing new eye cells, even as the eye begins to deteriorate.
Even the degeneration of the eye challenges both of these hypotheses. The eye doesn't collapse into a stew of chaos; it is dismantled in a stately choreography. The cells in the lens release some signal that instructs other eye cells to begin to commit suicide. In surface fish, the lens sends signals that do just the opposite, allowing the eye to develop fully. Jeffery and his colleagues found that if they transplanted just the lens of a surface fish into the eye of a cave fish, the cave fish grew a completely normal eye. What's more, the transplant triggered new nerve fibers to project from the retina to the brain, and the part of the cave fish's brain that handles vision even grew. It's possible that a transplanted lens allows a cave fish to see. Despite being blind, the cavefish still retains its original circuit of eye-building genes.
Jeffery and his colleagues have also tracked the degeneration of the eye at the level of genes. The neutral mutation hypothesis would lead you to expect that cave fish would express fewer genes in the eye than surface fish, because many of them would have been destroyed by mutations. But this is not the case, Jeffery and his colleagues have found. Instead, they're starting to identify some genes that make more of their proteins in the eyes of cave fish than in those of surface fish, and even some genes that aren't active in the eyes of surface fish at all.
One particularly important protein in the development of cavefish eyes is known as Hedgehog. In all vertebrates, Hedgehog plays a vital role in the development of the eye, starting at its earliest stage. Initially, the cells that will give rise to the eyes form a single cluster. Cells in the midline of the embryo start producing Hedgehog, which somehow signals the cells in the middle of this eye cluster to stop developing. As a result, only the cells on the far sides continue to develop, thus producing two separate eyes. Mutations that interfere with the production of Hedgehog can cause a gruesome birth defect in humans called cyclopia, in which a single cyclops-like eye develops.
Cave fish have evolved in the opposite direction: they produce more Hedgehog, rather than less. The extra protein stops the development of a wider expanse of the original eye-cell cluster, leaving few cells to progress. Jeffery and his colleagues confirmed this by boosting the production of Hedgehog in surface Astyanax. Not only do they develop smaller eyes, but they suffer the same lens-directed degeneration seen in cavefish. This means that the degeneration of cavefish eyes requires cells beyond the eyes to help coordinate the process.
What's most remarkable about this choreography is that it has evolved again and again. Studies on Astyanax DNA suggest that populations of surface fish have repeatedly invaded caves, and each time they have gone blind. Jeffery and his colleagues have started comparing the development of embryos from different populations, and they find the cavefish have evolved blindness through the same patterns of gene activity.
This parallel evolution is hardly what you'd expect from a random blast of neutral mutations. Nor does Jeffery believe that energy conservation can explain it. Males and females show no difference in the development of their eyes, despite the fact that females need a lot more energy to make their eggs. Likewise, some populations of cave fish get lots of energy because they live under colonies of bats that can drop food and guano into the water. Despite these luxurious conditions, these fish are no different from their leaner cousins.
Jeffery thinks that Hedgehog may be the key to understanding what's really driving the evolution of cavefish. Like many genes involved in development, Hedgehog has many different jobs. It is known to be essential for the development of tastebuds, for example, as well as teeth and the bones that make up the head. And in cave fish, all of these features are significantly different from surface fish. It's possible that these changes are adaptations that help the cave fish feed more efficiently. These changes were only made possible by cranking up the production of Hedgehog. A side effect of this increase was the destruction of the cave fish eyes. But because eyes aren't essential in the dark, this wasn't such a big price to pay. If Jeffery is right, Darwin's real mistake with cave fish wasn't falling back on a Lamarckian explanation. It was not recognizing how powerful natural selection could be.
Jeffery and his colleagues have managed to learn so much about the evolution of cavefish eyes because they figured out how to turn Astyanax into a laboratory organism, which can be studied as carefully as a fruit fly or a lab rat. This sort of transformation takes many years, and only a few species have what it takes. Many other animals have lost their eyes, but in most cases, scientists can only glean less direct clues. Still, the stories they have to tell can be just as interesting. Most interesting of all is the fact that different evolutionary forces seem to have been at work.
Case in point: fleas.
Scientists know very little about the vision of fleas. As insects, fleas have inherited the standard insect eye, which consists of slender columns tightly packed together. But this standard insect eye has undergone drastic changes in fleas. Some fleas have what look like simple eyespots. Others seem to lack any eye at all. To learn about this transformation, a team of biologists from Brigham Young University has compared fleas to their relatives, which still have eyes.
This wouldn't have been possible even a few years ago, because scientists have only recently worked out the "flea tree." Fleas evolved from a group of insects with particularly sharp vision. Their cousins include scorpionflies, which rely on their image-forming eyes to help them scavenge dead insects. Their closest relatives are "snow fleas" (Boreidae). These wingless insects live in mountains, where they feed on moss. They have small eyes, but can see well enough to jump away if you try to catch them. So it appears that fleas are the product of a long-term evolution towards simpler eyes.
The scientists used this tree to track the evolution of some of the molecules that are essential for vision. Known as opsins, they respond to light by triggering a chemical reaction that sends a signal from the eye to the brain. Opsins can be sensitive to different colors, depending on their shape, which depends in turn on the DNA sequence in their genes. The scientists isolated the gene for green opsins from 11 species of scorpionflies, snow fleas, and true fleas.
The scientists then compared the DNA sequences for signs of change. A mutation to an opsin gene may have no effect on the opsin molecule itself, or it may alter its structure dramatically. The difference depends on where in the DNA sequence that mutation strikes. The scientists found that most changes that occurred during the evolution of fleas had no effect on the actual opsins. They confirmed this by using the DNA sequence of the opsin genes to create computer models of the opsin molecules themselves. Even in fleas, the green opsin molecule has basically the same structure as in scorpionflies--despite their radically different eyes.
Just because a gene hasn't changed for millions of years doesn't mean that it hasn't been experiencing natural selection. The scientists found evidence that the opsin gene has been experiencing a special kind of natural selection in fleas and their relatives, known as purifying selection. Purifying selection occurs if even the slightest change to the structure of a molecule puts a serious dent in the reproductive success of an animal. The fact that fleas have experienced purifying selection on their opsin gene means that it remains essential to their survival. (The details of their work appear in a paper in press at the journal Molecular Biology and Evolution.)
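The basic distinction the scientists relied on is easy to illustrate. Here is a sketch using a few entries from the standard genetic code (just enough for the example, and not drawn from the flea paper itself): some DNA changes leave the protein untouched, others swap in a different amino acid, and purifying selection shows up as a shortage of the second kind.

# A handful of codons from the standard genetic code.
CODON_TABLE = {
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "ATT": "Ile", "ATG": "Met", "GAA": "Glu",
}

def classify(codon, mutant):
    before, after = CODON_TABLE[codon], CODON_TABLE[mutant]
    if before == after:
        return "synonymous: the protein is unchanged"
    return f"nonsynonymous: {before} becomes {after}"

print(classify("CTT", "CTC"))   # a third-position change, silent
print(classify("CTT", "ATT"))   # a first-position change, alters the protein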
So what on Earth are the fleas doing with their opsins? The scientists doubt that the fleas are using them in their eyes. They point out that flea eyes are covered over in a tough layer of chitin, and they lack the lenses and other structures that would let them see. But in many animals, ranging from pigeons to salmon to butterflies, opsins have also been found outside the eye. In some animals, they grow inside the brain, while in others they grow on the abdomen or other parts of the body. Recent studies suggest that these opsins set the pace for biological clocks by registering the change of light from day to night.
This brings us back around to the very origin of eyes, which I described in my first post. Long before full-fledged eyes evolved, light-sensitive molecules may have existed in microbes, allowing them to change their movements during night and day. These molecules may have been incorporated into early eyes, making it possible for animals to see. But this transition didn't mean that photoreceptors could no longer serve their original function. Early insects may have used opsins both within their eyes to see and outside of their eyes as biological clocks. Later, some lineages of insects lost their eyes. Some may have lost them in dark caves. Fleas, on the other hand, lost their eyes as they became parasites. Instead of navigating through a complex landscape in search of a particular prey, they just hopped from one host to the next. But they still relied on opsins to run their biological clocks. The authors point out that scientists have also found opsins in other animals that have lost their eyes. The animals? None other than Astyanax.
What's particularly remarkable about the new study is how strongly the flea opsin resisted any evolutionary change--even after it was no longer being used in the flea eye. The molecule needs the same functional structure for both jobs. As I mentioned at the beginning of my previous post, Charles Darwin recognized that the complexity of the eye might appear to pose a major challenge to his theory. To some people, it still does; they argue that the components of the eye cannot function on their own, and so they could never have existed on their own. By this reasoning, it would be impossible for one of these components--an opsin, for example--to do anything useful if it wasn't inside an eye.
The flea apparently sees things differently.


(The first of a two-part post)
The eye has always had a special place in the study of evolution, and Darwin had a lot to do with that. He believed that natural selection could produce the complexity of nature, and to a nineteenth century naturalist, nothing seemed as complex as an eye, with its lens, cornea, retina, and other parts working together so exquisitely. The notion that natural selection could produce such an organ "seems, I freely confess, absurd in the highest possible degree," Darwin wrote in the Origin of Species.
For Darwin, the key word in that line was seems. He realized that if you look at the different sort of eyes out in the natural world, and consider the ways in which they could have evolved, the absurdity disappears. The objection that the human eye couldn't possibly have evolved, he wrote, can hardly be considered real.
The more scientists study the eye, the more they recognize that Darwin was right. This is not to say that they know everything about how the eye evolved. Evolutionary biology is not an automatic answer machine that can instantly tell you every detail about how eyes--or any other organ--evolved. Instead, scientists study eyes of different animals, the proteins they are made of, and the genes that store their recipe. They come up with hypotheses about how evolution could have produced these results. Those hypotheses then point the way to new experiments. In this way, evolutionary biology is no different from geology or meteorology, or any other science that illuminates the natural world.
To be precise, I should say that scientists study the evolution of "the eye." There are millions of different eyes (and other light-detecting organs), each built by a different species from its own unique set of genes. Closely related animals tend to have similar eyes, because they descend from recent ancestors. Some scientists study how eyes can adapt over a few million years to the special circumstances of a particular species. Other scientists step a little further back, to look at how the different types of eyes have evolved from simpler precursors. And other scientists step even further back in time, to find clues about where those simpler precursors came from. In this post, I will move back through time through these different stages of eye evolution (a la Richard Dawkins's The Ancestor's Tale.)
Humans have what's known as a camera eye. Light first passes through a cornea, which refracts the light. It then passes through a lens, which refracts the light further, so that it forms a focused image on the retina. We are primates, and so it's not surprising that all other primates have a similar type of eye. But different primates have important differences in the shape of their eye. Nocturnal primates have wider, more curved corneas than primates that are active during the day. A wider cornea lets nocturnal primates make the most of the moonlight by allowing more of it into the eye. Primates active during the day benefit from small flat corneas probably because the lens can sit further forward in the eye, producing a sharper image. This arrangement doesn't let as much light in, but during the daytime, that's no great loss. Chris Kirk of the University of Texas analyzed primate eyes in the December 2004 issue of The Anatomical Record (he has posted the paper on his web site).
For the most part, nocturnal and diurnal primates fit the same patterns as other mammals. But monkeys and apes (including humans) turn out to have extremely small, flat corneas, even compared to other primates that are active in the daytime. Kirk argues that this particular group of primates (called anthropoids) has experienced natural selection that has produced even sharper vision than found in other mammals active in the daytime. Other aspects of the anthropoid eye also make it sharp, including its fovea, a small spot on the retina that's incredibly dense with photoreceptors. In fact, anthropoids are matched only by raptors for their sharp vision. It's possible that our ancestors evolved such sharp eyes for hunting insects; monkeys and apes are also extremely social animals, and they rely on their keen eyes to look at one another and pick up subtle cues in their faces. Our ability to make sophisticated tools may have been made possible by the evolution of tiny corneas.
Changing the shape of an eye requires changing the molecules that make it up. Molecular fine-tuning can also alter an eye's ability to block out UV rays, to refract light at different angles, or to become more sensitive to different colors. Despite the fact that all vertebrates share the same basic eye plan, you can find a wide range of molecules inside them. Some are found only in fish, some only in lizards, some only in mammals.
How does one group of animals evolve one of these new molecules? One way is to borrow it. Joram Piatigorsky of the National Eye Institute and his colleagues have identified many of the molecules that make up the lens and cornea of humans and other animals. These molecules are practically identical to molecules found elsewhere in the body. Some are essential for the development of the head in an embryo. Others protect our cells from heat and other stresses; still others detoxify poisons that would otherwise build up in the blood.
Originally, the evidence indicates, many of the molecules found in eyes today were only produced in other parts of the body. But then, thanks to a mutation, the same gene began producing its molecule in the developing eye. The molecule just happened to have physical properties that made it well suited to being in an eye. In later generations, natural selection favored mutations that made it work better in the eye.
But this new job in the eye may have posed a trade-off for the molecule's original job. Further fine-tuning may have only been possible when the gene went through a particularly drastic (but common) mutation: it duplicated. Now one copy of the gene could adapt to the eye, while the other continued specializing in its original job. (I wrote an essay a couple years ago about some of Piatigorsky's work in Natural History.)
Darwin didn't know about gene sharing or gene duplication, but he still managed to make some important observations about how the human eye could have evolved from a simpler precursor. Early eyes might have been nothing more than a patch of photosensitive cells that could tell an animal if it was in light or shadow. If that patch then evolved into a pit, it might also have been able to detect the direction of the light. Gradually, the eye could have taken on new functions, until at last it could produce full-blown images. Even today, you can find these sorts of proto-eyes in flatworms and other animals.
The closest invertebrate relatives of vertebrates fit nicely into Darwin's predictions. Amphioxus, which looks like a sardine with its head cut off, lacks a true brain or camera eyes. But the front end of its nerve cord is slightly swollen, and is built by many of the same genes that build a human brain. What's more, it grows a pit lined with light-sensitive cells, which it seems to use to navigate through the water. The genes that build this pit are nearly identical to the ones that build our own.
The fact that Amphioxus has such a simple precursor to the vertebrate eye might suggest that this organ evolved from scratch. Yet eyes can be found on many other animals--which was how Darwin first figured out what a precursor to the vertebrate eye might have looked like. Eyes can be found in insects, squid, and many other animals. Did they evolve independently?
The answer is yes and no. In the 1990s, Walter Gehring of the University of Basel and his colleagues discovered an essential eye-building gene called Pax-6 that was shared by insects and humans. If he inserted the human version of the gene into a fly larva, he got fly eyes popping up all over the fly's body. Gehring has proposed that Pax-6 is a master control gene, switching on an entire circuit of eye-building genes. In insects and in humans (and in all of the animals that share a common ancestor), this circuit builds eyes. But in each lineage, a different set of genes has been incorporated into the circuit, so that it can build eyes as different as the compound eye of an insect and the camera eye of a human.
The simplest explanation for so many animals sharing this same circuit is that they all inherited it from their common ancestor--a small worm-like creature known as a bilaterian that might have lived 570 million years ago. Exactly what sort of eye these genes produced in the Precambrian mists of time isn't clear, though. And until last fall, another feature of the eye didn't seem to fit this hypothesis: its photoreceptors. Invertebrate eyes and vertebrate eyes use different photoreceptors to sense light. But researchers have found that both kinds of photoreceptors grow on a humble animal known as a ragworm, which is believed to have branched off very early in the evolution of bilaterians. It's possible that the ancestor of living bilaterians produced both kinds of photoreceptors. One kind was lost in the vertebrate lineage, and the other was lost in the lineage that led to insects and other invertebrates with full-blown eyes.
Yet eyes are not limited to bilaterians. Jellyfish belong to a branch of animals known as cnidarians that split off from the ancestors of bilaterians some 600 million years ago. Some species have simple photoreceptors, while others have full-blown camera-eyes hanging from their tentacles. Biologists want to know whether these eyes evolved independently, or share some of the ancestral toolkit that produced human eyes and fly eyes. One hint that they share a common heritage is the fact that some of the genes that jellyfish use to build eyes bear a striking similarity to Pax-6 and other genes that build bilaterian eyes. On the other hand, most cnidarians (such as sea anemones and corals) don't have eyes. What's more, jellyfish eyes are pretty weird compared to bilaterian eyes--for one thing, they don't wire up to a brain. The larvae of one species grow photoreceptors that don't even connect to a neuron. The photoreceptors link instead to hair-like structures in the same cell. Presumably light triggers these cells to flail their hairs to make the larva swim.
In years to come, the search for the roots of eye evolution will push even further back in time. In a paper in press at the Journal of Heredity, Walter Gehring points out that the first component of animal eyes to have evolved was the photoreceptor--a molecule that could catch light and turn it into a signal. One model for the origin of animal photoreceptors comes from colonies of algae, many of which have "eyespots" that allow them to swim towards the light so that they can photosynthesize. Perhaps early animals lived in colonies as well and had similar eyespots. Later, these simple photoreceptors evolved pigments and other molecules that helped capture more light, and eventually became able to form images.
But Gehring also proposes a weird but compelling alternative: our ancestors stole their eyes. Many times over the course of evolution, organisms have been engulfed by larger organisms, and the two have become integrated into a single being. Our cells, for example, contain mitochondria that we rely on to generate energy; originally, these were free-living oxygen-consuming bacteria. Another important fusion took place over two billion years ago, when bacteria that could carry out photosynthesis were consumed by an amoeba-like host. The bacteria then became a structure called the chloroplast, which can be found today in trees and other plants, as well as various sorts of algae. Incredibly, some of these algae were engulfed by other algae, which also came to depend on the photosynthesis carried out by the bacteria. Gehring likens these organisms to Russian dolls, with the original bacteria nestled deep within other organisms.
It's likely that before the bacteria were consumed again and again, they had already evolved a light-sensing molecule that helped them harness sunlight--perhaps by acting as a biological clock. The algae that devoured the bacteria may have retained the ability to sense light for the same purpose. Gehring points out that one group of these algae--dinoflagellates--have fused with corals, jellyfish, and other animals. It's possible that early animals incorporated the genes for light-sensing into their own genomes. If he's right, we gaze at the world with bacterial eyes.
Coming next: Once the eye evolves, what does it take for it to disappear?


Over the next week or so, I'm going to post a couple of two-part posts. I've gotten mildly obsessed with two big topics in evolution: eyes and language. There's been so much fascinating work done on both subjects in the past year or so that a single post just won't do for either of them. I know that the blog genre lends itself well to quick hits, but I'm going to stretch things a bit. We'll see how it works.


Readers of the Loom may recall an earlier post about how creationists (including proponents of Intelligent Design) misleadingly cite peer-reviewed scientific research in order to make their own claims sound more persuasive. I mentioned that when the scientists themselves find out their research has been misrepresented, they groan and protest.
In case you thought I was exaggerating, check out National Academy of Sciences president Bruce Alberts's letter to the editor of the New York Times in response to Michael Behe's recent creationist Op-Ed. Behe quoted Alberts describing his early impressions of the cell as a beautiful machine--which Behe takes as evidence that it really is a machine built by someone.
Alberts responds:
In “Design for Living” (Op-Ed, Feb. 7), Michael J. Behe quoted me, recalling how I discovered that “the chemistry that makes life possible is much more elaborate and sophisticated than anything we students had ever considered” some 40 years ago. Dr. Behe then paraphrases my 1998 remarks that “the entire cell can be viewed as a factory with an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines.”
That I was unaware of the complexity of living things as a student should not be surprising. In fact, the majestic chemistry of life should be astounding to everyone. But these facts should not be misrepresented as support for the idea that life's molecular complexity is a result of “intelligent design.” To the contrary, modern scientific views of the molecular organization of life are entirely consistent with spontaneous variation and natural selection driving a powerful evolutionary process.
In evolution, as in all areas of science, our knowledge is incomplete. But the entire success of the scientific enterprise has depended on an insistence that these gaps be filled by natural explanations, logically derived from confirmable evidence. Because “intelligent design” theories are based on supernatural explanations, they can have nothing to do with science.
Bruce Alberts
President
National Academy of Sciences
Nuff said.
[Thanks to Pharyngula among others.]


Growing up as I did in the northeast, I always assumed that the really weird life forms lived somewhere else--the Amazonian rain forest, maybe, or the deep sea. But we've got at least one truly bizarre creature we can boast about: the star-nosed mole. Its star is actually 22 fleshy tendrils that extend from its snout. For a long time, it wasn't entirely clear what the moles used the star for. The moles were so quick at finding food--larvae, worms, and other creatures that turn up in their tunnels--that some scientists suggested that the star could detect the electric fields of animals.
That idea hasn't panned out, but the truth has turned out to be just as exotic. As I write in tomorrow's issue of the New York Times, the star is the most sensitive touch organ known to science. It is studded with 25,000 touch-sensitive nerve organs, which channel their sensations into 100,000 large nerve fibers (more than in your entire hand). These nerves then carry the signals to the brain, much of which is dedicated to interpreting what the star feels. As Ken Catania of Vanderbilt University reports in a paper appearing in the current issue of Nature, this heavy-duty wiring produces record-setting speed. Once the star-nosed mole comes into contact with food, it needs just a fifth of a second to gobble it down. (The article includes a sequence of frames from one of these filmed feasts.)
As some readers of the Times may notice, this mole article appears in the science section a day after an op-ed column appeared in the editorial section promoting Intelligent Design. Michael Behe, a Lehigh University biologist, claims that evolutionary biologists have not offered hypotheses for how complex things evolve in nature. Given this supposed lack of explanations, and given the supposedly obvious signs of design in biology, Behe concludes that life must be the product of an Intelligent Designer.
Behe is incorrect. In fact, evolutionary biologists have put together hypotheses for many complex systems, which they have published in leading peer-reviewed biology journals. The immune system is one example, which I blogged about in December. The star of the star-nosed moles is another. Ken Catania's hypothesis for its origin starts with the observation that the star is not quite as unique as it may seem at first sight. The touch-sensitive organs it uses (called Eimer organs) are found on the noses of other moles, albeit in far lower densities. What's more, coast moles, close relatives of star-nosed moles, have small, pipe-shaped swellings at the very tips of their noses, which resemble the star on a star-nosed mole when it is still an embryo.
The star, Catania argues, evolved in a coast-mole-like ancestor. The swellings became larger, the nerves became denser, and the brain dedicated more space to processing the star's signals. Natural selection favored this trend, according to Catania, because the star-nosed moles moved from dry habitats to wetlands, which are loaded with small insect larvae. Star-nosed moles added these small prey to their diet, alongside bigger prey such as earthworms and crickets. The star provided benefits to the mole long before it had taken the full-blown form it has today. The more time the star-nosed moles shaved off each capture, the more calories they could take in each second.
Catania's hypothesis takes into account all of the evidence he and others have gathered about star-nosed moles--their behavior, the microscopic structure of their star, the architecture of their brains, their ecology, and the same evidence in closely related moles. It builds on what scientists already know about variation, inheritance, and natural selection. As a hypothesis, it's open to testing, based on further observations of star-nosed moles and their relatives. And that's what Catania is doing.
As for corresponding published papers that use Intelligent Design to interpret the star-nosed mole, they do not exist. The closest I can find are some comments from Answers in Genesis. On their web site, they claim that Catania's hypothesis cannot be right because it is based on "the discredited idea of Embryonic Recapitulation." This claim is based on the fact that the nineteenth century biologist Ernst Haeckel doctored some pictures of embryos in order to fit his own notion about how evolution progressed in certain directions. Nevertheless, the scientific consensus today--based on over a century of research since Haeckel's day--holds that changes in the way embryos develop can lead to dramatic evolutionary change. (Here's a good account of the current understanding.)
The Answers in Genesis site then asks, "Why would a primitive mammal suddenly start to develop such a specialized appendage? If it was already successfully hunting food without the star, what was the evolutionary trigger for the stars development?" Catania has already laid out this part of his hypothesis: the ancestors of star-nosed moles moved into wetlands, where variations that helped them feed on insect larvae could get them more food and boost their odds of reproducing. Other mole species, living in dry soil, didn't have this incentive. What's more, the delicate star would be damaged scraping against the hard tunnels dug by other moles.
These are some of the reasons why Catania and other scientists that I interview are not swayed by the sorts of claims made by Answers in Genesis or Michael Behe (as evidenced by the lack of peer-reviewed papers that they have inspired). Instead, what excites these scientists are the common themes that arise when they study the origins of different complex traits. Consider, for example, the adaptive immune system. I won't go into detail here about the latest thinking about how it evolved (I already have here). But I will point out that it seems to have followed the same trajectory as the star-nosed mole. It did not come out of nowhere. Parts of the system--including organs, cells, and receptors--were already in place millions of years earlier, often serving different functions than they do today. These parts were then modified, connected together in new ways, and gradually took on the form they have today. The same goes for the star-nosed mole and many other case studies in complexity--even including artificial life.
In the interest of full disclosure, I cannot end this post before confessing that the evolution of complexity was not the only thing I found fascinating in working on this article. Searching for a point of comparison for the speed of star-nosed moles, I wound up at the web site for the International Federation of Competitive Eating. Did you know someone holds the record for eating cheesecake? Eleven pounds in nine minutes. Now that's bizarre.


Thanks to the many people who left comments on my recent post about some recent work on the intersection of stem cells and human evolution. I noticed that several people expressed variations on the same theme, one which deserves a response. To recap briefly: a great deal of research indicates that a couple million years ago, our hominid ancestors lost the ability to make one of the main sugars that coat mammal cells, called Neu5Gc. This ancient chapter in our history turns out to have a big effect on current research on embryonic stem cells. When human stem cells are raised on a substrate made of mouse cells or calf serum, they absorb the nonhuman Neu5Gc sugars, which end up on their surfaces. Humans carry antibodies to Neu5Gc, and these antibodies attack stem cells raised on animal substrates. As a result, existing cell lines fed on this stuff would likely be destroyed if they were implanted in a person.
Some readers questioned whether the research I discussed actually supported evolution and not creationism.
Samuel asked: "If humans are missing this sugar, and the rest of the animal kingdom has it, wouldn't that make humans unique? Could this evidence also support the creationist theory?"
In a similar vein, Graham Mitchell emailed me, writing, "I do find it interesting that you detail how the loss of this sugar in hominids occurred maybe three million years ago and that most other mammals (including primates) still have the sugar, but yet you interpret this evidence as making Intelligent Design *less* likely. As I was reading along, I thought to myself, 'Wow, more evidence that a Designer created humans to be distinct from the animals.'"
I'm happy to respond to these messages, but it's tricky. Neither what Samuel calls "the creationist theory" nor what Graham calls "Intelligent Design" offers an explicit hypothesis about how this aspect of our biology came to be. Was mankind created 6,000 years ago without Neu5Gc? Or did a Designer (I'll use Graham's capital D) shut down Neu5Gc 2.5 million years ago in hominid ancestors of humans, with the intent of creating a special species? This is the sort of vagueness that leaves practicing biologists cold when it comes to creationism.
The main point Samuel and Graham are making is that the lack of Neu5Gc appears to be the work of a Designer/Creator who made humans unique from animals. But the lack of Neu5Gc does not actually make us unique. Note that in my original post, I did not say that Neu5Gc is found "in the rest of the animal kingdom," as Samuel put it. I said it was found on every mammal except humans. That's an important difference. The vast majority of animal species--not to mention the millions of species of fungi, plants, and bacteria--do not have Neu5Gc. What's more, we have a related sugar, Neu5Ac, which all other mammals also have and which non-mammal organisms do not.
So our lack of Neu5Gc cannot be interpreted as an example of how the Designer made us unique. On the other hand, there's an obvious (and testable) hypothesis about this evidence that emerges from the theory of evolution. Namely, Neu5Gc and Neu5Ac evolved in the ancestors of living mammals--probably from a similar sugar that can be found on the cells of non-mammals. Then in the human lineage, one of those sugars--Neu5Gc--was lost due to a mutation.
Mind you, evolutionary biologists don't deny that humans are unique--in the sense that you can scan the human genome and find stretches of DNA not found in any other species. But being unique is not all that...well, not all that unique. A sea slug is also unique, because it also has stretches of DNA not found in any other species. But we would be surprised to hear a sea slug declare that its genetic makeup is evidence that a Designer created it to be distinct from the animals (and not merely because sea slugs are a pretty quiet bunch).
In fact, the study of evolution is very much concerned with how humans--and other species--became unique. Evolutionary biologists look at the biological processes we see at work today and aim to infer how they could have produced the diversity of life that surrounds us (and includes us). Some aspects of the history of life are harder than others to study, because there's less evidence at hand. But in the case of Neu5Gc, some things are nicely clear. The gene that makes Neu5Gc in other mammals is not missing from our genomes. It's still sitting there. But right in the middle of it is a distinctive sequence of DNA that belongs to a sort that geneticists understand quite well, called an Alu element.
Alu elements get copied by our cells and those copies get inserted all over our genomes. Scientists can watch the process up close by putting molecular tags on Alu elements in a colony of cells, and then watching them spread over time. In the real world, a fertilized egg may wind up with a new Alu element, which then gets spread to every cell in the baby's body. Out of every 200 births, one child is born with a new Alu element. Sometimes they wind up wedged in the middle of a gene, disrupting its ability to make a protein. In some cases, this leads to a disease. More often, though, the new Alu element ends up somewhere in the genome where it doesn't do much harm. As a result, Alu elements piled up in the ancestors of living humans. The human genome has 1.2 million Alu elements, making up about 10% of its entire sequence of DNA.
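That "about 10%" figure is easy to check with back-of-envelope arithmetic. Here is a quick sketch in Python; the roughly 300-base-pair length of a typical Alu element and the roughly 3.1-billion-base-pair size of the human genome are round numbers I'm supplying for illustration, not figures from the post.

```python
# Back-of-envelope check of the "about 10%" figure. The Alu length and
# genome size below are assumed round numbers, not values from the post.
alu_count = 1_200_000      # Alu elements in the human genome (from the post)
alu_length_bp = 300        # assumed typical length of an Alu element
genome_size_bp = 3.1e9     # assumed size of the human genome

fraction = alu_count * alu_length_bp / genome_size_bp
print(f"Alu elements cover roughly {fraction:.0%} of the genome")
# prints a value close to 12%, in line with the "about 10%" estimate
```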
Alu elements all work the same way, and all have the same basic genetic sequence. But they are not identical. That's because each time an Alu element gets copied, there's a chance the copying machinery of our cells will make a mistake and introduce a mutation. So Alu elements can be grouped together into families, related by common descent, which in turn can be related to other Alu families. Humans have some unique Alu elements that emerged after we split from other apes, and we also have Alu elements that we inherited from our common ancestor with other apes.
The gene that makes Neu5Gc is interrupted by an Alu element. It's the same Alu element in the same place in the same gene in every person ever studied. Scientists can document Alu elements interrupting genes today, either in laboratory experiments or in the maternity ward.
How do we explain this pattern?
On the one hand, there's evolution. The basic idea behind evolution is that mutations have continuously emerged in DNA, leading to variations between individuals in a population. Some of these variations may give individuals an edge in reproducing. If those variations can be inherited, they will gradually become more common. Some populations may split off from the other members of their species and become a new species of their own, but they still carry the adaptive mutations from their common ancestors. And over very long periods of time, many mutations can arise leading to new complex traits.
The case of Neu5Gc is completely consistent with evolution. Scientists may not yet understand what advantage, if any, came from the mutation that robbed us of this sugar, but the fact that our knowledge is incomplete is not a compelling argument against evolution. After all, scientists don't even know what Neu5Gc and Neu5Ac do today. That doesn't mean the sugars don't exist, or that they aren't important. (As I mentioned before, if you take away these sugars from a mouse through genetic engineering, you end up with a dead mouse.)
Is all of the evidence I've presented consistent with "creationist theory" or "Intelligent Design"? Perhaps someone can offer an explanation that can make it fit, but I don't see it. To create this part of our genome, the Designer would have had to insert an Alu element by hand at a particular place in our distant ancestor's genome in order to produce this change. And if Intelligent Design really is supposed to be a scientific theory, that would mean that every time an Alu element winds up somewhere else in the genome, it's the work of a Designer. (You can't just pick and choose the cases that the Designer is responsible for.) And that means that every time someone dies of an Alu-related cancer or other disorder, it's because the Designer invisibly slipped into the body of his or her parents and monkeyed with an egg or a sperm to make sure that person would die. I look forward to reading the scientific paper documenting that.




Last October, word leaked out that something might be seriously amiss with the embryonic stem cell lines approved by President Bush for federally funded research. Today, the full details were published online in Nature Medicine. It's an important paper, and not only because it points out a grave problem with the current state of stem cell research. It also shows how scientists who do cutting-edge medical research are looking back at two million years of human evolution to make sense of their work. At a time when antievolutionists are trying hard to wedge creationist nonsense into science classrooms, this is something worth bearing in mind.
This new research focuses on the sugar molecules that coat our cells like frosting on a cake. Two of these sugars are common on virtually every mammal. They are abbreviated as Neu5Ac and Neu5Gc. These sugars are clearly essential to survival. When scientists altered the genes of mice so that they couldn't produce them, the mice died. The sugars probably have several vital roles. They probably work as identity badges, judging from the fact that mammal cells also have receptors that can lock onto these particular sugars and only these particular sugars. Cells need to recognize each other for many reasons, such as when they are developing together to form a complex organ like a liver or a brain.
A surprise was in store for scientists who began looking for these two sugars in the human body. They found plenty of Neu5Ac, but they found practically no Neu5Gc. This is no minor difference, abbreviations aside. Neu5Gc is very common in other mammals. In gorillas, our close relatives, it makes up from 20% to 90% of this group of sugars. In us, zip. We are unique, in fact, among mammals for lacking this molecule.
Ajit Varki of UCSD led the research that established that Neu5Gc is missing from humans. He decided to figure out how it disappeared. Other mammals make Neu5Gc by tinkering with Neu5Ac. The enzyme that does the actual tinkering is known as CMAH. This enzyme is pretty much identical in mammals ranging from chimpanzees to pigs. In humans, Varki and his colleagues discovered, the gene for CMAH is broken. It produces a stunted version of the enzyme, which can't manufacture Neu5Gc, and so our cells end up with none of these sugars on their surfaces.
The CMAH gene is broken the same way in every person that has been studied. That strongly suggests that all living humans inherited the mutation from a common ancestor. Since chimpanzees, our closest living relatives, have a working version of the gene, that ancestor must have lived less than six million years ago. Scientists can even say exactly how the gene mutated. A parasitic stretch of DNA known as an Alu element produced a copy of itself which got randomly inserted in the middle of the CMAH gene.
But Varki didn't stop here. He joined with experts on extracting ancient biomolecules from fossils. They ground up bits of bones of Neanderthals, which split off from the ancestors of living humans about 500,000 years ago. In 2002 they reported that they found Neanderthal Neu5Ac, but no Neu5Gc. Neanderthals probably inherited the same mutation as we carry. Thus, the mutation must have struck hominids before 500,000 years ago.
To narrow their estimate further, the researchers looked closely at the Alu element that had caused the mutation. They compared its sequence to the original version from which it had been copied. They also looked at related versions in other primates. Studies have shown that this parasitic DNA mutates at a relatively steady rate. So by comparing the mutations in the different versions, they could estimate how old the sugar-disrupting mutation was. They came up with 2.7 million years ago, plus or minus 1.1 million years. While this estimate spans a couple million years, it still falls nicely within the range suggested by earlier research.
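The logic behind that estimate is a simple molecular-clock calculation: if this kind of parasitic DNA picks up substitutions at a roughly steady per-site rate, the fraction of sites that differ between two copies tells you how long ago they parted ways. Here is a minimal sketch of that arithmetic; the mutation count, sequence length, and rate below are placeholders of mine, not the values Varki's team used.

```python
# Minimal molecular-clock sketch. All inputs are illustrative placeholders,
# not the data from the Varki study.
def clock_age_years(differing_sites, total_sites, rate_per_site_per_year):
    """Estimate elapsed time from sequence divergence, assuming substitutions
    accumulate at a steady per-site rate."""
    divergence = differing_sites / total_sites
    return divergence / rate_per_site_per_year

# e.g. 8 substitutions along a 300-base-pair Alu element, at an assumed
# rate of 1e-8 substitutions per site per year
age = clock_age_years(differing_sites=8, total_sites=300,
                      rate_per_site_per_year=1e-8)
print(f"estimated age: {age / 1e6:.1f} million years")
```

A single pairwise comparison like this is cruder than what the researchers actually did; as described above, they compared several related versions of the element rather than just one pair.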
This study was the first to pinpoint a mutation that produced a significant biological change in the hominid lineage. Just three years later, we have hundreds to choose from. But the loss of Neu5Gc still remains an important discovery because it is a loss. As I wrote in an earlier post, losing genes may actually be as important to human evolution as gaining new ones. Losing genes can sometimes release us from restraints that prevented our ancestors from exploring new ways of living. Exactly what advantage giving up Neu5Gc provided isn't clear, according to Varki, but he has some suspicions. Parasites have evolved receptors that can grab onto both sugars, an important step in invading a cell. It's possible that losing one of these sugars helped our ancestors become more resistant to some disease.
Varki also points out that the elimination of Neu5Gc might have been particularly important for the hominid brain--which, perhaps not coincidentally, went through a huge expansion roughly around the time that the Neu5Gc mutation occurred. In other animals, Neu5Gc is abundant on the cells of most organs, but exceedingly rare in the brain. It is very peculiar for a gene to be silenced in the brain alone, which suggests that Neu5Gc might have some sort of harmful effect there. Once a mutation knocked out the gene altogether, hominids didn't have to suffer with any Neu5Gc in the brain at all. Perhaps Neu5Gc limited brain expansion in other mammals, but once it was gone from our ancestors, our brains exploded.
This is not merely a just-so story. In Varki's lab, researchers are breeding mice that can't produce Neu5Gc and others that make too much. If Varki is right, the altered mice should wind up with altered brains.
Now for the stem cells.
Varki has been puzzled by the fact that some scientists over the years have reported detecting tiny amounts of Neu5Gc in humans. If, as Varki has found, the genetic machinery for making this sugar is broken beyond repair, how are they getting it? He and his researchers have spent several years attacking the problem. Their experiments indicate that we pick up the sugars from the foods we eat--in particular beef and other meat from mammals. Our cells absorb the foreign Neu5Gc and stick it on their surfaces, alongside their normal Neu5Ac sugars. It's possible that their similarity fools our cells into making this mistake. This happens only rarely, but often enough that we develop antibodies to Neu5Gc. In other words, our bodies know that Neu5Gc is the enemy.
It occurred to Varki that something similar might be happening in the production of embryonic stem cells. Once these cells are taken from an embryo, scientists traditionally lay them on top of a layer of mouse embryo cells and calf serum, which provide a supply of food for them. This food, it turns out, is loaded with Neu5Gc, and Varki--working with Fred Gage of the Salk Institute--discovered that it ends up on the human stem cells like frosting on a cake. And Varki and Gage found that human antibodies against Neu5Gc readily attack the stem cells.
If these stem cells were put in people, they might well be destroyed by antibodies. And even if they weren't, the foreign Neu5Gc on their surfaces could cause problems. Both Neu5Gc and the normal Neu5Ac help cells recognize each other, which is crucial during development, when cells stick together to form new structures. Confused cells could wind up producing developmental defects.
Now I suppose that opponents of embryonic stem cell research might seize on this research. Most of the embryonic stem cell lines now being studied could never be implanted in people to provide a new supply of neurons or heart tissue, because they'd be attacked as foreign tissue--exactly the sort of trouble that stem cells were supposed to avoid. Better to scrap the whole line of research and just focus on adult stem cells. (This article in Forbes seems to push this line.)
But this doesn't really make sense on strictly scientific grounds. Scientists could just scrap their existing lines of stem cells and start new ones, making sure that they can't take up Neu5Gc. This would be a challenge, but not an impossible one. Varki and Gage suggest feeding stem cells on serum taken from the person who is going to receive them, for example. Since we really don't know whether embryonic or adult stem cells are going to work as cures, why should scientists simply walk away from embryonic stem cells in the face of a challenge?
The irony is that scientists who rely on federal funding have no choice but to walk away. Starting a new stem cell line is expressly verboten under Bush's decree, because it crosses the moral line he has drawn in the sand. Varki and Gage's results will spell certain doom for embryonic stem cell research only if the government wants it to.
I have noticed that members of the Discovery Institute, the headquarters for lobbying for Intelligent Design, are also speaking out against embryonic stem cell research. It will be interesting to see if they try to embrace Gage and Varki's research while still trying to cast doubt on evolution. How on Earth, I wonder, could someone promoting Intelligent Design or Young Earth creationism make sense of these scientific results? How could they explain away so many facts that line up to present us with an evolutionary history taking us down through millions of years, from our common ancestor with other apes, to the first hominids to evolve large brains, to the rise of Neanderthals and our own species, to the latest breakthroughs in medicine? I do try to imagine how they would do this from time to time, but without much luck. I think I'll keep track of real science instead.
Update, Monday January 24, 2005: The paper is not on the Nature Medicine site yet. I will post a good link as soon as one becomes available.
Update, Monday, 3:00 pm: Welcome, citizens of Slashdot and Metafilter. There sure are a lot of you!
Nature Medicine has made the PDF of the Varki paper available for free on their home page. (Scroll all the way down.)
Update, Friday, 5 pm: Here's a follow-up post on why I don't think this proves the handiwork of an Intelligent Designer.


The Guardian has a long but disjointed report about the dispute over Homo floresiensis. Articles like these rarely give a very good picture of scientific disputes, since all parties involved only get a couple catchy quotes apiece. I've been particularly puzzled by Teuku Jacob, the elderly Indonesian paleoanthropologist who sparked the controversy by taking possession of the bones and locking them away from the Indonesian and Australian researchers who found them. So I was pleased when my brother, a linguistic anthropologist who does research in Indonesia, passed on this link to a translation of a long essay by Jacob. My brother promises me that the translation is accurate. There's a fair bit of science here, although Jacob isn't averse to calling his Australian rivals "latter-day conquistadors."


The more time I spend talking to biologists, the more they remind me of detectives. I have two stories in tomorrow's New York Times that make this connection particularly clear. In the first, E.O. Wilson attempts to solve the mystery of a plague of ants that devastated some of the earliest Spanish settlements in the New World. In the second, I look at another mystery: is there life on Saturn's moon Titan? The space probe Huygens will be falling into its hazy atmosphere on Friday to see what lurks under its cloak. I interviewed University of Florida chemist Steven Benner, who will be trying to search for signs of life in Huygens's data. But if Titan does have life, it may not be based on DNA or even need liquid water. So how do you look for something you've never seen before?
During my interview with Dr. Benner, he said something I found particularly apt about this sort of detective work--but which unfortunately had to be cut for space: "Those of us who do professional science in this area do get a degree of emotional balance. Most of the time science fails. So you meter your emotions."


Not long ago I had a remarkable experience: I got to visit the nursery for what might prove to be a new form of life. At Michigan State University, a group of computer scientists, biologists, and philosophers run the Digital Evolution Laboratory. There, they are developing software called Avida which allows them to create virtual worlds swarming with digital organisms. Avida's residents show a lot of the important features that scientists consider essential requirements for life. Their evolution is particularly impressive, because it parallels evolution in the wet world in all sorts of subtle ways. And because you can run through a hundred thousand generations in a matter of hours, the Avida team can carry out experiments on some of the most important aspects of evolution that biologists could previously only study by looking at the natural world.
For more details, you can read my cover story in the February issue of Discover.


When you consider a tapeworm or an Ebola virus, it is easy to think of them as being evil to their very core. That's a mistake. It's true that at this point in their evolutionary history these species have become well adapted to living inside of other organisms (us), and using our resources to help them reproduce themselves even if we get sick in the process. But one of the big lessons of modern biology is that there are no essences in nature--only the ongoing interplay of natural selection and the conditions in which it works. If the conditions change, organisms may evolve into drastically different things. Even the most ruthless parasite may discover the virtues of peace and harmony--if the conditions are right.
Joel Sachs and James Bull, two biologists at the University of Texas, have offered a vivid demonstration of this fact with the help of bacteria-infecting viruses, called bacteriophages. Bacteriophages, such as the one shown here, are wickedly elegant in the way they find hosts and inject their DNA, which then hijacks the bacteria's cellular machinery to make new bacteriophages. (For more of my praise of the bacteriophage, plus an excellent movie of the beast, go here.)
Bacteriophages fit the definition of parasite to a T. In many cases new viruses multiply inside a host until the bacterium simply rips apart. In other cases, they make bacteria sick, draining resources from their hosts that could otherwise be used for the hosts' own reproduction. But, as Bull and his colleagues have shown in a series of experiments, bacteriophages are not malicious in their very essence. Depending on the conditions in which bacteriophages find themselves, they can evolve into milder forms, or into meaner ones.
Bull and his colleagues took advantage of the fact that many bacteriophages can infect new hosts in one of two ways--by escaping one bacterium to invade another, or by getting passed down from one bacterium to its offspring. These two routes are called horizontal and vertical transmission. Bull's team experimentally created conditions that favored vertical transmission, and within a few dozen generations the viruses became much milder. If you rely on your host's survival for your own survival, it doesn't pay to be a brutal killer. (I wrote more about this evolutionary trade-off--and some of the debate surrounding it--in this article for Science.)
Now Bull and Sachs show that bacteriophages can even evolve to be nice to other bacteriophages. They describe the experiment in the January 11 issue of the Proceedings of the National Academy of Sciences. They started out with two bacteriophages, called f1 and IKe. Both viruses infect E. coli bacteria, but they enter in different ways. f1 only grabs onto one type of hair on the surface of E. coli (the F pilus), while IKe invades its hosts through another type (the N pilus). In the wild, f1 and IKe don't get along well. If they end up in the same host, they compete for the bacterium's cellular machinery. Also, because they are close relatives, sharing the same 10 genes, DNA-binding proteins of one bacteriophage can accidentally grab the DNA of the other species. As a result, bacteria infected with both f1 and IKe produce fewer copies of each virus than bacteria infected with only one species. It's the classic Darwinian scramble.
But Bull and Sachs wondered what would happen if the survival of both bacteriophages actually depended on their coexistence. Here's how they answered the question. First they engineered both bacteriophages, adding to each a gene that provides resistance to a different antibiotic (kanamycin for IKe and chloramphenicol for f1). Then they dumped billions of the engineered viruses into beakers full of E. coli. They allowed the viruses 16 minutes to find hosts, invade them, and start producing the proteins that confer antibiotic resistance. Then they added the two antibiotics to the beakers. Only the bacteria that had been infected with both bacteriophages could survive the assault. If a bacterium harbored only f1, for example, it would still die, because it remained susceptible to kanamycin.
Next, Bull and Sachs let the bacteriophages and their hosts alone for an hour. The bacteria divided, while the bacteriophages made copies of themselves. After an hour, the scientists dissolved away the bacteria, leaving behind the viruses. These new viruses were then added to a fresh batch of bacteria, and the cycle repeated itself.
Viruses are notoriously sloppy at replicating. The odds of a new virus winding up with a mutation are much higher than for organisms like ourselves, equipped as we are with enzymes that act like genetic proofreaders. As a result, with each round of Bull and Sachs's experiment, many variants emerged in both the f1 and IKe populations. The variants that were best suited for reproducing in the experimental conditions were favored by natural selection, and over time the viruses evolved. After 50 rounds, Bull and Sachs stopped the experiment and took a look at what the bacteriophages had become. Were they so selfish that they had driven themselves extinct? Or had they come to some sort of accommodation?
The bacteriophages clearly went through natural selection during just 50 rounds. By that point f1 was producing 50 times more copies of itself, and IKe was producing 1,000 times more. At the beginning of the experiment sharing a host was a bad thing for these viruses, but at the end it had become a very good thing. Bull and Sachs discovered that they had overcome their conflict of interest in an extraordinary way: they practically merged into a single organism. When Bull and Sachs opened up a bacteriophage shell, very often they found both the f1 and IKe genomes sitting side by side. They could still find plenty of viruses with a single genome inside, but even in these cases, evolution had taken a dramatic turn. By about round 20, the IKe viruses had lost the ability to make their own protein coat. Instead, they borrowed f1 coats.
Bull and Sachs argue that the bacteriophages adapted to the experiment in a clever way. If you're a bacteriophage, successfully invading a host on your own is not enough to stave off death, because you may find yourself alone. If a mutation lets you bring along the other virus with you, then you are pretty much guaranteed survival. For some reason, f1 seems to have taken the lead in this cooperation, mutating in such a way that IKe genomes could slip easily inside f1's protein coats. As a result, IKe began to lose its own ability to survive as an independent virus, relying instead on the cooperation of f1. Once the viruses were packaged together, they no longer had a conflict of interest, and they could evolve an even greater level of cooperation.
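To get a feel for how strong that guarantee is under the double-antibiotic regime, here is a toy calculation of my own (not a model from the paper): suppose a small fraction of the f1 population carries a hypothetical mutation that always brings the IKe genome along, while the rest of the population survives the antibiotics only when it happens to share a host with IKe.

```python
# Toy model (my illustration, not from Bull and Sachs's paper): frequency of a
# hypothetical f1 mutation that guarantees co-packaging with IKe, under a
# regime where only co-infected hosts survive the two antibiotics.
def next_frequency(p, solo_coinfection_chance=0.5):
    """One round of selection. Mutant progeny always share a host with IKe
    and survive; non-mutant progeny survive only when they happen to
    co-infect a host with IKe (an assumed 50% of the time)."""
    mutant_survivors = p * 1.0
    wildtype_survivors = (1 - p) * solo_coinfection_chance
    return mutant_survivors / (mutant_survivors + wildtype_survivors)

p = 0.01  # start the mutation at 1% of the f1 population
for round_number in range(50):
    p = next_frequency(p)
print(f"mutant frequency after 50 rounds: {p:.4f}")
# the mutation sweeps to near-fixation well within the experiment's 50 rounds
```

Even with a generous 50% chance that a lone virus happens to share a host, a mutation that makes partnership certain takes over within a couple dozen rounds in this cartoon version of the experiment.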
Evolutionary biologists have long been fascinated by cooperation, whether the cooperators are chromosomes in a single cell, individual bacteria in a colony, or people in a village. What keeps individuals from cheating on others, from choosing the selfish strategy rather than the selfless one? Scientists have constructed sophisticated mathematical models in order to find the right sort of conditions where cooperation might evolve. But Bull and Sachs point out that it only took them 50 generations to turn uncooperative bacteriophages into intimate partners. When they sequenced the viruses, they found that f1 had acquired just eight mutations in its DNA, and IKe had acquired nine. Perhaps cooperation is not such a big deal after all. And perhaps parasites are not the essence of evil we tend to believe them to be.


Evolutionary biologists face a challenge a lot like one in the study of ancient human history: retrieving vanished connections. The people who live in remote Polynesia presumably didn't sprout from the island soil like trees--they must have come from somewhere. Tracing their connection to ancestors elsewhere hasn't been easy, in part because the islands are surrounded by hundreds of miles of open ocean. It hasn't been impossible, though: studies on their culture, language, and DNA all suggest that the Polynesians originally embarked from southeast Asia. We may never be able to retrieve the full flow of history that carried people thousands of miles to the middle of the Pacific, but we can know some things about it, and we can rest assured that some things are definitely not true (such as the sprouting-from-the-ground theory).
Whales are a lot like Polynesians. All living species of whales look a lot like each other, and not very much like any other animals. They all have horizontal tail flukes, blowholes, and smooth skin free of scales or fur. Darwin argued that whales were not simply created in the oceans in their current form, but instead descended from land mammals which had adapted to life in the ocean. He pointed out that whales share a number of traits with land mammals, such as milk and a placenta. Their blowhole connects to a set of lungs very much like those of land mammals and nothing like the gills of fish.
Darwin wigged out more than a few people with this argument. Whales just seemed too different, too distinct to have evolved by small steps from a four-legged ancestor. And creationists loved to point out how unlikely this transition seemed--on par with turning a cow into a shark. They also liked to point out that no intermediate fossils had ever been found. But as I wrote in my book At the Water's Edge, paleontologists began to find those fossils in the 1980s. Today, the transition whales made from land to sea is wonderfully well documented. Paleontologists have found complete skeletons of creatures such as the 45-million-year-old Ambulocetus (reconstructed here by the gifted artist Carl Buell). The transformation was not some sudden macromutation, but a gradual series of changes over millions of years, featuring shrinking legs, lengthening tails, loosening hips, and migrating noses.
In the coming century, I suspect fossils will help scientists reconstruct other major transitions. But they'll also start reconstructing others that have left no record in rocks. A fascinating case in point has been published online at the Proceedings of the National Academy of Sciences. Jan Klein and Nikolas Nikolaidis of Penn State have drawn a rough map that charts the evolution of the immune system.
Our immune system is as awesome as a whale's body--in terms of the complexity of its parts and the way those parts work together so well. It keeps viruses, bacteria, tapeworms, and even cancer cells at bay, while generally sparing our own tissues from its withering attack. All animals share a rudimentary immune system, but Klein and Nikolaidis focused on a second system that is found only in vertebrates. Only we vertebrates have immune systems that can learn.
This learning system is a network of cells, signals, and poisons. Among its most important cells are T cells and B cells. They originate in the bone marrow, although the T cells have to finish their development in the thymus, an organ near the heart. These cells are unusual in many ways, the most important of which are the receptors they make on their surface. The cells have a special set of tools that cut up the receptor genes and paste them into new arrangements, so that the genes produce receptors with new shapes. Depending on its shape, a receptor can grab onto certain molecules. Those molecules may come from a bacterial toxin, or they may be molecules that coat our own nerves or muscle cells. Our bodies can usually eliminate the immune cells that have an affinity for our own tissue. If they don't, we end up with autoimmune diseases such as multiple sclerosis.
The surviving B cells and T cells are introduced to molecules from invading pathogens (antigens) by other immune cells called macrophages. The macrophages devour bacteria or virus-infected cells and then put some of the molecules of their victims on their surface. They travel to the lymph nodes to show off their conquests. If T cells or B cells bump into one of these macrophages, their receptor may fit reasonably well onto an antigen. That fit sends a signal to their DNA, triggering them to multiply. Some of the cells they produce have receptors cut and pasted into new shapes, some of which do an even better job of fitting on the antigen. These winners get to reproduce more. In other words, our immune systems use a version of natural selection to fine-tune their recognition of pathogens.
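That "version of natural selection" can be made concrete with a toy simulation, a cartoon of my own rather than a model of real receptor chemistry: treat the antigen and each receptor as short strings, score a receptor by how many positions match, and in each generation let the best binders multiply with a few random changes.

```python
import random

random.seed(0)

ALPHABET = "ABCD"
ANTIGEN = "ABCDDCBAABCD"   # arbitrary stand-in for an antigen's shape

def binding_score(receptor):
    """Toy measure of fit: how many positions match the antigen."""
    return sum(r == a for r, a in zip(receptor, ANTIGEN))

def mutate(receptor):
    """Copy a receptor with one random position rewritten."""
    receptor = list(receptor)
    i = random.randrange(len(receptor))
    receptor[i] = random.choice(ALPHABET)
    return "".join(receptor)

# Start with a population of random receptors.
population = ["".join(random.choice(ALPHABET) for _ in ANTIGEN)
              for _ in range(50)]

for generation in range(40):
    # The best binders are stimulated to multiply; their offspring pick up
    # fresh changes, some of which fit the antigen even better.
    population.sort(key=binding_score, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=binding_score)
print(best, binding_score(best), "out of", len(ANTIGEN))
```

Run it and the best receptor in the population closes in on a perfect fit within a few dozen generations, for the same basic reason the real system works: variation plus selection on binding.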
These B cells and T cells can then fight off a disease. The T cells may destroy cells infected with the pathogen, because most cells in the human body have receptors they can use to display antigens. In other cases, they can whip up macrophages into a furious frenzy of killing. Or they may spur B cells to produce antibodies. The B cells spray out the antibodies into our bodies, and when they come into contact with their particular pathogen, they may drill into it, stop it from invading cells, or tag the pathogen to make it an easier target for macrophage attack. Some B cells and T cells that can recognize a pathogen sit out the battle. If we should be exposed to the same disease years later, these memory cells can leap into action so quickly that the new infection may not even make us sick.
You can find this same remarkable system in humans, albatrosses, rattlesnakes, bullfrogs, and all other land vertebrates. You can also find it in most fish, from salmon to hammerhead sharks to sea horses. There are some variations from species to species, but they've all got B cells, T cells, antibodies, thymuses, and the other essential components. But you won't find it in beetles, earthworms, dragonflies, or any other invertebrate on land. Nor will you find it in starfish, squid, lobsters, or lampreys in the water. All these other animals rely instead on rudimentary immune systems that cannot learn.
For those who reject evolution, this sort of pattern tells them nothing. Like everything else in nature, they can only wave their hands and declare it the inscrutable work of a designer (lower case d or upper case D as they are so inclined on a given day). But immunologists and other scientists who actually want to learn something about the immune system find this view useless. Instead, they look at how animals with an antibody-based immune system are related to one another. And what they find is both straightforward and astonishing. All of the living animals with an antibody-based immune system descend from a common ancestor, and none of the descendants of that common ancestor lack it. That means that the antibody-based immune system evolved once, about 470 million years ago.
I need to back up in the history of life a few hundred million years to explain how scientists know this. Studies on fossils and genes agree that everything we call an animal (including sponges and jellyfish) shares a common ancestor not shared by plants, fungi, or other major groups of organisms. Exactly when that ancestor lived is a subject of fierce debate, but one of the latest estimates puts the date at about 650 million years ago. This ancestor probably had a simple immune system, because all animals, from sponges on, have at least some sort of defense against pathogens. Over the next 100 million years or so, the major groups of animals branched off from one another, and while some branches evolved some new defenses of their own, the antibody-based immune system only appears in our own branch, the vertebrates.
Animals with some--but not all--of the key traits of vertebrates, such as heads and brains, lived at least 530 million years ago. The only living relics of these early branches are hagfish. Later, our ancestors also evolved a vertebral column, becoming true vertebrates. Lampreys represent the deepest branch of vertebrate evolution, splitting off perhaps 500 million years ago from their common ancestor with us. They lack many traits that other vertebrates have--most obviously a jaw. A number of other weird jawless vertebrates filled the oceans between about 500 and 360 million years ago, but except for lampreys, they're all long gone. One of these branches gave rise over 470 million years ago to fish with jaws--known as gnathostomes. Gnathostomes later gave rise to sharks and other "cartilaginous" fishes, as well as ray-finned fishes, and land vertebrates.
You may have already guessed the kicker of all this history. Lampreys and invertebrates don't have an antibody-based immune system. Sharks, ray-finned fish, and land vertebrates do. Sharks, ray-finned fishes, and land vertebrates all share a common ancestor that is not shared by lampreys or by invertebrates. The simplest way to explain this coincidence is to conclude that the antibody-based immune system evolved after lampreys branched off from our own lineage, but before sharks and other living gnathostomes began to branch apart. We can't dig up fossil antibodies, but we can know when they evolved.
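The reasoning in that last step can be spelled out as a tiny parsimony exercise. The sketch below is my own illustration, not anything from Klein and Nikolaidis: given a simplified tree of lampreys, sharks, ray-finned fish, and land vertebrates, and the observation of which groups have the antibody-based system, it asks where a single origin of the trait would have to sit to match the data.

```python
# Toy parsimony check (my illustration): on a simplified vertebrate tree,
# where could a single origin of the antibody-based immune system sit so
# that exactly the observed groups inherit it?
tree = {
    "vertebrates": ["lamprey", "gnathostomes"],
    "gnathostomes": ["shark", "bony vertebrates"],
    "bony vertebrates": ["ray-finned fish", "land vertebrates"],
}
has_antibody_system = {
    "lamprey": False,
    "shark": True,
    "ray-finned fish": True,
    "land vertebrates": True,
}

def descendants(node):
    """All tip groups below a node (the node itself if it is a tip)."""
    if node not in tree:
        return {node}
    return set().union(*(descendants(child) for child in tree[node]))

observed = {tip for tip, present in has_antibody_system.items() if present}
for node in list(tree) + list(has_antibody_system):
    if descendants(node) == observed:
        print("a single origin fits on the branch leading to:", node)
# prints: a single origin fits on the branch leading to: gnathostomes
```

Only the branch leading to the gnathostomes fits the observations with a single origin, which is exactly the inference drawn above.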
Scientists have sometimes treated the transition from rudimentary immune system to antibody-based immune system as a great leap. Lampreys don't have antibodies, B cells, T cells, thymuses, or the rest, and all gnathostomes do. Some creationists have even tried to turn this into an argument against evolution, claiming that something as complex as the adaptive immune system could not have emerged gradually. But it's important to bear in mind that tens of millions of years of evolution separate our common ancestor with lampreys and the earliest gnathostomes. And in their new paper, Klein and Nikolaidis argue that the evolution of the antibody-based immune system was a lot like the evolution of whales: a gradual, step-wise process.
Most of the components of the antibody-based immune system were actually already in place long before gnathostomes evolved. Lampreys, for example, don't have a thymus, but they do have the structures and cell types that form the thymus. In gnathostomes, the thymus develops as cells switch on special genes in a particular order. Lampreys have these genes, as do so many other animals. Instead of building thymuses, they build other structures, such as eyes and gill arches. It would only have required altering the switches that determine when and where these genes become active to produce a new organ.
B cells and T cells are known as lymphocytes. Lampreys don't have lymphocytes, but Klein and Nikolaidis point out that they do have "lymphocyte-like cells." (The picture above shows what these cells look like.) Lymphocyte-like cells develop like lymphocytes, under the control of many of the same genes that control the development of lymphocytes. Once they are mature, these cells have almost the same structure and chemistry as lymphocytes--but they don't produce the antibodies and receptors of B cells and T cells. Exactly what they do in lampreys isn't clear.
What about those receptors and antibodies? Klein and Nikolaidis point out that they aren't quite as novel as they may look at first. They are made up of building-blocks of simple proteins arranged in different ways. And guess what--many of these simpler proteins are found in lampreys and invertebrates, where they serve other functions. The same goes for many of the proteins that B cells and T cells use to communicate with one another. Other proteins are made by genes that are unique to gnathostomes, but show a kinship to entire families of genes found in other animals. The most likely explanation is that an ancestral gene duplicated by accident, and later one of the copies was recruited to the evolving immune system.
Klein and Nikolaidis point out that some truly new things appeared as the antibody-based immune system emerged. But just because something is new doesn't mean that it couldn't have evolved. The best-understood example of a new feature is the cut-and-paste machinery that allows B cells and T cells to mix up their receptors into new shapes. Scientists have been working out its evolution for years now, but just last week some scientists from Johns Hopkins published a paper in Nature that brought the picture into remarkable focus. Our genomes are rife with virus-like sequences known as transposable elements, which produce enzymes whose sole function is to make copies of the transposable DNA and insert those copies somewhere else in our genomes. In a few cases, these transposable elements have evolved from pests to helpers, carrying out important functions in our cells. The genes that are responsible for cutting and pasting immune cell receptors bear a clear resemblance to transposable elements in other animals. So the evolution of a new cut-and-paste mechanism was actually just the domestication of an in-house virus.
I suppose that creationists might claim that these components could not possibly have come together into an antibody-based immune system. But there's no proof behind this sort of categorical dismissal, just a personal feeling of disbelief. These folks would still be left with the fact that the evolutionary tree of life and the biochemistry of vertebrates and other animals are all consistent with a gradual evolution of this system. It would all have to be a spectacular coincidence, or perhaps an intentional deception on the part of the designer. Who knows. Who cares, really? (Aside from certain Pennsylvania senators.) What's exciting here is the future research that could shed more light on this transition. Klein and Nikolaidis propose introducing lamprey genes into gnathostomes and vice versa to see just how close the ancestors of lampreys had gotten to an antibody-based immune system before they branched off on their own. Obviously, some half a billion years of independent evolution will muddy up the results, but it should be possible to see whether gnathostome immune genes can organize the lamprey immune system to act more like our own. What I'd be even more excited by would be a deep-sea discovery of a living fossil--a jawless fish that is more closely related to us than lampreys are. Such fish filled the seas 400 million years ago, and perhaps a few are lurking in some deep sea trench. Such a fish might have a crude antibody-based immune system, with only a few genes recruited and others yet to be pulled in. Perhaps it could do a mediocre job of learning to recognize diseases--but a mediocre job is better than no job at all.
It may sound like a crazy dream, but then again, so did walking whales.
Update 1/2/05: Panda's Thumb has more on the evolution of the immune system.

Size matters. At least that's the result of some recent research on long-term evolutionary trends that I'll be reporting in tomorrow's New York Times. Here are the first few paragraphs...
Bigger is better, the saying goes, and in the case of evolution, the saying is apparently right.
The notion that natural selection can create long-term trends toward large size first emerged about a century ago, but it fell out of favor in recent decades. Now researchers have taken a fresh look at the question with new methods, and some argue that these trends are real.
Biologists have recently found that in a vast majority of animals and plants, bigger individuals are more successful at reproducing than smaller ones, whether they are finches, damselflies or jimsonweed.
Nor is this edge a fleeting one. Natural selection can steadily drive lineages to bigger sizes for vast stretches of time. The giant dinosaurs that made the earth tremble, for example, were the product of the long-running advantage of being big over tens of millions of years.
"I think it holds up very well, and a lot better than a lot of people have said over the years," said David Hone, a paleontologist at the University of Bristol. Mr. Hone and others argue the push toward bigger size is so strong and persistent that there must be significant forces pushing the other way. Otherwise, we would be living on a planet of giants.
You can read the rest of the article here.
As is so often the case, I wish the article could have ended with a big fat asterisk, along with a footnote reading, "There's more to the story, but you'll have to visit The Loom for it."
This notion about size increase, known as Cope's Rule, has a long, checkered history, and this history says a lot about how the entire science of evolutionary biology has changed over the years. Cope's Rule is named after the American paleontologist Edward Drinker Cope, who made a careful study of the fossils of North America in the late 1800s. Cope belonged to the first generation of scientists who grappled with Darwin's Origin of Species, published in 1859. Its reception was decidedly mixed. On the one hand, Darwin was hugely successful in persuading scientists that life had evolved over a long period of time, thanks to the huge amount of evidence he marshalled--fossils, embryos, and the distribution of living species. But Darwin didn't fare so well in his argument about what drove the evolution of life. He was trying to quash the popular ideas of Lamarck, who had offered two mechanisms for evolution. First, traits acquired over an individual's lifetime are passed on to its descendants. Second, life contains a mysterious force that continually drives it from lowly primordial slime towards higher levels. Darwin rejected both of these mechanisms almost completely, replacing them with natural selection.
This second argument did not fare as well in the late 1800s as the first. Many biologists came to see life as the product of evolution, but they saw evolution as the product of various Lamarckian, long-term forces. Cope was one of these scientists. He looked at the early mammals of North America--tiny creatures, for the most part--and saw that later they were replaced by much larger species. Here, he decided, was evidence of an evolutionary force that could operate over millions of years--a force, moreover, that was separate from natural selection. Others found similar patterns in other groups, such as corals and foraminifera.
In the mid-twentieth century, evolutionary biology went through a revolution known as the Modern Synthesis. Scientists came to understand how genes mutate, and how mutations helped make natural selection possible. Leftover Lamarckism found no vindication of its own, and faded away. Biologists still accepted Cope's Rule as a genuine pattern in the fossil record, but they offered a different mechanism than Cope originally had. Instead of some mysterious long-term trend, good old natural selection was at work. Bigger individuals were favored in populations, and over millions of years, this edge produced bigger and bigger species.
In the 1970s, a group of young paleontologists challenged some aspects of the Modern Synthesis. They rejected the idea that every long-term pattern in the fossil record could be neatly explained by short-term natural selection. And Cope's Rule became one of their favorite targets. A trend towards bigger sizes could appear in the fossil record, they pointed out, even if natural selection didn't favor bigger individuals. Small species, for example, might be more likely to survive mass extinctions, and would thus have been more likely to found major new groups of species. Because they were small, their descendants couldn't get much smaller before they hit a minimum size limit. But they'd have plenty of room at the larger end of the spectrum. Even without an inherent advantage to being big, the lineage would gradually get larger.
Although a number of paleontologists were involved in this rebellion, Stephen Jay Gould was its most outspoken member. He made Cope's Rule a favorite object of his derision, a case study of how our subjective biases ("bigger is better") shape our interpretation of the natural world. And from the 1970s to the 1990s, he had pretty good reason to be scornful. The evidence for Cope's Rule turned out not to be all that strong. Or to put it more precisely, scientists who promoted Cope's Rule did not test it rigorously against other possible explanations.
My article looks at some recent research that gives it the careful look it demands. And, in something of a surprise, Cope's Rule is enjoying a renaissance. In most living populations of animals and plants studied so far, natural selection shows a strong preference for larger size. In rigorous studies of the fossil record, lineages of dinosaurs and mammals show signs of having evolved to bigger sizes over millions of years thanks to natural selection.
So was Gould wrong? Yes and no. Cope's Rule is not, as he claimed, a "psychological artefact." But there must be more to the story than natural selection favoring bigger individuals. Otherwise, we'd live on a planet of giants. In my article, I mention a few possible forces that work against Cope's Rule. One that I didn't have space to mention is a force near and dear to Gould's heart: species selection. Just as individuals are favored or disfavored by natural selection, species may also undergo a selection of their own, with some species giving rise to more descendant species, while others go extinct. In the case of size, what's good for the individual may not be so good for the species.
Species selection has been kicked around for quite some time to explain why Cope's Rule hasn't made everything enormous. Recently, a nice study of fossils came out that supported the idea. Paleontologist Blaire Van Valkenburgh of UCLA and her colleagues have documented how big size may have doomed two groups of canids--the ancient relatives of today's dogs and wolves--in North America. In both cases, the canids evolved to larger sizes over millions of years, only to dwindle away to extinction.
As Van Valkenburgh and her colleagues pointed out in Science in October, small canids could have found enough energy in rabbit-sized prey and other foods such as fruits. But once the canids got above about forty pounds, they could no longer survive on this fare. They would spend more energy running after prey than they gained from eating it. As the canids got bigger, Van Valkenburgh argues, they shifted to hunting prey as big as themselves or bigger. Consistent with this hypothesis, Van Valkenburgh has found that as both groups of canids got large, their jaws and teeth also evolved. They lost molars, their front teeth got larger, and their jaws became stout and strong. This shift put these canids in an evolutionary trap. If their large prey became extinct, they risked starving to death. Nor could they re-evolve the versatile teeth and jaws that had allowed their ancestors to eat different sorts of food. They didn't go extinct because they were big; they went extinct because they were specialized.
Scientists I spoke to for this article were confident that Cope's Rule would figure in a lot of research in coming years. They've now got the tools they need to dissect long-term trends like never before--from databases of fossils to detailed evolutionary trees to sophisticated statistical methods. After more than a century, Cope's Rule has plenty of life in it yet.


Intelligence is no different from feathers or tentacles or petals. It's a biological trait with both costs and benefits. It costs energy (the calories we use to build and run our brains), which we could otherwise use to keep our bodies warm, to build extra muscle, or to ward off diseases. It's also possible for the genes that enhance one trait, such as intelligence, to interfere with another one, or even cause diseases. Over the course of evolutionary time, a trait can vanish from a population if its cost is too high.
On the other hand, intelligence may offer some evolutionary benefits, by allowing us to find food, withstand the elements, locate the car keys our children have put in their dollhouses, etc. But it is by no means a given that intelligence is always a net plus. It all depends on the conditions in which we--and other animals--find ourselves.
Scientists have come to appreciate how optional intelligence is through several sorts of experiments. Last year French scientists reported an experiment in which they bred fruit flies for their ability to learn. They would give the flies oranges and pineapples on which to lay their eggs, but they would dab one kind of fruit with a nasty-tasting chemical. Some of the flies learned quickly to avoid the bad-tasting fruits, avoiding them even when the researchers didn't put the chemicals on them. These smarter flies were allowed to reproduce, passing on their learning genes to the next generation. (The researchers switched the bad taste between the fruits in each generation to make sure that the flies weren't simply evolving a distaste for oranges or pineapples.) This line of flies became significantly better at learning than their unevolved cousins in a few dozen generations. And in a reverse experiment, the researchers succeeded in breeding stupid flies that did worse at learning than normal flies.
If it was so easy for the scientists to produce better learning in flies, why hadn't the ancestors of these insects already evolved this sort of intelligence in the wild? The answer is that this intelligence comes at a cost. The researchers put the larvae of the smart flies alongside some normal fly larvae and let them compete for a supply of yeast. They then counted how many of the larvae survived to adulthood. Then they did the same experiment with the dumb flies. They found that the larvae of the smart flies were more likely to die off than those of the dumb ones.
Now comes another experiment in intelligence, this one conducted mainly by nature rather than scientists. Many of the streams that feed the Panama Canal are inhabited by the same species of guppy, Brachyraphis episcopi. And in many of these streams, the guppies live in two different habitats: above and below waterfalls. Below the waterfalls, they face a lot of competition from other fish that are trying to eat the fruit and other foods that fall from the trees overhead, and they also have to cope with several predatory fish. But above the waterfalls the guppies enjoy a predator-free existence. Researchers at the University of Edinburgh realized that this arrangement created excellent conditions for the evolution of different kinds of behavior within a species. Upstream guppies would not face the same evolutionary pressures that the downstream fish did. And if the researchers were right, they should find the pattern repeated in stream after stream.
The researchers netted guppies from four different streams, both from upstream and downstream populations. They then shipped the fish back to their lab in Scotland and tested their ability to make their way through mazes to find food. As they report in a paper in press at Behavioral Ecology, the fish from the low-predator upstream sites consistently outperformed their downstream counterparts. They figured out the mazes twice as quickly.
The researchers argue that the upstream fish do so well because they have been able to evolve a sort of single-mindedness. In the wild, the guppies appear to size up their stream and figure out the best place to wait for food to drop to the water. They head for that patch quickly and defend it from other guppies. This sort of learning translates well into a laboratory maze. The downstream guppies, on the other hand, would risk becoming easy prey if all they did was search for the best patch of stream. Instead, they also have to get a better sense of their overall habitat, spotting predators, finding refuges, and so on. In the laboratory, they tended to explore more of the passageways of the maze than the upstream guppies, perhaps due to their instinct to get the lay of the land (or perhaps the lay of the water).
These results raise a sticky point about ourselves. They suggest that different populations of the same species (such as humans) can evolve differences in cognition in response to different environments. I don't think these results can be used to boost any notion of race-based differences in IQ, though, because we're not fish or laboratory fruit flies. I don't think the conditions that people in different parts of the world face are as different as those these flies and guppies have faced. The most important lesson from these results, I think, is that they should make us tone down our self-love a bit. Being intelligent does not make us superior to other animals. It only makes us superior in one respect.


Imagine you're a columnist. You decide to write something about how the National Park Service is allowing a creationist book to be sold in its Grand Canyon stores, over the protests of its own geologists, who point out that NPS has a mandate to promote sound science. Hawking a book that claims that the Grand Canyon was carved by Noah's Flood a few thousand years ago is the polar opposite of this mandate. So what do you write? Well, if you're Republican consultant Jay Bryant, and you're writing for the conservative web site Town Hall, you declare this a clear-cut case of Darwinist atheists censoring freedom of speech in a desperate attempt to squelch Intelligent Design.
I don't blog much about science and politics, because I don't have the time and because others do it better than I could (see Chris Mooney and Prometheus for starters). But there's something so simple and basic about the Grand Canyon affair--with plain scientific fact on one side and eye-popping rhetorical nonsense on the other--that I can't help but register disbelief at it from time to time.


The Australian media are doing a fantastic job of keeping up with the developments with Homo floresiensis. Here's the first three-dimensional reconstruction I've seen of the little hominid, made by an Australian archaeologist. It's published on the Australian Broadcasting Corporation's web site. I'm sure that as more bones emerge, the image will improve, but this is still a wonderful first look.


Homo floresiensis update: The Economist weighs in on the "borrowing" of the fossils. They mention that when the bones were removed, they were simply stuffed in a leather bag. This is not exactly the sort of procedure you see in protocols for avoiding contamination of ancient DNA. In the Australian, the discoverers of "Florence" vow to return to the fossil site, and this time they'll put their discoveries in a really good safe. Wise move.


In tomorrow's New York Times, I have an article about how to reconstruct a genome that's been gone for 80 million years. The genome in question belongs to the common ancestor of humans and many other mammals (fancy name: Boreoeutheria). In a paper in this month's Genome Research, scientists compared the same chunk of DNA in 19 species of mammals. (The chunk is 1.1 million base pairs long and includes ten genes and a lot of junk.) The researchers could work their way backwards to the ancestral genetic chunk, and then showed they could be 98.5% certain of the accuracy of the reconstruction.
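If you want a feel for how an ancestral sequence can be "worked backwards" at all, here's a toy sketch in Python--my own illustration with made-up sequences, not the Genome Research team's actual method, which relies on probabilistic models of DNA change over the full mammal family tree.

```python
# The real reconstruction uses probabilistic models of DNA change over the
# whole mammal family tree; this is just a cartoon of the underlying idea,
# with made-up sequences: each ancestral base is inferred from the pattern
# of bases found in the living species.

from collections import Counter

# Hypothetical aligned snippets of the "same chunk" from five living species.
alignment = [
    "ACGTTGCA",
    "ACGTTGCA",
    "ACATTGCA",
    "ACGTTGAA",
    "ACGTTGCA",
]

def naive_ancestor(seqs):
    """Majority base at each aligned position--a crude stand-in for
    likelihood-based reconstruction over a known phylogeny."""
    ancestor = []
    for column in zip(*seqs):                 # walk the alignment site by site
        base, _count = Counter(column).most_common(1)[0]
        ancestor.append(base)
    return "".join(ancestor)

print(naive_ancestor(alignment))              # ACGTTGCA
```

The real method weighs each species by its place on the tree and by how DNA tends to change, which is how the team can attach a confidence figure like 98.5% to the result; the majority vote above is just the intuition.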
There are some pretty astonishing implications of this work. For one thing, it should be possible to synthesize this chunk of DNA and put it in a lab animal to see how it worked in our ancestor. For another, the scientists are now confident that they will be able to use the same technique to reconstruct the entire genome in the next few years, if the sequencing of mammal genomes continues apace. Could scientists some day clone a primordial Boreoeutherian? It's not impossible.
On the down side, this method will not work for just any group of animals you want to pick. Mammal evolution was rather peculiar 80 million years ago: a lot of branches sprouted off in different directions in a geologically short period of time. That makes the 19 species the scientists studied like 19 different fuzzy images of the same picture. Other groups of species had a very different evolutionary history, and one that may make genome reconstruction impossible. If you yearn for the day when Jurassic Park becomes real, you will have to content yourself with a swarm of shrew-like critters. If they did somehow manage to break out of a lab, I suspect they would get eaten by the first cat to cross their path.


I have a short piece in today's New York Times about how male swallows are evolving longer tails, which female swallows find sexy. Here's the original paper in press at The Journal of Evolutionary Biology. Measuring the effects of natural selection is tough work, the details of which are impossible to squeeze into a brief news article. Scientists have to document a change in a population of animals--the length of feathers, for example--but then they have to determine that the change is a product of genetic change. We are much taller than people 200 years ago, but it's clear that most, if not all, of this change is simply a response of our bodies to better food and medicine. The authors of the swallow paper carried out a number of studies that suggest that the length of swallow tails is genetically based, and that those genes are changing. If they're right--and other experts I contacted think they are--it's a striking example of how quickly the sex lives of wild animals can evolve.
Things get a little fuzzier when the researchers propose what's driving the evolution. They think desertification in the springtime range of the swallows in Algeria is to blame. But it's very hard to eliminate other possibilities, since these swallows have complicated lives, migrating from Europe to South Africa and back every year. It's much easier to make a case for the forces driving the evolution of Darwin's finches, which generally sit obediently on the island on which they were born and are subject to cycles of droughts and heavy rains.
But it's a question very much worth investigating. Global warming may well produce ecological changes that could produce just these sorts of rapid evolutionary changes in animals and plants. In some cases, species may be able to adapt quickly enough to their new environment. In other cases, they may lose the race.


On Wednesday I spoke on "The Current," the Canadian Broadcasting Corporation's morning radio show. The hour-long segment focuses on various aspects of evolution, such as the evolution of diseases and the ongoing creationist circus in Georgia. I spoke about how humans are altering the evolution of other species. You can listen to the entire episode here. The audio file is broken up into pieces; part two and part three are the evolution segment.


Last month saw the bombshell report that a tiny species of hominid lived on an Indonesian island 18,000 years ago. Since then there has been a dribble of follow-up news. Some American paleoanthropologists have expressed skepticism, pointing out that while bones from several small individuals have been found, only one skull has turned up. The skull was the most distinctive part of the skeleton, with a minuscule brain and other features that suggested it was not closely related to our own species. The skeptics suggest that these hominids were actually modern human pygmies, and that the skull came from an individual who suffered a genetic disorder called microcephaly.
In Friday's issue of Science, Michael Balter reports that a prominent Indonesian anthropologist, Teuku Jacob of Gadjah Mada University, thinks Homo floresiensis was a microcephalic. He has taken possession of the fossils to study them, and this has a number of researchers worried. Jacob is known to guard fossils in his vault, and so he may essentially be making it impossible for other researchers to look at them. Balter quotes one of the authors of the original report on the fossils, Peter Brown of the University of New England in Australia, saying, "I doubt that the material will ever be studied again."
This could be staggeringly tragic, because the world is waiting for the other shoe to drop: is there any DNA in the fossils?
The fossils are so young that they might well contain some genetic fragments, and this DNA could quickly resolve the debate over which species the bones belong to. If they belong to human pygmies, their DNA should be more similar to the DNA of Australian aborigines or Southeast Asians than to Europeans or Africans. But if, as Brown and his colleagues suggest, they belong to a species that branched off from an Asian population of Homo erectus, then their DNA should not be particularly close to any living human's genes. Most evidence indicates that Homo erectus in Asia shares a common ancestor with Homo sapiens that lived two million years ago. It might even be possible to compare Homo floresiensis DNA to the fragments of Neanderthal DNA that have come to light in recent years. If Brown is right, then Neanderthal DNA should be more similar to human DNA than that of Homo floresiensis, because Neanderthals and humans share a common ancestor that lived roughly 500,000 years ago--a quarter the age of the ancestor we share with Homo erectus.
According to an Australian newspaper, Brown and his colleagues have found hair that may belong to H. floresiensis, and which may contain DNA. But if that turns out to be a dead end, the next best hope will be the fossils. And the biggest challenge in finding fossil hominid DNA is contamination. You don't want to accidentally grab DNA from a lab assistant's thumbprint. If the Homo floresiensis fossils go down a bureaucratic rabbit hole, that challenge could become enormous.


There are lots of news stories today (as well as PZ Myers' take) about the fabulous new discovery in Spain of Pierolapithecus catalaunicus, a 13-million-year-old fossil close to the common ancestor of all living great apes.
The early evolution of apes is where some of the most interesting developments are emerging. Until the recent discoveries of fossils of Pierolapithecus catalaunicus and other early species, the fossil record from this period of our history was pretty scanty. These new fossils are starting to shed light on some pretty major questions, such as how our upright stance came to be and how our brains got so big. Meanwhile, new genetic work is raising the curtain on the evolution of cognition in these early apes, which set the stage for our subsequent explosion.
Yet for all the excitement a story like this can engender, some of the coverage has been pretty irritating. Certain hoary misconceptions about science have a way of taking hold in the journalistic world and seem to be impossible to dislodge. One of these is the notion that paleoanthropologists are focused on discovering "the missing link," and that only the missing link can tell us anything of real importance about our origins. Just consider Diedtra Henderson's article on MSNBC.com. It includes this rather revealing sentence--
"Coaxed by a reporter to say Pierolapithecus catalaunicus represented a 'missing link,' co-author Meike Kohler demurred. 'I dont like, very much, to use this word because it is a very old concept.'"
That's right--coaxed. As in, "Come on, professor, just give us a smile and say it's a missing link. It won't kill you, right?"
Henderson is hardly alone. A little googling unearths 59 articles that do their best to call Pierolapithecus a missing link, even if it means putting a question mark after it in a headline. Today, Ira Flatow on Science Friday asked his paleoanthropologist guest whether the fossil is a missing link, even while he acknowledged that the scientist might not want to be "boxed in" with that phrase.
Now, if you learned about human origins 50 years ago, you might well have read things by scientists referring to a missing link in our evolution. The great paleoanthropologist Robert Broom even published a book in 1951 called Finding the Missing Link. But this was a time when so few fossils were known from human evolution that many researchers thought that our ancestry was pretty much linear until you got back to our common ancestor with other living apes. Fifty years later, it's abundantly clear that human evolution has produced many branches, all but one of which have ended in extinction. Some are close to our own ancestry, others are further away. Paleoanthropologists don't get excited about a fossil because they think they've found the missing link (whatever that is), but because a fossil can show how early a trait such as a big brain evolved, and sometimes can even reveal traits that have evolved independently several times in evolution. That's what gets them fired up about Pierolapithecus catalaunicus. So why shouldn't journalists get fired up as well, rather than trotting out old cliches?
It's not just lazy journalism, I'd argue; it also abets some pernicious pseudoarguments made against evolution. Creationists try to cast doubt on the reality of evolution whenever a new fossil of a hominid is discovered. They crow that the latest fossil has a feature not found in living apes or living humans, meaning that it can't bridge the gap between the two groups. These arguments hardly call human evolution into doubt. The only lesson that should be drawn from them is that the term "missing link" should be retired for good.


...Actually, this new Gallup report shows that 35% of people believe that Darwin's theory of evolution is not supported by the evidence, while another 29% don't know enough to say, and 1% have no opinion. So perhaps I should say, wrong or uninformed.


A little more horn-tooting: The Loom has just been named a winner of the American Association for the Advancement of Science's 2004 Science Journalism Award. The judges considered three pieces: Hamilton's Fall, Why the Cousins Are Gone, and My Darwinian Daughters. Here's the press release. Thanks to the judges--it's gratifying to see that it's possible for a little blog to swim with the big online sharks.
On the other hand, the news is a bit embarrassing, coming as it does while I've left the Loom woefully neglected over the past couple weeks. I've been working on a lot of articles, such as a piece for Science about the new hypothesis that our ancestors evolved to run. (Here's a shorter version; the full version will go online later today.)




Get to know that little skull. Scientists are going to be talking about it for centuries.
As researchers report in tomorrow's issue of Nature, the skull--along with other parts of a skeleton--turned up in a cave on the Indonesian island of Flores. Several different dating methods gave the same result: the fossil is about 18,000 years old. (Additional bones from the same cave date back to about 38,000 years.) If all you had was the 18,000-year figure and this picture to go on, you might assume that the skull belonged to a small human child. After all, there is plenty of evidence that Homo sapiens had already been in this part of the world for 25,000 years. But you'd be wrong.
The skull actually belongs to a previously unknown species of hominid, whose ancestors split off from our own some 2 million years ago. Homo floresiensis, as it's known, stood three feet high as an adult and had a brain less than a third the size of our own.
To understand just how mind-blowing Homo floresiensis is, you have to consider it in the context of hominid evolution. Our closest living relatives (chimpanzees and bonobos) live in Africa, and both genetic and fossil evidence indicate that the common ancestor we share with them lived in Africa as well. The oldest known hominids--those species more closely related to us than chimps or other primates--date back 6 million years. They were short, probably could walk upright, and had brains about the size of a chimpanzee's--about 350 cubic centimeters. It was only about 2.6 million years ago that hominids started using stone tools, and only about 2 million years ago that species emerged that stood as tall as we do. Their brains were also bigger--850 cc. The increase in brain size may not have been all that significant, since bigger mammals tend to have bigger brains, smart or not. But shortly after this evolutionary surge, the first hominids turned up outside Africa. Homo erectus moved as far east as China and Indonesia within just a few hundred thousand years. At the very least, their migration suggests an expanding population of meat-eaters who had to seek out much bigger ranges than their ancestors did.
The Asian population of Homo erectus had little, if anything, to do with our own origins. The oldest human fossils, dating back 160,000 years, were found in Africa, and there's a pretty good chain of evidence showing that Homo sapiens descends from hominids who stayed home on the mother continent while Homo erectus swept across Asia. For instance, African hominids underwent a massive burst of brain expansion around 500,000 years ago, reaching close to our own capacity. Meanwhile, Homo erectus in Asia underwent a slight increase, if any. Humans only expanded successfully out of Africa about 50,000 years ago. They may have interbred with Homo erectus, but most of our genome still points back to a recent African origin.
Paleoanthropologists were first attracted to Flores when 800,000-year-old tools were found on the island in 1998. Boats seem to have been essential for getting to Flores, which speaks of a pretty impressive mental capacity for Homo erectus. (On the other hand, lizards and elephants and other land animals got to the island without a boat--perhaps by swimming or being swept away on logs during storms.) Researchers poked around on Flores, and last September they turned up something none of them had expected: Homo floresiensis. Homo floresiensis was not an ape--it had the signature traits of a hominid, such as a bipedal anatomy and small canine teeth. But it wasn't a pygmy human, either. Pygmy brains are in the normal range of variation for our own species. What's more, the floresiensis brain wasn't just small but had a drastically different shape than ours--a shape more like the brain of Homo erectus. This and other anatomical details have led the researchers to conclude that Homo floresiensis branched off from Homo erectus and evolved into a dwarf form.
Here is case-closed proof that today's solitary existence of Homo sapiens is a fluke in the history of hominids. Even 18,000 years ago, at least one other species walked the Earth with us. Exactly how Homo floresiensis went extinct no one knows, but close to the top of the list would have to be ourselves. Neanderthals survived only a few thousand years after humans turned up in Europe, and Homo erectus seems to have disappeared from Indonesia around 40,000 years ago, just around the time humans came on the scene. Perhaps Homo floresiensis lasted longer on Flores because it was harder for humans to reach.
A dwarf hominid on an island is fascinating for another reason--islands are famous for fostering the evolution of dwarf animals, from deer to mammoths. It's possible that the small territory of islands and the lack of competition and predators favor the small. For the first time, hominids have fallen under the same rule. Island mammals have also been shown to sometimes evolve much smaller brains, and, incredibly, the hominid brain is subject to the same rule. Homo floresiensis's brain shrank down to the smallest size ever found in a hominid. Did Homo floresiensis lose the mental capacity to use tools along the way? The researchers found stone tools in the same site where they found Homo floresiensis, but it's not clear whether Homo floresiensis made the tools, or humans used them (perhaps to kill Homo floresiensis?).
One of the most interesting questions that comes to mind with the discovery of Homo floresiensis is how far back it goes in the fossil record. Just how long did it take for a lineage of hominids to lose half their height and two-thirds of their brain? It may have taken a million years, or a few hundred thousand, or maybe less. In a commentary in Nature, Marta Lahr and Robert Foley of Cambridge point out that it took 12-foot-high elephants on Malta only 5,000 years to shrink to the size of a dog. I've always been a bit skeptical when people forecast dramatic change for our species. But if evolution can produce Homo floresiensis, who knows where a few thousand years on Mars or in another solar system could take our descendants?
Update, 11/1/04: Here's a bundle of papers, interviews, and such on H. floresiensis from Nature. Much of it is free.


Last month I blogged about my Scientific American review of Dean Hamer's new book, The God Gene. I was not impressed. It's not that I was dismissing the possibility that there might be genetic influences on religious behavior. I just think that the time for writing pop-sci books about the discovery of a "God gene" is after scientists publish their results in a peer-reviewed journal, after the results are independently replicated, and after any hypotheses about the adaptive value of the gene (or genes) have been tested.
Apparently Time doesn't agree. In fact, judging from this week's issue, they think it's the stuff of cover stories. I should point out that the article itself contains some pretty good interviews with people other than Hamer about their own work--studies of spirituality in twins and the like. But Hamer's work gets the lion's share of space, without any mention that his results haven't been published in a journal (let alone that the last results that got Hamer this sort of press--about a "gay gene"--could not be replicated). Time even copied Hamer's title on their cover, despite the fact that in his book, Hamer backpedals furiously from it, saying that the gene he has identified must be one of many genes associated with spirituality. In fact, the Time article has to backpedal, too. It quotes John Burn, medical director of the Institute of Human Genetics at the University of Newcastle in England, as saying:
If someone comes to you and says, 'We've found the gene for X,' you can stop them before they get to the end of the sentence.
You may be able to stop them from getting to the end of the sentence, but you can't stop the presses.
Update, 11/1: The Time story is no longer available for free. I've linked instead to a Time press release.


I have an article in tomorrow's New York Times about the mystery of autumn leaves. Insect warning? Sunscreen? The debate rages. The one thing I was sad to see get cut for space was the statement by one of the scientists that the answer might be "all of the above." This sort of multitasking is the cool--and sometimes maddening--thing about living things. Very important, and very hard to sort out.


Last week I blogged about the strange story of our past encoded in the DNA of lice. We carry two lineages of lice, one of which our Homo sapiens ancestors may have picked up in Asia from another hominid, Homo erectus. I always get a kick imagining human beings, having migrated out of Africa around 50,000 years ago, coming face to face with other species of upright, tool-making, big-brained apes. It's pretty clear that it happened in Europe, which was occupied by both humans and Neanderthals for several thousand years. But encountering Homo erectus would be even weirder. Studies on DNA suggest humans and Neanderthals share an ancestor dating back half a million years or so. But Homo erectus moved into Asia 1.8 million years ago. These were long-lost cousins, to put it mildly. What's more, they almost certainly had nothing along the lines of human language. Their brains were very different too; they kept making the same stone tools they had been making since they had left Africa. I can't help imagining it would have been an awkward encounter, or even a bloody one. Yet it was close enough for us to pick up their lice.
Hot on the heels of the lice study, a new study on human DNA offers some more support for the idea of a very intimate reunion. Until now, most studies of human genes have pointed to Africa as their origin. If you draw a tree of the various versions of a gene, the deepest branches often belong primarily to living Africans. Some genetic markers--which may have evolved as humans moved out of Africa--are shared almost exclusively by Europeans and Asians. These patterns suggested that humans sweeping out of Africa did not interbreed with Neanderthals or Homo erectus. Or, if they did, none of the DNA of those other hominids is around today. But in a paper in press in Molecular Biology and Evolution, University of Arizona scientists report the discovery of a gene that flouts the pattern.
Known as RRM2P4, this gene has its roots in Asia. Over half of the people sampled from South China had the oldest version of the gene, while only 1 out of 177 Africans who were surveyed had it. And by studying the variation in different versions of the gene, the researchers concluded that the most recent common ancestor of those versions existed 2 million years ago. The simplest explanation for this pattern is that at least a few humans and Homo erectus came face to face in Asia and had kids.
The authors point out that the gene they looked at isn't big enough to offer a huge amount of statistical confidence. That will have to wait for other genes with Asian roots, if they're out there. But if RRM2P4 is any guide, humans and Homo erectus didn't just trade lice. Our hominid cousins may not have been able to survive as a species with us in the neighborhood, but all was not war between the species.


Here's the most important thing about The Ancestor's Tale that I couldn't fit in my review. I kept noticing how little Richard Dawkins mentioned the other celebrity evolutionary biologist of our time, Stephen Jay Gould. After all, Gould was a prominent character in many of Dawkins's previous books, cast as the brilliant paleontologist misled by leftist ideology.
Gould was famous for his attacks on adaptationism--the notion that the creative powers of natural selection are behind all sorts of fine points of nature, from jealousy to the prime-numbered cycles of periodical cicadas. In Gould's opinion, Dawkins was an ultra-Darwinian fundamentalist. Gould thought that evolutionary biologists should widen their horizons. They should consider that things that look like adaptations might just be by-products of how organisms develop. They should consider how random catastrophes can override all of natural selection's work, wiping out fit and unfit alike. They should consider how selection may work on many levels--not just with selfish genes, but with populations, and even species. (This was why Gould thought punctuated equilibrium was so important.)
Dawkins would have none of this. He downplayed the importance of developmental constraints, of mass extinctions, and of species selection. His attitude towards punctuated equilibrium has been, "Yeah, but so what?"
And then, in The Ancestor's Tale, the battle of Dawkins v Gould disappears. One explanation for the disappearance might be that Dawkins is respecting the dead. (Gould died in 2002.) Perhaps, but the silence is still weird. That's because in this book, Dawkins moves into the heart of Gould territory: the murky realm of evolutionary history. Dawkins has always been at his most eloquent and powerful when he ignores history. His arguments about selfish genes and the like are, at their heart, exquisitely organized reasoning. He did sometimes bring in actual details from biology to these arguments, but only as illustrations of his points. In The Ancestor's Tale, Dawkins takes on 4 billion years of evolution, in all its strange exuberance. The evidence--the fossil record, the relationships of living species revealed by DNA, and so on--dwarfs our explanations for it. We know there were giant scorpions in the oceans, and that they disappeared. But we don't know why. We know that birds survived the mass extinction 65 million years ago, but their close relatives--feathered, flightless dinosaurs--did not. But we don't know why. And so on. You'd imagine that this territory might make an adaptationist a bit anxious.
Dawkins handles himself very well as he moves across this terrain. He knows his natural history, his plate tectonics, and all the rest. He frequently throws up his hands about why the history of life took the turns that it did--although he remains confident that the best way to find the answer is to keep adaptationism first and foremost in mind. Gould shows up only in footnotes. Punctuated equilibrium remains an interesting empirical question but not a major principle. Species selection doesn't even show up in the index.
Yet I thought that sometimes Dawkins didn't acknowledge that some of the episodes in evolution he was writing about still raise important questions about his selfish-gene-centered view. I found this to be the case especially when he wrote about the origin of animals. Animals are multicellular organisms, in which trillions of cells come together as an individual, which then reproduces through just a few sex cells. Animals also descend from a single-celled ancestor. Making that transition isn't simple. A bunch of cells won't just come together and agree that a few of them will get to pass their own DNA on to the next generation. That doesn't make evolutionary sense. The only way to decipher this transition is to view evolution taking place at different levels--at the level of the genes, of the cell, and of the individual. Changes at one level may work against changes at the others, or they may all end up working together. I got interested in this subject myself a couple of years ago while writing an essay for Natural History, focusing on the work of Richard Michod of the University of Arizona. It seems to me that the origin of animals is a case where Gould's multi-level selection may work well. Now, Dawkins might disagree, and yet he didn't even mention this challenge to his own views, let alone tear it apart as you'd expect from his previous books. In a 630-page book, I find this omission puzzling.
It's always possible that Dawkins might eventually accept that in this case multi-level selection is important. He'd probably go on arguing that in most cases a gene-centered approach to life works best. I found it very interesting that he ends the book with a discussion of religion, saying that he suspects that many who call themselves religious would agree with Dawkins (an outspoken atheist) on many of the things he has to say about nature. He describes how "a distinguished elder statesman of my subject" was arguing for a long time with a colleague. The statesman said jokingly, "You know, we really do agree. It's just that you say it wrong."
I imagine Dawkins talking to Gould there.


The New York Times is running my review of Richard Dawkins's new book The Ancestor's Tale this weekend.
I'm particularly grateful at times like these to have a blog, where I can add extra information and the occasional correction.
Towards the start of the review I mention a remarkable tree of 3,000 species. You can download a pdf here. It's files like these that the zoom function was made for.
Towards the end of the review, I say that jellyfish and humans share a common ancestor that lived perhaps a billion years ago. There's plenty of debate about early animal evolution, but a billion years is probably too old--700 million or 800 million would have been better. Maybe I was thinking about fungi instead of jellyfish.
When I have a little more time today, I'll blog about some of the things that I think Dawkins should have included in his book but didn't.


Yesterday I blogged about how the National Park Service is selling a young-Earth creationist book about the Grand Canyon in its stores. Today the Washington Post wrote an article on the subject. It contains a response from the National Park Service, which I find pretty unbelievable. They claim that they are in fact reviewing the matter. The review was supposed to be done in February, but it's been delayed while lawyers at the Interior and Justice Departments "tackle the issue." No deadline is set for the decision, and the book will continue to be sold until one is made.
Tackle the issue? Do these folks really need an extra eight months (and counting) to recognize that the Grand Canyon is millions of years old, and was not formed in Noah's Flood?
The book has been moved from the science section to the inspirational section. But from what I know about it, it's not claiming to offer inspiration but facts. The intellectual cowardice continues.


David Appell points to some depressing news about how our government deals with science.
In August 2003, the Grand Canyon National Park Superintendent tried to block the sale of a book in National Park Service stores. The book claims that the Grand Canyon formed in Noah's Flood. None of the vagueness you hear from Intelligent Design folks--just hard-core young-Earth creationism, claiming that the planet is only a few thousand years old. The folks at National Park Service headquarters stopped the superintendent from pulling the book. Geologists cried foul, and NPS promised to review the situation. Meanwhile, the book remained for sale at NPS stores.
And then months passed with nothing. Today a public employees activist group that first publicized this sorry situation announced that it has documents showing that the administration has decided to let the book stay. In fact, there wasn't even any review.
I haven't seen any news pieces yet on this shamefulness, nor have I seen any statement from the National Park Service. From the information we have at hand at the moment, there's only one good conclusion to draw: your government is indifferent to even the most basic facts of science. If it doesn't care about something as well-established as the age of the Earth, you have to wonder what other science it is willing to ignore.


A lot of readers have commented on my recent post about a study that suggests we all share a common ancestor who lived 2,300 years ago. Some people doubted that isolated groups could share such a recent ancestry.
One of the study's authors, Steve Olson (also the author of the book Mapping Human History) sent me the following email yesterday:
"Ensuring a recent common ancestor doesn't take long-range migrations (although contact between the Polynesians and South Americans certainly speeds things up). All it really requires is that a person from one village occasionally mates with a person from an adjoining village; after that the power of exponential growth, and the dynamics of small worlds networks, take over. As for counterexamples, I've been looking for five years for examples of populations that were completely isolated, and I've decided that they're rare to the point of nonexistence. The Tasmanians are a possibility, but it's only 60 miles from Tasmania to Australia -- that no one made that trip in 9,000 years seems counterintuitive to me. And of course it only takes one person to link two genealogical networks, even though the amount of gene flow represented by that one person may be negligible (though I also think that gene flow in the past has been much more extensive and much more continuous than most people imagine)."


In March, I wrote a post on some tantalizing new findings about the secrets of human evolution lurking in our genome. In brief, researchers at the University of Pennsylvania studied a gene called MYH16 that helps build jaw muscles in primates. In our own lineage, the gene has mutated and is no longer active in jaw muscles. Perhaps not coincidentally, we have much smaller, weaker jaws than other apes. The researchers estimated that the gene shut down around 2.4 million years ago--right around the time when hominid brains began to expand. They suggested that shrinking jaw muscles opened up room in the hominid head for a larger brain.
It's a cool hypothesis, but it may not hold up. Scientists at Arizona State University have followed up on the initial study by analyzing much larger pieces of MYH16, both in humans and in other species. All told, they studied 25 times more DNA from the gene. In a paper in press at Molecular Biology and Evolution, they report finding a significantly different date for when the gene mutated. Instead of 2.4 million years ago, they get a much older date: 5.3 million years ago.
If that's true, then you can forget any significant link between the evolution of MYH16 and brain evolution. If the Arizona State team is right, the two events are separated by three million years. What's more, the jaws of hominids also remained relatively large after the mutation of MYH16.
The Arizona State researchers do point out an intriguing clue that may eventually lead to a solution to this paradox. The Penn team originally argued that the MYH16 gene became useless when a section of DNA in the middle of its sequence was accidentally deleted. Often, when this sort of deletion takes place, DNA-copying enzymes come to a screeching halt at the site of the mutation. With the gene only partly copied, it cannot be turned into a protein. But the Arizona State researchers found signs that the gene did not shut down entirely 5.3 million years ago. The DNA "downstream" from the mutation--in other words, beyond the point where the enzymes stopped copying the gene--has picked up mutations in a pattern that shows no sign of natural selection at work. That's what you'd expect from DNA that doesn't make a gene, since any change will have no effect for good or bad on its owner. But the upstream DNA--the part of the gene that could still be copied--told a different story. It showed signs of having undergone selection. So perhaps the mutation that occurred 5.3 million years ago didn't actually kill the gene, but just amputated it. What the surviving portion of MYH16 did (or still does) remains unknown.
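To give a flavor of how "signs of selection" can be read out of DNA at all--this is a generic, textbook-style illustration with made-up numbers, not the Arizona State team's actual analysis--here is the classic comparison of protein-changing to silent substitutions:

```python
# A generic illustration of one way to look for selection in a gene (my own
# sketch, not the Arizona State team's analysis). Compare the rate of
# protein-changing (nonsynonymous) substitutions to the rate of silent
# (synonymous) ones: a ratio near 1 looks neutral, a ratio well below 1
# suggests selection has been weeding out protein-changing mutations.

def dn_ds(nonsyn_changes, nonsyn_sites, syn_changes, syn_sites):
    dn = nonsyn_changes / nonsyn_sites    # nonsynonymous substitutions per site
    ds = syn_changes / syn_sites          # synonymous substitutions per site
    return dn / ds

# Hypothetical counts for the two halves of a gene split by a deletion.
upstream = dn_ds(nonsyn_changes=3, nonsyn_sites=600, syn_changes=8, syn_sites=200)
downstream = dn_ds(nonsyn_changes=27, nonsyn_sites=600, syn_changes=9, syn_sites=200)

print(f"upstream dN/dS   = {upstream:.2f}")    # ~0.12: consistent with selection
print(f"downstream dN/dS = {downstream:.2f}")  # ~1.00: consistent with neutral drift
```

With numbers like these, the upstream half looks like it was still doing a job worth protecting, while the downstream half was drifting freely--the same contrast the Arizona State team describes.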
I would wager that this new paper will unfortunately not attract much press. When scientists first come up with an attention-grabbing hypothesis, they're more likely to get a paper accepted to a high-profile journal, and more likely still to get written up by science writers like me. But follow-up work often ends up in the shadows.
That's a shame, because science is actually not made up of single studies that suddenly overturn everything that came before. It's more of a dialectic, as different groups of scientists search for new evidence in order to put hypotheses to new tests. Some hypotheses--such as the idea that chimpanzees are our closest living relatives--have become stronger over time. Others fall away. It would help if more people understood this process. Unfortunately, it seems that a lot of people think science is like building an elaborate sculpture out of glass. If someone discovers that a piece of research is wrong, then it seems as if the whole sculpture cracks and falls to the ground. Creationists are particularly fond of this tactic. They seize on research about evolution that goes against earlier research, and claim that the entire theory of evolution is a fraud. They conveniently ignore all points on which scientists agree. So, for example, the researchers who have published the new findings on MYH16 do not conclude that humans were intelligently designed, MYH16 and all. Instead, they argue that the gene mutated earlier than once believed, and that the full history of this gene remains to be revealed. Science is more like a sculpture made of clay than glass, continually being molded and reshaped to better reflect reality.
Correction, 10/16/04: Changed "ancestors" to "relatives."


Contempt is never wise in biology. The creature that you look down on as lowly, degenerate, or disgusting may actually turn out to be sophisticated, successful, and--in some cases--waiting to tell you a lot about yourself. That's certainly the case for lice.
The human body louse, Pediculus humanus, has two ways of making a living--either dwelling on the scalp and feeding on blood, or snuggling into our clothes and coming out once or twice a day to graze on our bodies. For lice, we humans are the world. They cannot live for more than a few hours away from our bodies. Only by crawling from one host to the next does their species escape extinction.
A group of louse specialists recently decided to find out where human lice came from. Have they been riding on our bodies since before we were human? A comparison of the lice that live on different primates shows that they certainly can be very loyal. If you draw an evolutionary tree of primates, and then draw a tree of their lice, they are almost identical. On the other hand, some lice can live on more than one species. And a side-by-side comparison of trees reveals that in some cases they don't form a perfect mirror. In other words, sometimes lice can make an evolutionary leap.
As the researchers report today in PLoS Biology, they compared human lice to the lice of other primates, looking at both their DNA and their anatomy. As earlier research had shown, they found a major split among lice species that live on apes and on monkeys and other primates. That reflects an ancient split in the primates themselves: our ape ancestors diverged from other primates 20-25 million years ago. The variation in louse DNA turns out to act like a sort of molecular clock, showing when they split into different lineages. The molecular clock puts the split between lice that live on humans and chimps at 5.6 million years ago--exquisitely close to the age that's been estimated for the split between humans and chimpanzees from studies of both DNA and fossils.
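To make the clock logic concrete, here is a minimal sketch of how a divergence date can be read off DNA differences. The numbers are invented for illustration, not taken from the louse paper: if mutations tick along at a roughly constant rate in each lineage, the time since two lineages split is the fraction of sites at which they differ divided by twice that rate.

```python
# A minimal sketch of molecular-clock dating. The divergence value and the clock
# rate below are hypothetical, chosen only to illustrate the arithmetic.

def divergence_time_myr(diff_per_site: float, subs_per_site_per_myr: float) -> float:
    """Estimate the time since two lineages split, in millions of years.

    Differences accumulate along both branches, hence the factor of two.
    """
    return diff_per_site / (2.0 * subs_per_site_per_myr)

# e.g. 11.2% sequence divergence between human and chimp lice,
# with a clock of 0.01 substitutions per site per lineage per million years
print(divergence_time_myr(0.112, 0.01))  # -> 5.6 million years
```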
The research suggested that we've carried our lice for millions of years, since before the time of our common ancestor with chimpanzees. But after we parted company with the chimps, the lice have a remarkable story to tell. Human lice split into two lineages. One lineage is found around the world. The second is found only in North America. The worldwide branch all share a common ancestor that lived 540,000 years ago. The North American branch shares a common ancestor that lived 150,000 years ago. And finally, the two branches share a far older common ancestor, which lived about 1,180,000 years ago.
So how did these two strains of the same species become separated and then wind up back on our bodies? The researchers argue that human evolution holds the key. Paleoanthropologists and geneticists still debate the origins of modern humans, but the rough outlines are becoming clear. The first hominids that were tall, big-brained bipeds--that weren't just upright apes, in other words--emerged about 2 million years ago. They very quickly began to spread out of their birthplace in Africa to other parts of the world. They were in the Caucasus Mountains 1.8 million years ago and in China 1.66 million years ago. These hominids are generally called Homo erectus, although they may well have consisted of several species, rather than one. And the ranks of Asian Homo erectus may have been boosted by fresh migrations of African hominids when ecological conditions favored another journey out of Africa. But it does appear that Asian populations became pretty isolated from African hominids. The fossils of Homo erectus from a few hundred thousand years ago look pretty distinct from both African hominids and Neanderthals, with very thick skull walls and other peculiar anatomical details. Thirty years ago, most paleoanthropologists would have told you that these Asian hominids probably were the ancestors of living Asians. But that's not what the evidence gathered since then suggests. Instead, it now looks pretty clear that Homo erectus was a species quite distinct from Homo sapiens, and that it became extinct perhaps as recently as 30,000 years ago.
Our own roots can be found in Africa. The oldest clear-cut examples of Homo sapiens fossils, found in Ethiopia, date back 160,000 years. By about 100,000 years ago, our species was beginning to diverge into different populations, and these differences can still be found in the DNA of various African groups, such as the Khoisan of Southern Africa (sometimes called bushmen). By 50,000 years ago, humans were moving out of Africa. In Europe, they moved into territory that Neanderthals and their ancestors had occupied for some 300,000 years. Neanderthals disappeared by 28,000 years ago. They seem to have been driven into mountainous refuges by the booming population of humans. The story in Asia has always been a bit fuzzier. Humans appear to have gotten to Australia by at least 40,000 years ago, and perhaps much earlier. By 15,000 years ago, some Asian populations of Homo sapiens made their way into the New World through Alaska. Exactly where Homo erectus was living when humans arrived in Asia, and how long it survived, has never been clear. It hasn't even been clear whether the two species came into contact or not.
You may be able to guess how the louse scientists interpret the data from their parasitic charges. When Homo erectus moved into Asia and became isolated from our own ancestors, their lice became isolated as well. When our own ancestors burst out of Africa around 50,000 years ago, they carried the African lice with them. The most sensational part of the story comes when humans arrive in Asia. The researchers argue that a population of humans encountered Homo erectus and picked up their lice. Their descendants then passed into North America, where they--and their lice--live today. One of the many intriguing implications of this research is that the contact may have occurred in one limited region--the same region where Native Americans originated in Asia.
This is not the first case where our parasites have preserved our own hidden history. Our tapeworms, for example, can tell us about how our ancestors began eating meat. Malaria reveals how agriculture brought new diseases to humans over the past few thousand years. Helicobacter pylori, the bacteria that trigger stomach ulcers, maps the spread of modern humans. (I go into more detail on some of these examples in my book Parasite Rex.) And the lice probably have more to tell us.
For example, the scientists can't say for sure how humans picked up Homo erectus's lice. The contact definitely had to be intimate. But did it occur when humans drove Homo erectus away from their kills? Or did these two species make love, rather than war? Although the genetic evidence indicates that Homo erectus could not have contributed a significant number of genes to our species, it's possible that they contributed a few. The answer to this question may help show how Homo erectus became extinct, leaving us as the sole hominids left on Earth.
One way to test that possibility will be to look at the other species of lice that live on humans--crabs, or Pthirus pubis. If our ancestors got body lice from Homo erectus during sex, they probably got crabs as well. Somehow, though, I'm guessing that putting together a global collection of crabs may take a little bit longer than the body lice. But it will definitely be worth the wait.
UPDATE: 10/4 9:50 PM: A question occurs to me: why didn't we pick up Neanderthal lice?
UPDATE: 10/5 6:20 PM: The link to the paper is fixed (and the paper is free--bless PLOS!)


Congratulations to Linda Buck and Richard Axel for winning the Nobel Prize for Medicine today. They won for their pioneering work on the 600 or so receptors that we use to smell. As is so often the case these days, the research that wins people the Nobel for Medicine also reveals a lot about our evolution. This February, for example, Buck published a paper in the Proceedings of the National Academy of Sciences, in which she and her colleagues charted the evolutionary history of human olfactory receptors.
As Buck explains, it appears that many olfactory receptor genes mutated beyond repair in our lineage as we came to rely more on sight than smell. Only about half of the olfactory receptor genes in the human genome actually produce working proteins. (You can find working versions of these genes in other animals). Other researchers, however, have found that some human olfactory genes have undergone strong natural selection, which suggests that it's still a good idea to be able to sniff out a piece of rotten meat. (If you want more details on this line of research, you can read an essay I wrote a couple years ago for Natural History.)
And yet, somehow creationists and their ilk keep a straight face as they continue to tell us that evolution is a dying myth. In this month's Wired, for example, techno-know-nothing George Gilder declares "Darwinian materialism is an embarrassing cartoon of modern science." When Gilder gets to run the Nobel Prize committee, I guess he can take back Buck's medal.


Every now and then you come across a scientific hypothesis that is so elegant and powerful in its ability to explain that it just feels right. Yet that doesn't automatically make it right. Even when an elegant hypothesis gets support from experiments, it's not time to declare victory. This is especially true in biology, where causes and effects are all gloriously tangled up with one another. It can take a long time to undo the tangle, and hacking away at it, Gordian-style, won't help get to the answer any faster.
I was reminded of this while reading Andrew Brown's review of A Reason For Everything by Marek Kohn in the Guardian. The book sounds fascinating. Kohn recounts how a small group of English biologists shaped the course of modern evolutionary biology--in particular, by pondering how adaptation through natural selection could account for just about everything in nature. One of the foremost of these thinkers was William Hamilton, who died a few years ago. Brown writes that "even the colours of the leaves on autumn trees around the grave of Bill Hamilton have been given a meaning by evolution - they are so vivid in order to warn parasites that the tree is healthy enough to repel them."
I wrote about Hamilton's leaf-signal hypothesis here. It is one of those beautifully elegant hypotheses, and some studies have even supported Hamilton's idea that the brilliant colors of autumn evolved as a way for trees to tell insects to buzz off. But readers should not have come away from my post thinking, "Well, that sews that question up."
Here's why. H. Martin Schaefer and David M. Wilkinson have written a review of the Hamilton hypothesis which has just gone to press in Trends in Ecology and Evolution. They offer a lot of evidence suggesting that Hamilton may have been wrong--or at least may not have captured the whole picture. They show how a completely different process may be responsible for fall colors. Trees may produce them as they prepare for winter.
When leaves die, their nitrogen, phosphorus, and other nutrients get shipped back into their tree. It's a crucial, carefully orchestrated stage in a tree's life; it will survive on these reserves through the winter. In order to pump the nutrients back into the branches, the leaves need a lot of energy, which they have to generate with photosynthesis. That's where the pigments may come in. Pigments act as a sunscreen for leaves, shielding them from harmful UV rays that can shut down their photosynthetic machinery. What's more, as the leaves ship their nutrients back to the tree, they may produce harmful free radicals as a byproduct. It just so happens that pigments are veritable magnets for free radicals.
If the authors are right, then the evidence that seems to support Hamilton's hypothesis might not actually support it at all. For example, researchers have found that birch trees that display brighter leaves grow more vigorously the following year. You could argue that these trees did so well because they could create such strong warning signals, which warded off insects. But perhaps those bright leaves are just a sign that these trees were doing a particularly good job of protecting their leaves as they stored nutrients for the winter--nutrients that made them more vigorous the following spring.
Fortunately, evolutionary biologists can do more than just come up with beautiful hypotheses. They can test them. Schaefer and Wilkinson lay out a list of experiments that could discriminate between the leaf-signal hypothesis and the winter-storage hypothesis. It's even possible that evolution has produced fall foliage in order to both ward off insects and ship nutrients out of the leaf. As beautiful as any one hypothesis may be, it's the interplay of different ideas and the experiments that put them to the test that's most beautiful of all. It would not bother Hamilton one bit, I suspect, if it turned out that the leaves that fell on his grave had taken on their autumn colors for an entirely different purpose.



Evolution works on different scales. In a single day, HIV's genetic code changes as it adapts to our ever-adapting immune system. Over the course of decades, the virus can make a successful leap from one species to another (from chimpanzees to humans, for example). Over a few thousand years, humans have adapted to agriculture--an adult tolerance to the lactose in milk, for example. Over a couple million years, the brains of our hominid ancestors have nearly doubled. Sometimes scientists distinguish between these scales by calling small-scale change microevolution and large-scale change macroevolution. Creationists have seized on these terms and used them to build one of their central canards: that they accept microevolution but can then reject macroevolution. That's a bit like accepting microeconomics--how households and firms make decisions and interact in markets--but then denying macroeconomics--how entire societies produce goods, how inflation rises and falls, and so on. Evolutionary biologists debate fiercely about how macroevolutionary change emerges from microevolution. But they continue to find abundant evidence that the two are a package deal.
I was reminded of the interwoven scales of evolution last week when, just before leaving on vacation, I read a wonderful new paper about how the beaks of baby birds develop. As I drove off sans laptop, I was sure that it would be heavily blogged and reported while I was away. But when I returned I found almost complete silence. So I thought I would do my small part to keep this research from disappearing into the data smog.
After all, these baby birds are not just any birds. They belong to a group of some 13 species collectively known as Darwin's finches. Charles Darwin first encountered the birds in 1835 when he visited the Galapagos Islands. He thought at first that they belonged to various groups of birds, such as wrens and blackbirds. After all, their beaks were dramatically different from one another--some blunt, some narrow, some curved. Not surprisingly, the birds use these different beaks to get different kinds of food--cracking nuts, drinking nectar, and so on. Darwin was shocked to learn later that all of the birds were finches. He struggled to understand why such an unparalleled diversity of finches existed only on a remote archipelago. That struggle helped lead him to his theory of evolution by natural selection.
As Jonathan Weiner recounts in his excellent The Beak of the Finch, later generations of biologists came back to the Galapagos to study the birds. Living in near isolation, they are a natural experiment in evolution. Today the leading experts on the finches are Peter and Rosemary Grant of Princeton University. They and their colleagues have shown that the birds originate from a few settlers who arrived on the islands two to three million years ago. These founders gave rise to different lineages, each of which adapted to the islands with a special beak shape of its own. This evolutionary change is remarkably fast compared to most other animals, and it continues today. As droughts and heavy rains hit the islands every few years, natural selection favors different beak sizes. Meanwhile, populations of the finches become separated from one another as they develop unique mating songs. Sometimes this divergence produces a new species. In other cases, closely related species may interbreed and fuse back together.
The Grants wondered what sort of mutations were fueling this extraordinary evolution of beaks on the Galapagos. They joined forces with developmental biologists at Harvard to study the genes that build the finch body within the egg--in particular, genes known as growth factors that stimulate cells to divide and differentiate. They found that a gene called bone morphogenetic protein 4 (BMP-4) played a key role. Big-beaked birds such as the ground finch made a lot of BMP-4 early on in development in the cells of their jaws. The slender-beaked cactus finch produced less BMP-4, and did so later. Each species they studied had its own unique pattern of BMP-4 activity, while the other growth factors behaved pretty much the same.
BMP-4 has a number at the end because it belongs to a family of genes. Originally, there was one BMP-like gene, and at some point it was accidentally duplicated. Those copies were duplicated again and again. The copies evolved differences in their sequences, and some eventually mutated into gibberish. It turns out that the first gene of this family evolved a long time ago. A huge range of animals carry BMP-like genes, from vertebrates to sea urchins to insects. The genes are so similar that you can destroy the insect version of BMP-4 in a fruit fly, replace it with a frog's BMP-4 gene, and the frog gene will cooperate perfectly well to build a fly. The simplest explanation for this similarity is that all these animals (known as bilaterians) inherited their BMP-like genes from a common ancestor some 700 million years ago. In early bilaterians, BMP-like genes probably helped lay out the front and back of a developing body. In vertebrates, these genes are active along the belly side, where the digestive system grows. Insects run their digestive system along their back, and in insect larvae, that's where BMP-like genes are active.
These BMP genes belong to an entire network of body-building genes that have survived for 700 million years. Some of them switch on BMP genes, while others block their activity. And BMP genes in turn switch on and shut down other genes. This network has been borrowed many times in the course of evolution to build new structures in animal bodies. As vertebrates evolved skeletons made of bone, the BMP network took on a new role helping to build it. (BMP encourages bone to grow, and also to heal--making it the object of a lot of interest in medical circles.) But its role was not limited to ribs and vertebrae. As new sorts of vertebrates evolved, the BMP network was coopted yet again. In birds, for example, feathers grow under the guidance of the BMP network. And so too, the Grants and their colleagues have found, do bird beaks.
So here we have a network of genes that has played a major role in evolution at many scales. It emerged as part of an animal toolkit, which could be used to construct bodies as different as that of a fly and a fish. It was then borrowed and redeployed in new ways, building new structures. And because this network controls many other genes, a small tweak to it can produce some significant changes even within a single species. Alter the timing of BMP ever so slightly in a finch's developing beak, and it may be prepared to survive a drought by cracking hard seeds. Thanks to the relative ease with which beaks can evolve, these sorts of generation-to-generation changes have helped Darwin's finches explode into 13 new species over the past couple million years. Micro and macro, in other words, are bound together into one extraordinary whole.


Today scientists took another step towards creating the sort of simple life forms that may have been the first inhabitants of Earth. I wrote a feature for the June issue of Discover about this group, led by Jack Szostak at Harvard Medical School. Szostak and his colleagues suspect that life started out not with DNA, RNA, and proteins, but just RNA. This primordial RNA not only carried life's genetic code, but also assembled new RNA molecules and did other biochemical jobs. Szostak and others have created conditions in their labs under which today's RNA can evolve into a form able to carry out these primordial tasks. So far, their evolved RNA molecules can assemble short fragments of RNA, using another RNA molecule as a template.
RNA-based life was presumably not just loose genetic goop, but organized into primordial cells. Last year Szostak and his students demonstrated that RNA can spontaneously get inside bubbles made of fatty acids--protocells, in other words. These protocells can grow by absorbing new fatty acids. When Szostak's team pushed the protocells through microscopic pores (as might happen in seafloor rock), the bigger protocells split into smaller ones, each with RNA inside.
Szostak's protocells have to meet three standards in order to become life. They have to carry a genetic code. They have to be able to grow and reproduce. And as they reproduce, they have to evolve. Szostak has already made some important steps towards the first and second standards, and today he and his colleagues moved towards the third. In Science, they report that their protocells compete for the fatty acids necessary to build membranes. They mixed together protocells containing RNA with empty shells. The mere presence of the RNA in a protocell altered the physical properties of its membrane, putting it under tension. The empty shells, on the other hand, were more relaxed. As a result, the protocells with RNA pulled fatty acids away from nearby empty ones. The protocells with RNA got bigger, while the empty ones got smaller.
Szostak and his colleagues suggest that this competition for membrane material may have driven the evolution of RNA that could replicate itself fast. The faster RNA could replicate itself in a protocell, the more fatty acids it could grab from slower neighbors. And by grabbing fatty acids, it could grow faster and divide faster.
What's interesting about this theory is how simple it is. Szostak isn't bothering with an army of RNA molecules, some specializing in supporting the cell's structure, some specializing in building the membrane, and some specializing in producing new RNA. Even an incredibly simple RNA-based organism that could replicate itself might be able to take advantage of the power of natural selection.


While doing some research on human evolution, I stumbled across the web site for a wonderful meeting that was held in March at San Diego to celebrate the sequencing of the chimpanzee genome. You can watch the lectures here. By comparing the chimp genome to the human genome, scientists are discovering exactly how we evolved into the peculiar species that we are. If you find yourself in an argument with someone who claims that evolution has nothing to do with cutting edge science, plunk them down in front of these talks. Without evolution, genomics is gibberish.
(Note--Oliver Baker informs me that this page won't work in Firefox. IE and Safari are fine.)


Is Intelligent Design the same thing as creationism? The people who back Intelligent Design have spilled an awful lot of ink saying they're different. Even self-proclaimed creationists have tried to claim a difference. Somehow, both of these camps think that any confusion between the two is evidence of the lazy arrogance of evolutionists. In fact, the evidence points towards Intelligent Design being just a bit of clever repackaging to get creationist nonsense into the classroom. (See this useful article.)
A little clarity has emerged over at the new Sarkar Lab Weblog. They've created a "Creationist Faculty," described as a "list of faculty who have spoken in favor of creationism in its traditional form or as intelligent design." They add, "Please feel free to nominate members to this Hall of Shame."
Today they announced an addition--William Dembski, the loudest Intelligent Design advocate out there. Nominated by? Dembski.
Question asked and answered.
Update, 9/1/04: See also this article by Chris Mooney on the political similarities of the creationist and ID movements.


If you took a census of life on Earth, you'd probably find that the majority of life forms looked like this. It's a virus known as a bacteriophage, which lives exclusively in bacteria. There are about 10 million phages in every milliliter of coastal sea water. All told, scientists put the total number of bacteriophages at a million trillion trillion (10 to the 30th power). Bacteriophages not only make up the majority of life forms, but they are believed to have existed just about since life itself began. Since then, they have been evolving along with their hosts, and even making much of their hosts' evolution possible by shuttling genes from one host to another. Thanks in large part to bacteriophages, more and more bacteria are acquiring the genes they need to defeat antibiotics. Bacteriophages also kill off a huge portion of ocean bacteria that consume greenhouse gases. If you suddenly rid the world of all bacteriophages, the global climate would lurch out of whack.
It may seem strange that the world's most successful life form looks a bit like the ship-drilling robots that swarmed through The Matrix. But the fact is that the bacteriophage is nanotechnology of the most elegant, most deadly sort. To get a real appreciation of its mechanical cool, check out the movie from which this picture comes. (Big and small Quicktime.) The movie is based on the awesome work of Michael Rossmann of Purdue University and his colleagues. (Their most recent paper appears in the latest issue of Cell, along with even more cool movies.) Rossmann and company have teased apart pieces of a bacteriophage and have gotten a better understanding of how they work together. The phage extends six delicate legs in order to make contact with its host, E. coli. Each leg docks on one of the bacterium's receptors, giving the phage the signal that it is time to inject its DNA. The legs bend so that its body pulls towards the bacterium. The pulling motion makes the base of the phage begin to spin like the barrel of a lock. A set of shorter legs, previously held flush against the base of the virus, unfold so that they can clamp onto the microbe's membrane. The phage's sheath, shown here in green, shrinks as its spiralling proteins slide over one another. A hidden tube emerges, which in turn pushes out a needle, which rams into the side of the bacterium. The needle injects molecules that can eat away at the tough inner wall of the microbe, and the tube then pushes all the way into the microbe's interior, where it unloads the virus's DNA.
It has taken a while, historically speaking, for scientists to come to appreciate just how sophisticated parasites such as bacteriophages can be, a subject I explored at length in my book Parasite Rex. The best human-designed nanotech pales in comparison to bacteriophages, a fact that hasn't been lost on scientists. Some have been using bacteriophages to build nanowires and other circuitry. Others see them as the best hope for gene therapy, if they can be engineered to infect humans rather than bacteria. In both cases, evolution must play a central role. By allowing the phages to mutate and then selecting the viruses that do the best job at whatever task the scientists choose, the scientists will be able to let evolution design nanotechnology for them. From the depths of deep time, one of the next great advances in technology may come. And perhaps, I hope, some more work for these viruses in Hollywood.


Spiteful bacteria. Two words you probably haven't heard together. Then again, you probably haven't heard of altruistic bacteria either, but both sorts of microbes are out there--and in many cases in you.
Bacteria lead marvelously complicated social lives. As a group of University of Edinburgh biologists reported today in Nature, a nasty bug called Pseudomonas aeruginosa, which causes lung infections, dedicates a lot of energy to helping its fellow P. aeruginosa. The microbes need iron, which is hard for them to find in a usable form in our bodies. To overcome the shortage, P. aeruginosa can release special molecules called siderophores that snatch up iron compounds and make them palatable to the microbe. It takes a lot of energy for the bacteria to make siderophores, and they aren't guaranteed a return for the investment. Once a siderophore harvests some iron, any P. aeruginosa that happens to be near it can gulp it down.
At first glance, this generosity shouldn't exist. Microbes that put a lot of energy into helping other microbes should become extinct--or, more exactly, the genes that produce generosity in them should become extinct. Biologists have discovered mutant P. aeruginosa that cheat--they don't produce siderophores but still suck up siderophores made by the do-gooders. It might seem as if the cheaters should wipe the do-gooders off the face of the Earth. The solution to this sort of puzzle--or at least one solution--is helping out family. Closely related microbes share the same genes. If a relative scoops up the iron and can reproduce, that's all the same for your genes.
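A back-of-the-envelope way to see that logic is Hamilton's rule: a costly helping trait can spread when relatedness times benefit exceeds cost. Here is a minimal sketch with made-up numbers; nothing in it comes from the Edinburgh experiments themselves.

```python
# Hamilton's rule: a gene for a costly helping behavior (like making siderophores)
# is favored when r * b > c. All values below are purely illustrative.

def helping_favored(r: float, b: float, c: float) -> bool:
    """r = relatedness to beneficiaries, b = benefit to them, c = cost to the helper."""
    return r * b > c

print(helping_favored(r=1.0, b=0.5, c=0.3))    # clonal neighbors: True, helping can spread
print(helping_favored(r=0.25, b=0.5, c=0.3))   # mostly unrelated neighbors: False
```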
To test this hypothesis, the Edinburgh team ran an experiment. They filled twelve beakers with bacteria they produced from a single clone. While the bacteria were all closely related, half were cheaters and half were do-gooders. They let the bacteria feed, multiply, and compete with one another. Then they mixed the beakers together, and randomly chose some bacteria to start a new colony in twelve new beakers. More successful bacteria gradually became more common as they started new rounds. In the end, the researchers found--as they predicted--that these close relatives evolved into cooperators. The do-gooders wound up making up nearly 100% of the population.
That didn't happen when the researchers put together two different clones in the same beakers. When the bacteria had less chance of helping relatives, the do-gooders wound up making up less than half of the population.
But the biologists suspected that even families could turn on themselves. Mathematical models suggest that the benefit of helping relatives drops if relatives are crammed together too closely. Surrounded entirely by kin, the bacteria never get a free lunch--siderophores produced by other, unrelated bacteria. Instead, all the benefits of consuming iron are offset by the cost of producing the siderophores. In the end, the benefit doesn't justify the cost.
The Edinburgh team came up with a clever way to test this prediction out. They ran the same colony experiment as before, but now they didn't take a random sample from the mixed beakers to start a new colony. Instead, they took a fixed number of bacteria from each beaker. This new procedure meant that there was no longer a benefit to being in a beaker where the bacteria were reproducing faster than the bacteria in other beakers. The only way to survive to the next round of the experiment was to outcompete the other bacteria in your own beaker--even if they were your own relatives. The researchers discovered that when closely related bacteria were forced to compete this way, utopia disappeared. Instead, the ratio of cheaters to do-gooders remained about where it started, around 50:50.
The evolutionary logic of altruism also has a dark side, known as spite, which the Edinburgh team has explored in a paper in press at the Journal of Evolutionary Biology. (They've posted a pdf on their web site.) It's theoretically possible that you can help out your relatives (and even yourself) by doing harm to unrelated members of your same species, even if you have to pay a cost to do it. You might even die in the process, but if you could wreak enough havoc with your competitors, this sort of behavior could be favored by evolution.
It turns out that many bacteria are spiteful in precisely this way. They produce antibiotics known as bacteriocins that are poisonous to their own species. These poisons take a lot of energy to make, and the bacteria often die as they release them. But these spiteful bacteria don't kill their own kin. Each strain of bacteria that makes a bacteriocin also makes an antidote to that particular kind of bacteriocin. Obviously, evolution won't favor a lineage of microbes that all blow themselves up. But it may encourage a certain balance of spite--a balance that will depend on the particular conditions in which the bacteria evolve.
Understanding the evolution of spiteful and altruistic bacteria will help scientists come up with new ways to fight diseases. (The altruism of P. aeruginosa can make life hell for people with cystic fibrosis, because the bacteria cooperate to rob a person of the iron in his or her lungs.) But bacteria can serve as a model for other organisms who can be altruistic or spiteful--like us. While some glib sociobiologists may see a link between a spiteful self-destructive microbe and a suicide bomber, the analogy is both disgusting and stupid. Yet the same evolutionary calculus keeps playing out in the behavior of bacteria and people alike.
(Update 6.27.04: Did I say siderophiles? I meant siderophores...)

Marriage, we're told by the president and a lot of other people, can only be between one man and one woman. Anything else would go against thousands of years of tradition and nature itself. If the president's DNA could talk, I think it might disagree.
In the 1980s, geneticists began to study variations in human DNA to learn about the origin of our species. They paid particular attention to the genes carried by mitochondria, the fuel-producing factories of the cell. Each mitochondrion carries its own small set of genes, a peculiarity that has its origins over two billion years ago, when our single-celled ancestors engulfed oxygen-breathing bacteria. When a sperm fertilizes an egg, it injects its nuclear DNA, but almost never manages to deliver its mitochondria. So the hundreds of mitochondria in the egg become the mitochondria in every cell of the person that egg grows up to be. Your mitochondrial DNA is a perfect copy of your mother's mitochondrial DNA, her mother's, and so on back through history. The only differences emerge when the mitochondrial DNA mutates, which it does at a fairly regular rate. A mother with a mutation in her mitochondria will pass it down to her children, and her daughters will pass it down to their children in turn. Scientists realized that they might be able to use these distinctive mutations to organize living humans into a single grand genealogy, which could shed light on the woman whose mitochondria we all share--a woman who was nicknamed Mitochondrial Eve.
Alan Wilson of the University of California and his colleagues gathered DNA from 147 individuals representing Africa, Asia, Australia, Europe, and New Guinea. They calculated the simplest evolutionary tree that could account for the patterns they saw. If four people shared an unusual mutation, for example, it was likely that they inherited it from a common female ancestor, rather than the mutation cropping up independently in four separate branches. Wilson's team drew a tree in which almost all of the branches from all five continents joined to a common ancestor. But seven other individuals formed a second major branch. All seven of these people were of African descent. Just as significantly, the African branches of the tree had acquired twice as many mutations as the branches from Asia and Europe. The simplest interpretation of the data was that humans originated in Africa, and that after some period of time one branch of Africans spread out to the other continents.
Despite the diversity of their subjects, Wilson's team found relatively little variation in their mitochondrial DNA. Although their subjects represented the corners of the globe, they had less variation in their genes than a few thousand chimpanzees that live in a single forest in the Ivory Coast. This low variation suggests that living humans all descend from a common ancestor that lived relatively recently. Wilson's team went so far as to estimate when that common ancestor lived. Since some parts of mitochondrial DNA mutate at a relatively regular pace, they can act like a molecular clock. Wilson and his colleagues concluded that all living humans inherited their mitochondrial DNA from a woman who lived approximately 200,000 years ago.
The first studies by Wilson and others on mitochondrial DNA turned out to be less than bulletproof. They had not gathered enough data to eliminate the possibility that humans might have originated in Asia rather than Africa. Wilson's students continued to collect more DNA samples from a wider range of ethnic groups. Other researchers tried studying other segments of mitochondrial DNA. Today they have sequenced the entire mitochondrial genome, and the data still points to a recent ancestor in Africa. All mitochondrial DNA, it now appears, came from a single individual who lived 160,000 years ago.
More recently, men offered their own genetic clues. Men pass down a Y chromosome to their sons, which remains almost completely unchanged in the process. Y chromosomes are harder to study than mitochondrial DNA (in part because each cell has only one Y chromosome but thousands of mitochondria). But thanks to some smart lab work, scientists began drawing the Y-chromosome tree. They also found that all Y chromosomes on Earth can be traced back to a recent ancestor in Africa. But instead of 160,000 years, the age of "mitochondrial Eve," they found that their "Y-chromosome Adam" lived about 60,000 years ago.
This discrepancy may seem bizarre. How can our male and female ancestors have lived some hundred thousand years apart? Different genes have different histories. One gene may sweep very quickly through an entire species, while another one takes much longer to spread.
In 2001 I wrote an essay on this odd state of affairs for Natural History. At the time, scientists weren't sure just how real the discrepancy was. After all, both estimates still had healthy margins of error. If mitochondrial Eve was younger and Y-chromosome Adam was older, they might have missed each other by only a few thousand years. On the other hand, if the gap was real, there were a few possible explanations. In one scenario, a boy 60,000 years ago was born with a new mutation on his Y chromosome. When he grew up, its genes helped him reproduce much more successfully than men carrying other Y chromosomes, and his sons inherited his advantage. Thanks to natural selection, his chromosome became more common at a rapid rate, until it was the only Y chromosome left in our species. (This selective sweep might have been just the last in a long line of sweeps.)
Now comes a fascinating new paper in press at Molecular Biology and Evolution. Scientists at the University of Arizona suspected that some of the confusion over Adam and Eve might come from comparing the results of separate studies on the Y chromosome and mitochondrial DNA. One study might look at one set of men from one set of ethnic backgrounds. Another study might look at a different set of women from a different set of backgrounds. Comparing the studies might be like comparing apples and oranges. It would be better, the Arizona team decided, to study Y chromosomes and mitochondrial DNA all taken from the same people. Obviously, those people had to be men. The researchers collected DNA from men belonging to three populations--25 Khoisan from Southern Africa, 24 Khalks from Mongolia, and 24 highland Papuan New Guineans. Their ancestors branched off from one another tens of thousands of years ago.
The results they found were surprisingly consistent: the woman who bequeathed each set of men their mitochondrial DNA was twice as old as the man whose Y chromosome they shared. But the ages of Adam and Eve were different depending on which group of men the scientists studied. The Khoisan Adam lived 74,000 years ago, and the Khoisan Eve lived 176,500 years ago. But the Mongolian and New Guinean ancestors were both much younger--Adam averaged 48,000 years old and Eve 93,000 years.
You wouldn't expect these different ages if a single Y chromosome had been favored by natural selection, the Arizona team argues. Instead, they are struck by the fact that the Khoisan represent one of the oldest lineages of living humans, while Mongolians and New Guineans descend from younger populations of immigrants who left Africa around 50,000 years ago. The older people have an older Adam and Eve, and the younger people have a younger one. The researchers argue that some process has been steadily skewing the age of Adam relative to Eve in every human population.
Now here's where things may get a little sticky for the "one-man-one-woman-is-traditional-and-natural" camp. The explanation the Arizona scientists favor for their results is polygyny--two or more women having children with a single man. To understand why, imagine an island with 1,000 women and 1,000 men, all married in monogamous pairs, just as their parents were, and their grandparents, and so on back to the days of the first settlers on the island. Let's say that if you trace back the Y chromosomes in the men, you'd find a common ancestor 2,000 years ago. Now imagine that the 1,000 women are all bearing children again, but this time only 100 men are the fathers. You'd expect that the common ancestor of this smaller group of men lived much more recently than the common ancestor of all 1,000 men.
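You can see the effect of a smaller pool of fathers in a toy simulation. The sketch below is my own illustration, not the Arizona team's model: it traces a sample of Y-chromosome lineages back in time, letting each lineage pick a father at random from the breeding pool each generation, and merging lineages that pick the same father. Shrinking the pool of fathers makes the sample find its common ancestor much sooner.

```python
import random

# Toy coalescent: how many generations back until a sample of Y chromosomes
# merges into a single ancestor, given how many men father children each
# generation? Purely illustrative numbers; not the model used in the study.

def generations_to_common_ancestor(sample_size: int, n_fathers: int) -> int:
    lineages = set(range(sample_size))        # distinct sampled lineages
    generations = 0
    while len(lineages) > 1:
        # each surviving lineage picks its father; shared picks coalesce
        lineages = {random.randrange(n_fathers) for _ in lineages}
        generations += 1
    return generations

random.seed(1)
for n_fathers in (1000, 100):                 # monogamy vs. strong polygyny
    trials = [generations_to_common_ancestor(20, n_fathers) for _ in range(100)]
    print(n_fathers, sum(trials) // len(trials))
# With only 100 fathers per generation, the common ancestor sits roughly
# ten times closer to the present.
```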
Scientists have proposed before that humans have a history of polygyny (our sperm, for example, looks like the sperm of polygynous apes and monkeys). But with these new DNA results, the Arizona researchers have made a powerful case that polygyny has been common for tens of thousands of years across the Old World. It's possible that polygyny was an open institution for much of that time, or that secret trysts made it a reality that few would acknowledge. What's much less possible is that monogamy has been the status quo for 50,000 years.
People are perfectly entitled to disagree over what sort of marriage is best for children or society. But if you want to bring nature or tradition into the argument, you'd better be sure you know what nature and tradition have to say on the subject.


For several decades, evolutionary biologists have been trying to figure out the forces that set this balance. It appears that they come down to a tug of war between competing interests. Imagine a species in which a freakish mutation makes females give birth to lots and lots of daughters. If you're a male, suddenly your chances of reproducing look very good--certainly better than those of all those females. Now imagine that some of these lucky males acquire mutations that make them father more sons than daughters. This son-favoring mutation would spread because of the advantage to being male. In time, these mutations would tip the balance of the sexes over to the males. The now-common males would have less chance of producing offspring than the now-rare females. The advantage shifts to the females. Over time, these opposite forces pull the ratio back and forth until they settle down into an equilibrium.
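The frequency-dependent payoff at the heart of that tug of war is easy to put into numbers. Every offspring has exactly one mother and one father, so each sex collectively accounts for the same number of matings, and whichever sex is rarer does better per individual. A minimal sketch, with invented numbers:

```python
# The rarer sex wins: expected matings per individual is the number of offspring
# divided by how many individuals of that sex exist. Numbers are invented.

def matings_per_individual(n_males: int, n_females: int, n_offspring: int):
    return n_offspring / n_males, n_offspring / n_females

print(matings_per_individual(200, 800, 1000))  # (5.0, 1.25): sons are the better bet
print(matings_per_individual(800, 200, 1000))  # (1.25, 5.0): now daughters are favored
# Only at a 50:50 ratio do sons and daughters pay off equally--the equilibrium.
```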
Sometimes, though, the ratio of males to females lurches out of balance. Some insects, for example, only give birth to daughters. That's because a third player has entered the tug of war--a bacterium called Wolbachia. Wolbachia lives in animal cells, and so the only way it can survive beyond the life of a host is to get into its eggs, which then grow into adults. Since Wolbachia cannot fit into sperm, males are useless to it. And so it has evolved a number of tools that it uses to force its female hosts to give birth only to daughters.
But biologists have also noticed that in some situations other animals--including humans--can give birth to an overabundance of sons or daughters. In the early 1970s, Robert Trivers and Dan Willard, both then at Harvard, wondered if mothers might be able to control the sex ratio of their offspring to boost their own reproductive success. They imagined a species, such as deer, in which relatively few males mated with most females. If a doe gives birth to a healthy male, he is likely to grow up into a healthy buck that has a good chance of producing lots of grandchildren for the doe. In fact, he would be able to give her far more grandchildren than even the most successful daughter could. He could impregnate lots of mates, while a daughter could produce only a couple offspring a year, which she would then have to nurse. So if the doe is in good health and can give birth to strong offspring, she would do best to produce sons.
On the other hand, if this doe gives birth to a male in poor condition, he may be unable to compete with other males, and his chances of reproducing fall to zero. Since most females that live to adulthood give birth to at least some offspring, it would make more sense to give birth to a daughter rather than a son in poor condition.
Trivers and Willard speculated that if a mother could somehow gauge the prospects of her offspring, she might manipulate their sex ratio to her own evolutionary advantage. In bad times, she'd produce females, and in good times she'd produce males.
Since Trivers and Willard first published their idea in 1973, scientists have tested it in hundreds of studies. Some of the results have been quite powerful. Scientists moved a bird known as the Seychelles warbler from one habitat to another and measured the sex ratio of their chicks. In places with lots of food, the birds produced lots of daughters, which stayed at the nest to help raise their parents' later chicks. In places with little food, the ratio swung in favor of sons, which flew off in search of new territory. But the results have been far from clear-cut, particularly for mammals, which has led some researchers to wonder whether this particular force is very strong in mammal evolution.
In an article in press at The Proceedings of the Royal Society of London, South African zoologist Elissa Cameron of the University of Pretoria argues that the case for adjusting sex ratios is actually very good if you look at the evidence properly. She analyzed over 400 studies of sex ratios and noticed that, depending on the study, the scientists measured the condition of mothers at different points in their pregnancy. Some took their measurements at conception, some in the middle of gestation, and some at birth. While studies during gestation and at birth provided ambiguous results, almost all the studies done around conception gave strong support to the Trivers-Willard hypothesis.
These results, Cameron argues, indicate that mothers can shift the balance of the sexes, but only right around conception. It's possible, for example, that the amount of glucose in the uterus when an egg begins to divide may trigger different responses, depending on whether the egg is male or female. High glucose may signal that the mother is in good condition and can afford to raise sons, while low glucose would favor females. In a paper in press at Biology of Reproduction, Cheryl Rosenfeld and Michael Roberts of the University of Missouri review some evidence that may support Cameron. (You can download it for free here.) They raised female mice on two different diets--one high in saturated fats, and one high in carbohydrates. Mothers eating a high-fat diet (which probably led to high levels of glucose) gave birth to litters with two sons for every daughter. Mothers eating high-carb diets produced about one son for every two daughters.
And here's where test tube babies come in. Humans may not have the sort of mating imbalance that you find in deer, but a lot of evidence suggests that we descend from a long lineage of primates in which a few males mated with a lot of females. It wouldn't be surprising, therefore, to find adaptations in women to favor sons or daughters. Rosenfeld and Roberts survey some interesting studies on census data that suggest that this does indeed happen. And test tube babies offer some clues about the biochemistry that may be at work. When doctors fertilize a woman's eggs and let them begin to divide into a ball of cells, they keep the embryo in a solution of glucose. The farther along the doctors let the embryos develop, the more likely it is that their patients will wind up with sons rather than daughters. In this glucose-rich environment, male embryos may thrive, while females may risk failure to develop. Even in an age of reproductive technology, it seems, we are grappling with our evolutionary legacy.


Recently I've been trying to imagine a world without leaves. It's not easy to do at this time of year, when the trees around my house turn my windows into green walls. But a paper published on-line today at the Proceedings of the National Academy of Sciences inspires some effort. A team of English scientists offers a look back at Earth some 400 million years ago, at a time before leaves had evolved. Plants had been growing on dry land for at least 75 million years, but they were little more than mosses and liverworts growing on damp ground, along with some primitive vascular plants with stems a few inches high. True leaves--flat blades of tissue that acted like natural solar panels--were pretty much nowhere to be found.
It's strange enough to picture this boggy, bare-stemmed world. But it's stranger still to consider that plants at the time already had the genetic potential to grow leaves. Some species of green algae--the organisms from which plants evolved--were growing half-inch leaf-like sheets 450 million years ago. Tiny bud-like leaves have been found on 400 million year old plant fossils. Despite having the cellular equipment necessary to grow leaves, plants did not produce full-sized leaves in great numbers until about 350 million years ago. When they finally did become leafy, the first trees emerged and gave rise to the earliest forests. Leaves have dominated the planet ever since. They capture enough carbon dioxide to make millions of tons of biomass every year, and as roots suck up water, trillions of gallons evaporate through them.
Why did leaves take 50 million years to live up to their genetic potential? Apparently they had to wait.
Plants, the researchers point out, take in carbon dioxide through elaborate channels on their surface called stomata. Living plants can adjust the number of stomata that grow on their leaves. If you raise them in a greenhouse flooded with carbon dioxide, they will develop significantly fewer stomata. That's because the plants can gather the same amount of carbon dioxide they need to grow while allowing less water to evaporate out of their stomata.
Geological evidence shows that 400 million years ago, the atmosphere was loaded with carbon dioxide--about ten times the level before humans began to drive it up in the 1800s. (It was 280 parts per million in the early 1800s, 370 ppm today, and is predicted to rise to 450 to 600 parts per million by 2050. In the early Devonian Period, it was around 3000 ppm.) Consistent with living plants, the fossil plants from the early Devonian had very few stomata on their leafless stems.
Why didn't these early plants grow lots of leaves with few stomata? If they did, they could have grown faster and taller, and ultimately produced more offspring. But the scientists point out that a big leaf sitting in the sun risks overheating. The only things that can cool a leaf down are--once again--stomata. As water evaporates out of these channels, it cools the leaf, just as sweat cools our own skin. Unable to sweat, early Devonian leaves would have been a burden to plants, not a boon.
About 380 million years ago, however, carbon dioxide levels began to drop. Over the next 40 million years they crashed 90%, almost down to today's levels. The decline in carbon dioxide brought with it a drop in temperature: the planet cooled enough to allow glaciers to emerge at the poles. In the paper published today, the scientists describe what happened to plants during that time. Two different groups of plants--ferns and seed plants--began to sprout leaves. As years passed, the leaves became longer and wider. And at the same time, the leaves became increasingly packed with stomata. From 380 to 340 million years ago they became eight times denser. It seems that the drop in carbon dioxide and temperature turned leaves from burden to boon, and the world turned green.
It's possible that plants themselves may have ultimately been responsible for the emergence of leaves. Before leaves evolved, roots appeared on plants. Unlike moss and liverworts, which can only soak up the water on the ground, plants with roots can seek out water, along with other nutrients. Their probing eroded rocks and built up soil. The fresh rock that the plants exposed each year could react with carbon dioxide dissolved in rainwater. Some of this carbon was carried down rivers to the ocean floor and could no longer rise back up into the atmosphere. In other words, roots pulled carbon dioxide out of the atmosphere and made it possible for leaves to evolve. The evolution of leaves in turn led to the rise of big trees, which could trap even more carbon, cooling the climate even more. Clearly, we are not the first organisms to tinker with the planet's thermostat.


In 1970, the natural history illustrator Rudolph Zallinger painted a picture of human evolution called "The March of Progress" in which a parade of hominids walked along from left to right, evolving from knuckle-walking ape to tall, spear-carrying Cro-Magnon. The picture is etched in our collective consciousness, making it possible for cartoonists to draw pictures like the one here safe in the knowledge that we'll all get the joke. I had actually wanted to show Zallinger's own picture, but, like others before me, I failed to find it on the web. I was inspired to hunt down the picture by the news today of the discovery of a strange new fossil of a hominid--an extinct relative of humans--that lived 900,000 years ago.
Zallinger painted "The March of Progress" at a time when paleoanthropologists still had found only a few hominid species. When most experts looked at the evidence, it seemed reasonable to line it up in a straight ancestor-descendant line, running from chimp-like apes to Neanderthals to us. But over the past 30 years, scientists have dug up many new sorts of hominids--perhaps as many as 20 species--and many of them don't seem to fit in Zallinger's parade. In some cases a number of different species seemed to have lived side by side--some that might have been our ancestors and others that veered off into their own strange gorilla-like existence. Neanderthals were not our ancestors, but rather our cousins, having branched apart from our own lineage over 500,000 years ago. And finally, a number of paleoanthropologists have taken a fresh look at some of the hominid species identified by their predecessors, and they've concluded that two or more separate species may have been unfairly lumped together under the same name.
As a result, many paleoanthropologists have turned against the march of progress. Certainly there's a single line of genealogy that links us to an ancestor we share with chimpanzees. (Just consult your own DNA if you doubt it.) But the forces of evolution did not steadily drive hominids towards our own condition. Evolution simultaneously wandered down many different avenues, most of which ended up as dead ends. Instead of a march, many experts began thinking of human evolution in terms of a bush. Some scientists have even claimed that the march of progress was a case of people imposing their cultural biases--the Western perception of human history as a steady improvement--on the fossil record. (See, for example, Stephen Jay Gould's display of Zallingeresque cartoons on pp. 30-35 of Wonderful Life.)
But it's also possible to turn the question the other way. Scientists working in the 1970s were born just after the horrors of the Holocaust and came of age during the civil rights movements of the 1960s. Could they have been eager to find examples of diversity in the hominid fossil record, to complement today's focus on ethnic diversity? Tim White of Berkeley has raised this possibility. He points out that some of the prime evidence for the bushiness of the hominid tree comes from extraordinarily busted-up skulls, which have been reduced to hundreds or thousands of chips. In the reconstruction of these skulls, it's possible for researchers to create a shape that seems so distinct that it must belong to a separate species. White also notes that with so few hominid fossils to study, it's possible to mistake variability within a single hominid species as evidence for two or more species. And, as I wrote in March, White has found evidence in the teeth of the earliest hominids, between 5 and 6 million years old, that they may be as similar to one another as chimpanzees and bonobos are.
Even the strongest advocates of the bushy tree generally held onto some pieces of the old march of progress. Before 1.8 million years ago, the evidence suggested, hominids were tiny, small-brained African apes that walked on two legs and could use simple stone tools. But shortly thereafter hominids got tall--in some cases over six feet tall. Their brains became larger as well. Their fossils (starting with the species Homo ergaster) began to turn up in harsh, dry African grasslands, suggesting that their new long legs helped them stride efficiently across vast distances, and their bigger brains allowed them to find new sources of food. And within a couple hundred thousand years, descendants of these tall hominids (known as Homo erectus) had bolted out of Africa and spread from the Caucasus Mountains to Indonesia. Some diversity would remain--Neanderthals, for example, appear to have evolved in Europe and didn't interbreed much if at all with our own species before they became extinct. But all of these species were now big-brained and long-legged.
This model began to strain a bit two years ago when scientists reported that one of the earliest fossils of a hominid out of Africa, in the former Soviet country of Georgia, was small. Although it was a full-grown adult, its brain was about 600 cubic centimeters (ours is 1400 cc, plus or minus about 200 cc), and fit into a miniature skull. One stray foot bone found along with the skull suggests that it stood less than five feet high. The discovery led some scientists to suggest that the first exodus of African hominids began before the tall Homo erectus evolved. But this proposal is undercut by the many features that link the Georgian skull to Homo erectus.
One fossil can't support all that much speculation. That's why it was so fascinating to read today about the discovery of another tiny Homo erectus skull. This one comes from Africa, not Asia. Its skull was about the same size as the Georgian fossil. But it lived 800,000 years later. What are we to make of these tiny people? A couple of hypotheses might explain these remarkable fossils, one from the March-of-Progress school, and one from the Bush school.
A Bushist could reasonably suggest that the evidence shows that a lot of the fossils that are called Homo erectus belong to separate species. Small hominid species were able to migrate out of Africa, and within Africa they thrived alongside taller species for hundreds of thousands of years. If this is true, it casts doubt on the importance of long legs for the spread of hominids. It also raises questions about how big a brain you need to make sophisticated tools. The newly discovered Kenyan individuals were found in the same rocks where scientists have found lots of well-crafted hand axes and butchered animal carcasses. Yet their brains were significantly smaller than those of other individuals that appear to be Homo erectus.
A Marcher could offer a different hypothesis: under certain conditions isolated groups of Homo erectus evolved from tall to short. After all, the same reduction has happened in our own species, among pygmies in Africa and several other populations around the world (and it has evolved over a matter of only hundreds or thousands of years). But just because they became very small doesn't mean they became a separate species of their own.
The truth may be that scientists need dozens or hundreds of times more hominid fossils to make a confident choice between these alternatives. But today's news raises a very interesting possibility. In Africa, the fossil record of Homo erectus peters out about 800,000 years ago, replaced by species that seem more closely related to our own. But in Asia, heavy-browed, weak-chinned Homo erectus seems to have lingered a long time--perhaps as recently as 40,000 to 80,000 years ago in Indonesia. That's just around the time when our own species left Africa and arrived in southeast Asia. My hunch is that miniature hominids may have survived in isolated refuges for a long time. Are they still lurking on some deserted island or jungle enclave? I doubt it. But it wouldn't surprise me if Homo sapiens did come face to face with them before they disappeared.


These treks have something profound to say about biological change--how life can start out exquisitely adapted to one world and then eventually become adapted just as exquisitely to an utterly different one. Before creationists began marketing bacterial flagella and other examples of intelligent-design snake oil, they loved to harp on the transition from land to sea. Who could possibly believe the story those evolutionary biologists tell us, of a cow plunging into the sea and becoming a whale? And it was true, at least until the 1980s, that no one had found a fossil of a whale with legs. Then paleontologists working in Pakistan found the fossil of a 45-million year old whale named Ambulocetus that looked in life like a furry crocodile. Then they found a seal-like whale just a bit younger. Then they found tiny legs on a 50-foot long, 40-million year old whale named Basilosaurus. I wrote about these discoveries and others like them in my first book, At the Water's Edge, in 1998. I'm amazed at how the fossils have continued turning up since then. Paleontologists have found goat-like legs on a dog-sized whale that lived 50 million years ago, known as Pakicetus. They've found other whales that may have been even more terrestrial than Pakicetus, and many others that branch off somewhere between Pakicetus and Basilosaurus. In the latest review of fossil whales, the evolutionary tree of these transitional species sports thirty branches.
All these discoveries have apparently made whales unsuitable for creationist rhetoric. Yes, you can still find some pseudo-attacks on the fossils, but you have to look hard. The more visible creationists, the ones who testify at school board meetings and write op-eds for the Wall Street Journal, don't bring up whales these days. The animals apparently no longer serve the cause. It's hard to distract people from evidence when it can kick them in the face.
Whales, moreover, were not the only mammals that moved into the water. Seals, sea lions, manatees, and other lineages evolved into swimmers as well, and paleontologists are also filling in their fossil record. It's fascinating to compare their invasions, to see how they converged on some of the same strategies for living in the water, and how they wound up with unique adaptations. The June issue of The Journal of Vertebrate Paleontology has two papers that shed light on one of the weirdest of these transitions--a transition, moreover, we know only from fossils. The animals in question were sloths.
That's right--I'm talking about the sort of animals that hang from trees by their three toes. Sloths may seem an unlikely choice for a sea-going creature; if you threw one of these creatures in the water, I'd imagine it would sleepily sink away without a trace. I've never hurled a three-toed sloth myself, so I can't say for sure. But the sloths alive today are actually just a vestige of a once-grand menagerie that lived in North and South America. Many species prowled on the ground, growing as tall as ten feet. And one lineage of these giant sloths that lived on the coast of Peru moved into the ocean.
In 1995 Christian de Muizon of the National Museum of Natural History in Paris and his colleagues announced the discovery of sloth fossils in Peru dating back somewhere between three and seven million years. The rocks in which they found the bones had formed in the sea; the same rocks have yielded other ocean-going creatures including fish, sea lions, and weird dolphins with walrus-like tusks. The sloths, de Muizon concluded, were aquatic as well. Terrestrial sloths have much longer lower leg bones than upper ones, but the Peruvian sloths had reversed proportions. Manatees and otters also have reversed legs, which suggests that the sloths' limbs were adapted for powerful swimming strokes. The front of the skull was manatee-like as well: the jaws extended out well beyond the front teeth, with a rich supply of blood vessels. Like manatees, de Muizon argued, the sloths had powerful muscular snouts they used to root out sea grass.
In their initial report, the paleontologists dubbed the fossils Thalassocnus natans. But it was already clear that they might have more than one species on their hands. In the years since, they've dug into the Peruvian rocks and found hundreds of sloth fossils, which they have been carefully studying and comparing. The new papers are not the last word on Thalassocnus, but the sloths are already shaping up as a great illustration of a transition to the water.
Instead of a single species, de Muizon's team has now identified at least five. They lived, respectively, seven to eight million years ago, six million years ago, five million years ago, three to four million years ago, and, finally, 1.5 to three million years ago. The earliest species look more like ground sloths on land, while later species show more adaptations to the water. For example, the radius, one of the lower bones of the foreleg, became much broader. The change--which can also be seen in sea lions--allowed the forelegs to deliver a better swimming stroke. The teeth became less like those of ground sloths, adapted for browsing on leaves and assorted vegetation. Instead, they became adapted for full-time grazing. The coast of Peru is a bone-dry desert with nothing growing on land, and so the only thing to graze on would have been sea grass.
The sloth skull changed as well. Both the upper and lower jaws stretched out further and further. From the oldest species to the youngest, the distance from the front teeth to the tip of the jaw nearly doubled. At the same time, the entire skull became stronger, to withstand the forces involved in tearing sea grasses from the sea floor. And finally, bones in the palate evolved to support muscles that could keep the digestive tract separate from the sloth's airway--something important when you're feeding underwater.
The changes documented in these fossils suggest that the earliest Thalassocnus sloths eked out an existence on land along the Peruvian shore. In a bleak desert, the sea grass that washed up on the beach would have been like manna. De Muizon and his colleagues have found another clue in the early sloths that supports this beachcomber hypothesis: their teeth bear scrape marks that suggest they were getting a lot of sand in their mouths; later sloths show no such marks. Over five million years or so, the sloths evolved adaptations that allowed them to move further and further out into the water, to feed on sea grass beds. Natural selection would have put a strong premium on these adaptations, since they would let sloths graze in lush underwater forests rather than pick through sandy flotsam and jetsam on the beach.
De Muizon's group have yet to sort out all the differences throughout the entire skeletons of all five species. We'll have to wait for those papers. But there's enough in print now to raise some interesting questions. In whales, seals, and manatees alike, the arms and hands became flippers--stubby, webbed, fin-like limbs. Thalassocnus still had big, long-clawed fingers on its hands. De Muizon proposes that they would have enabled the sloths to hold onto rocks to stay submerged as they fed on sea grass. Manatees don't need to do this because their bones are especially dense; the sloths had not yet acquired this adaptation. It seems that Thalassocnus only traveled part of the way down the road to a marine life before they became extinct.
Why they became extinct (as opposed to manatees, for example) is also intriguing. Did something happen 1.5 million to 3 million years ago that ruined their home? Perhaps the coastal waters off Peru became too cold. If the sloths had spread further along the coast, they might not have been so vulnerable. Other mammals moved into the water at very restricted sites as well. For their first few million years or so, whales could only be found off the coast of Pakistan. If some Indian volcano had blanketed the neighborhood in ash, we might never have known what a whale looks like.
UPDATE Monday June 21, 7 pm: PZ Myers has put photos of one of the skulls on Panda's Thumb


Love demands an explanation. Less than 5% of mammal species live monogamously, with males and females staying together beyond mating, and fathers helping mothers care for babies. We humans aren't the most monogamous species of the bunch, but we're closer to that end of the spectrum than the other end, where mating is little more than ships bumping into each other in the night.
A biological explanation for love--as with any biological explanation--has two levels. On one level are the molecular circuits that produce love, and on another level are the evolutionary forces that favor the construction of those circuits in the first place. It turns out that in this case one of the best guides to both levels of explanation is the vole.
The prairie vole is a five-percenter. When a male prairie vole mates with a female, something happens to his brain. He tends to stay near her, even when other females are around, and then helps out with the kids when they arrive--grooming them, huddling around them to keep them warm, and so on. By contrast, the meadow vole, a close relative, is a ninety-five percenter. Male meadow voles typically couldn't care less. They're attracted to the scent of other females and don't offer parental care.
Scientists have searched for years now for the molecular basis of this difference. One promising candidate was a molecule called the vasopressin V1a receptor (V1aR). In certain parts of the brain, male prairie voles produce more V1aR than meadow voles. To test whether this difference had anything to do with the males' contrasting behavior, Larry Young of Emory University and his colleagues injected a virus carrying the V1aR gene into the brains of meadow voles. As they report today in Nature, the virus caused the meadow voles to begin huddling with their mates almost as loyally as prairie voles.
So what happened? It seems that for prairie voles, love is a drug. When male prairie voles mate, their brains release a chemical called vasopressin. Vasopressin does a lot of things all over the body, such as regulating blood pressure. In the brain of prairie voles, it latches onto vasopressin V1a receptors that stud the neurons in a region called the ventral pallidum. This region is part of the brain network in vertebrates that produces a sense of reward. Young and company propose that the memory a male forms of mating with the female gets associated with her fragrance. Later, every time he gets a whiff of her, he feels that same sense of reward. This brain circuit is also responsible for the high from cocaine and other drugs--as well as the addiction. Even looking at drug paraphernalia can make an addict feel the old cravings, because his memories are tinged with the high.
By contrast, male meadow voles have relatively few vasopressin receptors in their brains, so that vasopressin released during sex doesn't switch on the same circuit and they never develop the same memories. And so, to them, the smell of their mate produces no special feeling. In an accompanying commentary, Evan Balaban of McGill University in Montreal says that V1a receptors may be "the adjustable nozzle atop a social-glue dispenser in the mammalian brain."
You can almost see the spam already on its way to your mailbox: LADIES! Trying to land Mr. Right? Just inject our new Vaso-Love virus into his brain before your next date, and HE WILL BE YOURS FOREVER!!!
Don't buy it.
It's true that much of the circuitry in our brains is similar to that in voles. And we also produce vasopressin and other neurotransmitters that are associated with love and other feelings. That's because we share a common ancestor with voles that had the basic system found in the heads of both species. But since our two lineages diverged, perhaps 100 million years ago, the systems have diverged as well. The bonding in voles depends on the male smelling the female. That's not surprising, given that voles and other rodents have an exquisite sense of smell. We don't; we're more of a visual species. Differences like these mean that what works for the vole probably won't work for the human. (Even among rodents, Vaso-Love doesn't work: gene-therapy experiments--in which the prairie vole gene has been injected into mice and rats--haven't altered their behavior.)
Even if Vaso-Love won't be hitting the patent office anytime soon, Young's research can shed some light on our own love lives. That's because he and his colleagues have discovered a fascinating difference between the V1aR gene in the two species of voles. All genes have a front and back end. At the front, you typically find a short sequence that acts as an on-off switch, which can only be operated by certain proteins. In prairie voles, this front end also contains a short sequence that is repeated over and over again--a stretch known as a microsatellite. In meadow voles, by contrast, the microsatellite is very short.
Somehow the microsatellite affects how the gene is switched on in each species. The long microsatellite in the prairie vole produces more receptors--and a loyal male--than the short one in the meadow vole. While it isn't clear how microsatellites alter the amount of V1aR produced, what is clear is that it is, evolutionarily speaking, easy to go from one behavior to the other. The same gene produces different behavior simply depending on whether its receptor is common or scarce. Moreover, microsatellites are famous for their high rate of mutation. That's because the DNA-copying machinery of our cells has a particularly hard time copying these sequences with complete accuracy. (Just imagine typing a copy of that manuscript in The Shining, filled with "All Work and No Play Makes Jack A Dull Boy," over and over again. It wouldn't be surprising if you discovered once you were done that you missed a couple of those sentences, or added a couple on.) Since microsatellites control behavior, it's relatively easy for new behaviors to evolve as these microsatellites expand and contract.
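To get a feel for just how slippery these repeats are, here's a toy simulation of my own--the rates are invented round numbers, not measurements from voles--in which a microsatellite occasionally gains or loses a repeat unit each time it's copied, while an ordinary stretch of DNA almost never changes.

import random

# A toy sketch of replication slippage (my own illustration; the rates below
# are assumed round numbers, not measured values). Each time the tract is
# copied, there is a small chance the machinery slips and the tract gains or
# loses one repeat unit.

SLIP_RATE = 1e-4    # assumed chance of a slippage event per copy of the tract
POINT_RATE = 1e-8   # assumed chance of a typo at a single ordinary base

def copy_tract(n_repeats, rng):
    """Copy a repeat tract once, occasionally slipping by one unit."""
    if rng.random() < SLIP_RATE:
        return max(1, n_repeats + rng.choice([-1, 1]))
    return n_repeats

rng = random.Random(42)
n_repeats = 20
for generation in range(1_000_000):
    n_repeats = copy_tract(n_repeats, rng)

# Over a million copies we expect roughly 100 slippage events, so the repeat
# count usually wanders well away from 20; a single ordinary base copied at
# POINT_RATE would expect only about 0.01 mutations in the same time.
print("final repeat count:", n_repeats)

Change the seed and the tract drifts to a different length each time--exactly the kind of cheap, reversible variation that can then tune how much V1aR a vole's brain makes.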
This sort of flexibility can help explain the fact that males and females of closely related mammals (such as the prairie and meadow voles) often evolve different behaviors towards each other. Here is where an explanation of love shifts levels, from molecules to evolutionary forces. Monogamy and fatherly care are favored by natural selection in certain situations, and not in others. Scientists have identified a lot of factors that can produce a shift from one to another. One particularly unromantic force for monogamy is known as mate guarding. In some species, females can mate with many partners and then choose which sperm to use. If a male guards a female after mating, she'll have no choice but to use his sperm. Young's research suggests that mammals can switch from one sexual behavior to another pretty quickly thanks to minor, common mutations.
Primates--our own branch of the mammal tree--seem to fit the general pattern. Marmosets, which pair up for years, have lots of V1a receptors, and promiscuous macaques don't. It will be interesting to watch what scientists find when they look more carefully at vasopressin in the brains of humans and our closest living relatives, chimpanzees and bonobos. As a rule, monogamous species tend to have males and females of the same size. In other species, males tend to fight with one another to mate with females, which gives big males an edge over smaller ones. Male chimps, for example, are bigger than females. Since our ancestors split with those of chimps, we have become more monogamous, and males are now much closer to females in size.
One leading hypothesis for this shift has to do with our big brains. The human brain grows at a tremendous rate after birth, using up lots of energy along the way. Human children are also more helpless than other apes; a baby chimp can quickly clamp onto its mother and hang on. The care and feeding of hominid babies may have gradually required the work of two parents, which would have favored monogamy.
Not that monogamy became a hard and fast rule, of course. Even within the loyal prairie vole species Young has found variations. Some of them produce more vasopressin receptors, and some fewer. Likewise, some of them are more monogamous than others. On both counts, the same goes for humans. It's not surprising that humans should vary in their receptors, since microsatellites mutate so easily. Mutations that knock out most of the microsatellites have even been linked to autism, which is, among other things, a social disorder that makes it difficult for people to form deep attachments. Vaso-Love gene therapy may not get you the perfect man, but measuring how many V1a receptors a man has in his ventral pallidum might give you a hint of whether he's going to stick around. Will a PET scan some day become part of the modern courtship ritual?
Update 6/17 5:20 pm: Be sure to check out Bornea Chela's Jason South's comments. He brings up some important points that are also discussed in the original papers.


Do you know who George Williams is? If you don't, let me introduce you to one of the most influential evolutionary biologists ever to ponder natural selection. If you do know who he is, you may still be interested in my article in this week's Science about a symposium that was recently held in Williams's honor. Scientists studying everything from pregnancy to economic decision making explained how Williams's remarkably clear thinking about the nature of adaptation helped them in their research.
A pdf of the article is also available.


Jack Szostak, a scientist at Harvard Medical School, is trying to build a new kind of life. It will contain no DNA or proteins. Instead, it will be based on RNA, a surprisingly mysterious molecule essential to our own cells. Szostak may reach his goal in a few years. But his creatures wouldn't be entirely new. It's likely that RNA-based life was the first life to exist on Earth, some 4 billion years ago, eventually giving rise to the DNA-based life we know. It just took a clever species like our own to recreate it.
My cover story in the June issue of Discover has all the details.


On the east coast, we're bracing for the howling emergence of a massive brood of 17-year cicadas in a couple weeks. Here's a nice piece in the Washington Post about the evolution of this strange life history.


There are only a few places on the surface of Earth where you can find really old rocks--and by old, I mean 3.5 billion years old or older. The rest have gotten sucked down into the planet's interior, cooked, scrambled with other rocks, and pushed back up to the growing margins of continental plates. The few formations that have survived are mere fragments, some the size of a football field, some the size of a house. And generally they're a mess, shot through with confusing features such as intrusions of lava from more recent volcanoes. Paleontologists are drawn to and repulsed by these rocks, because they may hold the oldest clues about life on Earth, or lifeless mirages that only look like clues.
In the past couple years, scientists have been putting the oldest evidence of life on Earth under tough scrutiny. The oldest fossils, 3.45 billion year old bacteria from Western Australia, have been attacked as mere crud. Life not only leaves fossils behind but also can create peculiar ratios of isotopes in rocks. The oldest isotopic evidence for life came from 3.8 billion year old rocks in Greenland. But that also came under attack by critics who questioned whether the rocks were actually sedimentary (and thus might contain biological material) or belched up from a very nonbiological volcano.
This does not mean that the fossil record has collapsed down to yesterday's roadkill. In other parts of Greenland, scientists have found slightly younger rocks (if you can call 3.7 billion year old rocks young) that are almost certainly sedimentary. And they contain a clear isotopic signature of life. The Danish geologist Minik Rosing, who has studied the rocks, argues that this particular fingerprint is so detailed he can tell what kind of life produced it: photosynthesizers. That's tantalizing for several reasons. One is that photosynthesizers give off oxygen, and yet there's no record of any significant levels of oxygen in the atmosphere for well over a billion years after Rosing's rocks formed. Another is that the early Earth may not have been a very friendly place for photosynthesizers--the oceans were hot and loaded with nasty metals.
The controversy over ancient fossils has forced some paleontologists to look for new kinds of evidence of life. For example, some bacteria can eat through glass, leaving behind microscopic pits. Volcanoes form glassy rocks such as obsidian, and in recently formed volcanic rocks scientists have found tunnels that seem to have been created by hungry microbes. (They're even slathered with DNA and other biological material.) Today in Science, researchers reported that 3.5 billion year old rocks from Zimbabwe bear the same sorts of tunnels. They're also slathered in organic carbon with an isotopic fingerprint that looks like life. The evidence has impressed some researchers, but others are still skeptical. The possibility that these formations formed without the help of microbes hasn't been eliminated yet.
I find all this work fascinating, but in one fundamental way it's a bit pedestrian. These scientists are looking for the earliest signs of organisms that resemble organisms alive today, looking for the traits that are common to both. But a photosynthetic or glass-chewing bacterium is already pretty nicely evolved. Someday, a clever paleontologist is going to figure out how to identify something that no longer exists on Earth, such as an RNA-based organism. That discovery will push the fossil record back to a different chapter altogether in the book of life.


John Maynard Smith has died.
While many people know who Stephen Jay Gould was or Richard Dawkins is, I'd bet few would be able to identify Maynard Smith. That's a shame, because he played a key role in building the foundations of modern evolutionary biology. (Underlining this point, I only learned about his death from Science's online news service. As far as I can tell, no one else has run an obituary.)
Maynard Smith came to evolution from a previous career as an engineer. In World War II he measured the stress on airplane wings. When he moved to evolution, he brought with him a gift for seeing the mathematical underpinnings of things, whether they are bridges or botflies. (An awful lot of creationists are engineers, for some reason; they would do well to consider Maynard Smith's example.)
Maynard Smith saw evolution as a very complex mathematical equation that played out over time. Genes spread or faded depending on their fitness, which depended in turn on changes in the environment. Maynard Smith came up with brilliant new formulas to describe that change, in some cases borrowing methods from other disciplines. For example, economists have delved deep into game theory over the years, working out the ways in which players with different strategies can wind up winning or losing. Maynard Smith had the brilliant idea of applying game theory to evolution. The players in his game might be a population of elephant seals, each with its own genetically determined strategy for finding a mate. Different strategies would have different levels of success. One strategy might be to confront the biggest male on the beach, drive him away, and take his harem. That might work if a male was also big, but if he was small it was a strategy doomed to failure. So perhaps instead he might skulk at the edges of the colony and mate secretly with females from time to time, trying to avoid getting killed by the harem leader. It's not a solution guaranteed to produce a lot of kids. But Maynard Smith showed that it's also not necessarily a one-way ticket to extinction. Instead, it's possible that the two strategies, one dominant and one minor, can come to a stable coexistence.
Scientists have found lots of these so-called evolutionarily stable strategies. Some male salmon who take the sneaky route actually commit their whole bodies to the strategy. Instead of bulking up their bodies and developing big sexual displays such as long jaws, they become small and invest their energies into growing massive testes that give them a large enough supply of sperm to make the most of their few trysts. Some evolutionarily stable strategies cycle from prominence to rareness and back over time, in a sort of rock-scissors-paper game. Bacteria may reach evolutionarily stable strategies that leave some of them killers and others harmless. Evolutionarily stable strategies may have a lot to tell us about human behavior as well. Genes have a role in personality, intelligence, and behavior, and there's obviously a lot of variation in all these factors. It's possible that these genes have, over millions of years, reached an evolutionarily stable state with one another. And these games may also be a model for how something as peculiar as cooperation evolved in our own species. (You can read a good recent review of evolutionary games written by Martin Nowak here.)
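For readers who want to see the machinery, here's a minimal sketch in the spirit of Maynard Smith's hawk-dove game--the payoffs are numbers I've made up, not data from elephant seals--showing how replicator dynamics settle on a stable mix of "fight" and "sneak" strategies instead of letting one wipe out the other.

# A toy evolutionary game (my own illustration). Fighting wins the contested
# resource when fighters are rare, but escalated fights between fighters are
# costly, so the strategy loses its edge as it becomes common. Replicator
# dynamics then carry any starting mix toward the same stable blend--an
# evolutionarily stable strategy.

V, C = 2.0, 4.0   # assumed value of the resource and cost of an escalated fight

payoff = {   # payoff[(my strategy, opponent's strategy)]
    ("fight", "fight"): (V - C) / 2,
    ("fight", "sneak"): V,
    ("sneak", "fight"): 0.0,
    ("sneak", "sneak"): V / 2,
}

def next_generation(p_fight):
    """One generation of replicator dynamics; p_fight is the fraction of fighters."""
    w_fight = p_fight * payoff[("fight", "fight")] + (1 - p_fight) * payoff[("fight", "sneak")]
    w_sneak = p_fight * payoff[("sneak", "fight")] + (1 - p_fight) * payoff[("sneak", "sneak")]
    base = 3.0   # baseline fitness, keeps all fitnesses positive
    mean = p_fight * (base + w_fight) + (1 - p_fight) * (base + w_sneak)
    return p_fight * (base + w_fight) / mean

p = 0.9   # start with fighters very common
for generation in range(200):
    p = next_generation(p)
print(round(p, 3))   # settles near V/C = 0.5 no matter where you start

The punch line is in that last comment: the stable state isn't all-fighters or all-sneakers but a particular mixture, which is what Maynard Smith meant by an evolutionarily stable strategy.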
Maynard Smith realized that some of the equations that he developed for these sorts of social interactions might also carry over to more fundamental questions about the evolution of life. When life was just getting started on Earth, for example, genes might have settled into certain strategies for getting replicated--arranged on chromosomes, for example--in the same way animals settle into strategies for surviving. Maynard Smith came to see the history of life as a series of transitions to new ways of processing information--from the origin of life to the first sexually reproducing cells to the appearance of multicellular life to the emergence of animal societies, and finally, human language and culture. Each new transition created a new playing field for a new set of games.
All this may sound a bit daunting, but Maynard Smith was gifted with a disarmingly simple way of explaining his ideas. Check out his final book, The Origins of Life, to see what I mean.
Update 4/22: The obits are emerging now.
Update 4/23: The definitive JMS site. Via Panda's Thumb.


A great blog is born: The Panda's Thumb is a multi-authored blog that blasts a firehose of reason at distortions of evolution.


Our ancestors branched off from those of chimpanzees some six million years ago. Since then, our lineage became human--and distinctly unlike other apes. Figuring out how that difference evolved is one of the grand challenges of biology. Until now, scientists have gotten most of their clues by looking at the fossils of extinct hominids. These fragments of bones only preserve a little information, but it's not a random smattering of data. It's more like a scaffolding on which other clues can be fixed, so that a picture of how we became human can gradually emerge. That's because the changes documented in the fossil record were ultimately created through the evolution of our genome.
The power of combining fossils and genes was demonstrated today in Nature. Scientists at the University of Pennsylvania reported their studies on a human gene that has mutated into uselessness. Such broken genes are nothing new; scientists have identified several hundred broken genes in the human genome dedicated to smell alone. What's striking about this particular gene, known as MYH16, is how important it is in its functional form to our primate cousins. MYH16 is a muscle-building gene that only becomes active in the developing muscles of the jaw. Apes and monkeys all have massive jaw muscles, which pass from the jaw up under their cheekbones, fanning out across the top of their skull and anchoring to a keel-shaped ridge. We have no such ridge, and we have pretty puny chewing muscles compared to apes. And much of the difference between us and other primates in this respect comes down to a fatal mutation that hit a single gene: MYH16.
To get a better picture of this crucial event in our evolution, the Penn team compared our version of the gene to those found in primates and other mammals. By tallying up the changes each version of the gene underwent, they were able to estimate when MYH16 shut down in our own lineage. Their estimate: 2.4 million years ago.
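The logic behind an estimate like that is worth sketching. While a gene still works, selection throws out most mutations that change its protein, so protein-altering substitutions accumulate at only a fraction of the neutral rate; once the gene breaks, they pile up as fast as silent substitutions do. Here's a back-of-the-envelope version with made-up inputs--my illustration of the general pseudogene-dating method, not the Penn team's actual analysis.

# A minimal sketch of dating a broken gene with a molecular clock.
# Assumption: while functional, the gene's protein-changing (nonsynonymous)
# substitution rate is a fraction `omega` of the neutral rate; after it
# breaks, the rate is fully neutral, like silent (synonymous) changes.

def pseudogene_age(divergence_time_myr, omega_functional, dn_ds_observed):
    """Estimate how long ago a gene was inactivated, in millions of years.

    divergence_time_myr: time since the two lineages split (Myr)
    omega_functional:    dN/dS typical of the gene while under selection
    dn_ds_observed:      dN/dS measured along the lineage with the broken gene
    """
    # The observed ratio is a time-weighted average of the constrained and
    # neutral phases: R = (omega * (T_div - T) + 1 * T) / T_div. Solve for T.
    return divergence_time_myr * (dn_ds_observed - omega_functional) / (1.0 - omega_functional)

# Illustrative, invented numbers: a 6-million-year human-chimp split, strong
# constraint (omega = 0.1) while the gene worked, and an observed dN/dS of 0.46.
print(round(pseudogene_age(6.0, 0.1, 0.46), 1))   # -> 2.4 million years ago

The real analysis is more careful about which substitutions count and how fast the clock ticks, but the principle is the same: the more protein-changing mutations a dead gene has soaked up, the longer it has been dead.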
This was no ordinary time in our history. Hominids before that age still had big anchoring crests and large jaws. Younger species had smaller jaws and smooth tops on their skulls. And something else happened at the same time: their brains began getting significantly bigger. It's possible that when MYH16 still worked in our ancestors, their chewing muscles acted like a clamp on the evolution of brain size. The architecture of the entire top of the head was so dominated by the muscles and their anchoring that expanding the brain was impossible.
As fascinating as this result is, it's important to keep in mind that we're just at the beginning of this fusion of genes and fossils. Today's report is actually the first to link some important fossilized trait with the evolution of a human gene. A few other studies have revealed some other genes that were also important in human evolution, although they leave a subtler mark on the hominid line. The language gene FOXP2 appears to have undergone intense evolution perhaps 100,000 years ago. There are probably several thousand genes that have evolved significantly since we parted ways with our fellow apes, and so it's easy to blow these early discoveries out of proportion. While MYH16 may well have played a crucial role in our evolution, it didn't do it alone.
For one thing, a hominid with a weak jaw can't grind up tough foods the way its ancestors did; it needs new foods. The oldest tools, interestingly enough, date back to about the same time as the MYH16 mutation. Scientists suspect that hominids were using these simple stone axes to hack meat off of carcasses and dig up tubers. This new diet might have meant that a mutation to MYH16 wouldn't have mattered much. The new diet may have been just as important as the missing jaw muscles in letting the hominid brain expand. For one thing, a big brain requires lots of energy. One way to make more energy available is to shrink the size of other organs, and it turns out that we humans have one particularly small organ: our intestines. Other primates use their long bowels to digest tough foods poor in nutrients; we can survive on our abbreviated bowels because we eat better grub. So here's a prediction: scientists will eventually discover genes that control the development of intestines in humans. When they compare them to ape genes, they'll discover that they underwent an evolutionary change around the same time that MYH16 shut down. Our brains did not evolve in a vacuum; they coevolved with the rest of our bodies in a complicated dance of tradeoffs and feedbacks.
(I don't want to turn every post about evolution into an attack on creationism, but here's a parting question. MYH16 is clearly essential to the well-being of other primates. We have a copy of MYH16, but it doesn't work. Where is the intelligence of this design? If we don't need the gene, why did the designer insert it into our genome?)


Last week I wrote about an important new study showing that three very different groups of species--plants, butterflies, and birds--have all been declining at the same alarming rate for over 40 years in Great Britain. The authors concluded that if the pattern is global, it may mean that we are entering one of the biggest bouts of mass extinctions in the past 500 million years.
The media handled the story pretty well, although some reports got ahead of the science. Here's a story that may give you the impression that the study documented the extinction of entire species, for example. The researchers only recorded the extinction of a few species in Britain, which can still be found elsewhere. But many populations of plants, birds, and butterflies are in rapid decline. Species are made up of populations, which means that if populations keep declining for a few more decades, the species can't survive for very long.
I've also been searching for criticisms, but to my surprise I can't find a single mention of the study in outlets that have attacked these sorts of studies in the past. Could it be that these folks are hoping that this study will just disappear if they don't call attention to it? Or are they at a loss for a rhetorical trick to misrepresent the findings? Or do they accept that this may be a sign of a sixth pulse of mass extinctions, but consider it simply not worthy of commentary?
If anyone has found such a response, please let me know.


When I ask scientists what's the biggest misunderstanding people have about their work, they often talk about how they know what they know. People tend to think that a scientist's job is to gather every single datum about something in nature--a mountain, a species of jellyfish, a neutron star--and then, simply by looking at all that information, see the absolute truth about it in an instant. If science departments were filled with angels, that might be the case. But they're staffed by humans with finite brains, with tight research budgets, and with only so many years left before retirement or death. In order to tackle vast questions about the fate of the universe, the history of this planet, and the tangled bank of life on Earth, they have to live with uncertainty. To understand something, they can only gather a smattering of information about it, look for patterns within the data and use well-supported theories to come up with hypotheses about them. They can then gather more information in order to test the hypotheses again, and, if need be, alter their explanations to accord with the evidence. Their conclusions can only be tentative, but they can also be powerful. We were not around when the Earth formed, and we can only look at indirect clues in certain rocks and meteorites. And yet scientists have a good idea of when the Earth formed, how quickly the iron core settled to the center of the planet, when oceans began to appear, and so on.
Many bogus attacks on scientific research play on this common misunderstanding of science-as-revelation. If scientists don't know everything, they can't conclude anything. Paleoanthropologists have found less than two dozen species of hominids from the past six million years--therefore they can't draw any conclusions about how humans appeared on Earth. Climatologists don't have a perfect temperature record for the planet--therefore they can't say anything about how man-made pollution is warming the atmosphere. In cases like climate change, these bogus attacks spread from science to policy based on science. To hear some people talk, we should only do something about climate change once we have tracked every molecule in the atmosphere since the dawn of civilization and can predict its course for the next thousand years.
Extinction is a particularly good example of how confusion about the nature of science can cause serious trouble. In January I wrote a couple posts about some research that indicates global warming could cause a vast wave of extinctions in the next century, and how some critics deceptively employed the Imperfect Knowledge gambit. Today Science is publishing an important paper that may well attract the same specious criticisms, the same calls to ignore anything less than the wisdom of angels. (Here's the press release.)
Here's the lowdown:
British researchers have been working for the last decade to carry out the most ambitious analysis of changing biodiversity ever attempted. They took advantage of the fact that in the 1950s and 1960s, professional biologists and amateur volunteers began doing painstaking surveys of several groups of species around Britain. Mapmakers had carved up England into 10-kilometer-square parcels, and every year the surveyors would take a census of the species in each one. In the mid-1990s, researchers realized that this amazing database of biodiversity, unparalleled in the world, could let them track broad patterns of change. They picked out three very different groups of species to compare--butterflies, plants, and birds. They then chose the results from a few years in the early parts of these surveys to compare with results from the 1980s and 1990s. The scale of the comparison was staggering: every single species of butterfly, plant and bird known to live in England was tallied; the researchers analyzed 15 million records put together by 20,000 volunteers.
The results were bleak. Over a quarter of all native plant species had disappeared from at least one of the survey squares in their range. Half the birds did. Butterflies fared worst, with 71% surrendering at least one square. But the average retreat was actually much bigger. The typical butterfly species vanished from 13% of its range, while the fastest-declining 10% of British butterfly species can't be found in over half of their former range.
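To see where numbers like "71% surrendered at least one square" come from, here's a toy version of the bookkeeping. The three-species dataset is invented for illustration; the real study did the same kind of counting across thousands of species and 15 million records.

# Hypothetical survey data: for each species, the set of 10-km grid squares
# it occupied in the early surveys and in the recent ones.
early = {
    "species A": {"SQ01", "SQ02", "SQ03", "SQ04"},
    "species B": {"SQ02", "SQ05"},
    "species C": {"SQ06", "SQ07", "SQ08"},
}
recent = {
    "species A": {"SQ01", "SQ02"},     # lost half of its squares
    "species B": {"SQ02", "SQ05"},     # no change
    "species C": {"SQ06", "SQ07"},     # lost one square
}

losses = {}
for species, old_range in early.items():
    lost_squares = old_range - recent[species]
    losses[species] = len(lost_squares) / len(old_range)   # fraction of former range lost

retreated = sum(1 for fraction in losses.values() if fraction > 0)
print(f"{retreated / len(losses):.0%} of species lost at least one square")
print(f"average fraction of range lost: {sum(losses.values()) / len(losses):.0%}")

Both kinds of figures show up in the paper: the share of species that retreated at all, and the typical size of the retreat.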
There has been a lot of debate about how badly different groups of species are faring these days. Birds have been carefully studied around the world, partly because they're big enough to spot with binoculars, and they've been suffering significant losses. Plants, which can't move out of view, have also been pretty well studied (although not as well as birds). But could the same be said for insects, which number in the millions of species and are far harder to study? The answer, at least in this study, is yes.
What's particularly striking about this survey is that there are many good reasons to think that biodiversity has had it much easier in Britain over the past 50 years than in many other parts of the world. Most of its forests had been cleared away many centuries earlier, so that the animals and plants living in 1950 had been living in fragmented habitats for a long time. Britain has suffered relatively few invasions of aggressive alien species that could have driven native ones extinct. Conservation is important to the British. And global warming, for the last few decades at least, has actually made Britain a better place for butterflies and plants to thrive. Nevertheless, the biodiversity of Great Britain is shrinking, and shrinking fast. That bodes ill for other parts of the world, particularly ones that are home to many species restricted to tiny ranges. (Many species in Britain can also be found elsewhere in Europe, which is why the population declines in Britain have not led to all-out extinctions yet.)
The scientists conclude: "If insects elsewhere are similarly sensitive, we tentatively agree with the suggestion that the known global extinction rates of vertebrate and plant species may have an unrecorded parallel among insects, strengthening the hypothesis, derived from plant, vertebrate, and certain mollusk declines, that the biological world is approaching the sixth major extinction event in its history." (Italics mine.)
In a day or two I will update this entry. I am interested to see how this study gets digested by the media-punditry machine. I have a few suspicions of what we'll see:
Some environmentalist groups will brush aside the careful wording in the conclusion and simply say, we're doomed. If that does happen, it will be too bad, because it will undermine the care put into this research.
Some "skeptics" will say that you can't compare the surveys because different people made them, looking at different groups of species. That's actually untrue: the researchers did statistical tests to make sure that the comparisons could hold up even if there were some biases among the surveys.
I also predict that the skeptics will claim that British plants and animals have just retreated to the safety of refuges, where they can now live happily ever after. But the evidence points the other way. For example, the population declines took place "with remarkable evenness across the nation," the authors write.
The skeptics will also say you can't generalize from a small northern island nation to the world at large. But the results are actually in accord with other studies on extinctions worldwide.
I certainly don't mean to imply by all of this that this study is perfect. Perfection is, by definition, impossible to reach in science. But if critics say that we can't draw any conclusions--or make any policy decisions--until we are completely certain of how biodiversity is faring these days, and if they are sincere in their claim to be interested in protecting biodiversity, then I have a challenge. Go ahead and set up a project that would give us complete certainty. It took 20,000 volunteers to carry out these surveys in Britain alone. To survey the world, a few million more volunteers should do the trick.
Stay tuned.


In one of the weirdest attempts to pretend that creationism is a real science, a student at Harvard Law School wrote a favorable review in the Harvard Law Review of a book about Intelligent Design. You'd think that this would be so irrelevant that it would vanish off the cultural radar in a flash. But it has ballooned into something of a blogospheric hurricane, mainly because the National Review Online wants to pretend that criticism of the review is an Inquisition-style persecution. It's a cute way to distract attention from the basic issue of whether creationism in any of its manifestations has scientific merit, which it doesn't.
I only have one thing to add to the discussion (which has been ably handled by the likes of Brian Leiter, Dispatches from the Culture Wars, and Chris Mooney). It's an observation about the way creationists ornament themselves with references to peer-reviewed scientific papers that they claim support Intelligent Design, a great flood, whatever. In the course of interviewing scientists for articles or books, I will sometimes mention to them that they have been invoked in this way. Now, you'd think that if their work really did support creationism, they'd be delighted. Of course, the opposite is true: usually I hear a groan of someone being hideously misrepresented. Here's an example that was fortunately preserved in print. (Follow-up here) It frustrates scientists to no end that research that can take years to bring to fruition can be misused so swiftly.
Next time you encounter creationists trying to create an aura of respectability with a scientific citation and a few words quoted out of context, ask them what the authors of the paper think of creationism.


For over two centuries, opponents of evolution have searched for examples of natural complexity that could have only been created by design. Reverend William Paley was fond of the eye, with its lens, retina, and other components all beautifully fine-tuned to work with one another. These days, the Intelligent Design camp tries to invoke blood clotting cascades or the flagella that bacteria use to move around in the same way. (See here for some refutations of these arguments.) Ironically, one of the most successful, intricate examples of complexity in nature is something creationists never mention: a tumor.
Cancer cells grow at astonishing speeds, defying the many safeguards that are supposed to keep cells obedient to the needs of the body. And in order to grow so fast, they have to get lots of fuel, which they do by diverting blood vessels towards themselves and nurturing new vessels to sprout from old ones. They fight off a hostile immune system with all manner of camouflage and manipulation, and many cancer cells have strategies for fending off toxic chemotherapy drugs. When tumors mature, they can send off colonizers to invade new tissues. These pioneers can release enzymes that dissolve collagen blocking their path; when they reach a new organ, they can secrete other proteins that let them anchor themselves to neighboring cells. While oncologists are a long way from fully understanding how cancer cells manage all this, it's now clear that the answer can be found in their genes. Their genes differ from those of normal cells in many big and little ways, working together to produce a unique network of proteins exquisitely suited for the tumor's success.
All in all, it sounds like a splendid example of complexity produced by design. The chances that random natural processes could have altered all the genes required for a cell to function as a cancer cell must be tiny--too tiny, some might argue, to be believed. And surely the only way that a cell could become cancerous naturally would be for all the genes to change at once. After all, what good is it for a cell to be able to increase blood flow towards itself if it can't grow quickly? Getting so many genes to change at once makes an impossibility an absurdity. By this sort of reasoning, you'd conclude that cancer is the work of a supernatural designer.
And yet, despite all its appeal, creationists don't like to bring up cancer. Perhaps that's because they prefer to use the warm and fuzzy examples of complexity in nature instead of the pain-causing, life-ending ones. I'm no theologian, so I'll leave the religious implications of all this to others. But as a science writer, I do want to talk about what this means about creationism and evolutionary biology as sciences. Creationists say that they want to be taken seriously as scientists. But one mark of an important scientific idea is the new research it generates. Cancer is a case in point. Creationism in any of its flavors has never led to an important hypothesis about cancer. Evolutionary biology, on the other hand, is generating a wealth of new ideas about potential ways to fight cancer.
Martin Nowak of Harvard University and his coauthors offer a nice roundup of these ideas in a paper appearing in this month's Nature Reviews Cancer. (Nowak has posted a pdf of the cancer paper here, on his publications page. His other papers are worth checking out, too. He's done brilliant work on the evolution of everything from HIV to human language.)
Nowak and his co-authors argue that you can't understand cancer unless you recognize it as an evolutionary process. As cells divide, they mutate on rare occasion (roughly one out of every 10 billion cell divisions). Most of these mutations will kill a cell, so that the genomes in most of the new cells in your body are identical to the old ones. But a few of these mutations can allow a cell to divide more quickly than its neighbors. They begin to outcompete the ordinary cells for resources, becoming even more common. These cancer cells continue to mutate, so that there's lots of genetic variation in a growing tumor. In a few cases, these mutations make cells better adapted to a cancerous existence, and the offspring of these cells come to dominate the tumor. As the tumor matures, new kinds of mutations may be favored--ones that let it metastasize, for example, or withstand the abuse of chemotherapy.
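As a rough illustration of that dynamic--not Nowak's actual model, just a toy version of the standard Moran process he and others build on--imagine a single compartment of cells in which one cell has picked up a mutation giving it a modest growth advantage. At each step one cell divides (chosen in proportion to fitness) and one random cell dies, so the compartment stays the same size. Most mutant lineages fizzle out by chance; occasionally one sweeps the compartment.

import random

# A toy Moran-process sketch (my own illustration): a fixed-size compartment
# of cells in which a single mutant with a modest fitness advantage either
# drifts to extinction or takes over.

def mutant_takes_over(n_cells=500, advantage=1.1, seed=0):
    rng = random.Random(seed)
    mutants = 1
    while 0 < mutants < n_cells:
        # birth: one cell divides, chosen in proportion to its fitness
        p_mutant_divides = mutants * advantage / (mutants * advantage + (n_cells - mutants))
        born_mutant = rng.random() < p_mutant_divides
        # death: one cell dies, chosen uniformly at random, keeping the size fixed
        dies_mutant = rng.random() < mutants / n_cells
        mutants += int(born_mutant) - int(dies_mutant)
    return mutants == n_cells

trials = 200
wins = sum(mutant_takes_over(seed=s) for s in range(trials))
# Even with a 10% growth advantage, most new mutants are lost to chance;
# only a minority of compartments end up dominated by the mutant clone.
print(f"mutant clone took over in {wins} of {trials} compartments")

That lottery-like quality is one reason cancers take decades to develop even though mutations are happening all the time.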
The same basic dynamics of evolution by natural selection that can alter a species are at work in the cells of a tumor. Obviously, however, the two cases of evolution are not identical. The mutations that alter a species are the ones carried down in sperm and eggs from one generation to the next; the mutations to cells in the rest of the body (the soma) are irrelevant. Cancer, on the other hand, is all about somatic evolution. And while ordinary evolution can last for billions of years, each case of somatic evolution ends with the death of the body in which it takes place.
That said, though, Nowak and his colleagues show how evolutionary dynamics can tell us a lot about how cancers get started and spread. One crucial fact about cancer is that the evolutionary arena where it gets its start is a microscopic one. Our organs are generally composed of millions of little compartments, each containing a few thousand cells. Colon cancer, for example, begins in so-called "crypts" that line the intestines. Normally the crypt is in a delicate balance. A single stem cell at the base of the crypt divides every day, producing a fresh colon cell. The older cells move up towards the surface of the intestines to make room, dividing themselves as well. The oldest cells near the top of the crypt die off in an intricate self-destruct sequence of biochemistry.
The evolution of cancer cells has a different trajectory depending on the size of their compartment. In a big compartment with lots of cells mixing together, natural selection will favor cancerous mutants, which will quickly spread--and possibly spread to neighboring compartments. In a small compartment like a crypt in the colon, supplied by just a few stem cells, cancer may grow more slowly because the cells are more likely to self-destruct before they can cause much trouble. (In fact, the architecture of our tissues overall may be adapted to keeping cancer in check this way.)
Another factor in the spread of cancer is the genes themselves. For example, one common sort of mutation found in cancer cells causes the cells to do a bad job of repairing their DNA. At first, this seems like a very dangerous mutation for a cancer cell to have, since it means that the cell risks mutations to the many genes that it needs to stay alive. Nowak and his colleagues find, however, that bad repairs have a benefit that makes them worth the cost. To understand why, bear in mind that each of our cells has two copies of each gene, inherited from mother and father. In order for cancer to progress, both copies of certain genes have to get knocked out in a cell. This is a remote possibility for most cells, but, according to Nowak's calculations, not for ones that have become genetically unstable. (Genetic instability, Nowak's work also shows, is responsible for cancer's extraordinary capacity to evolve protection against drugs.)
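A back-of-the-envelope calculation shows the flavor of the argument. The rates and division counts below are round numbers I've invented, not figures from the paper; the point is only the enormous gap between the two outcomes.

# Rough two-hit arithmetic (my own illustration, ignoring selection and
# clonal expansion): how likely is a cell lineage to knock out both copies
# of a tumor-suppressor gene over a lifetime of divisions?

NORMAL_RATE = 1e-7    # assumed chance of knocking out one gene copy per division
MUTATOR_RATE = 1e-4   # assumed rate once DNA repair itself is broken
DIVISIONS = 5000      # assumed divisions in a cell lineage over a lifetime

def chance_both_copies_lost(rate, divisions):
    p_one_copy = 1 - (1 - rate) ** divisions   # chance one copy is hit somewhere along the way
    return p_one_copy ** 2                     # both copies, treated as independent hits

print(chance_both_copies_lost(NORMAL_RATE, DIVISIONS))    # roughly 2.5e-07
print(chance_both_copies_lost(MUTATOR_RATE, DIVISIONS))   # roughly 0.15

With ordinary repair the double knockout is a one-in-millions long shot; with sloppy repair it becomes something you'd almost expect--which is why, in Nowak's models, genetic instability can pay for itself despite the collateral damage.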
Nowak's work is elegant and fascinating, but as he admits, it's just the beginning of an understanding of how cancer evolves. (He's not the only one pursuing it--this article in the March 15 issue of The Scientist describes how other scientists are pursuing similar lines of research.) It's worth pursuing further, because it may make it possible to predict precisely how particular cases of cancer will progress, and help reveal which line of attack will work best.
It will be interesting to see how the members of certain state boards of education react to this kind of medicine. Will they hold off on chemotherapy until they find out what insights their creationist friends have gotten about cancer? If they do, they'll be waiting a dangerously long time.
UPDATE 3/24/04: Welcome to readers visiting from Phenomena News. The editors at PN ask, "Diverting blood vessels? How does he know that it is not a way by the body to try to fight the tumor?" Oncologists actually have a lot of evidence indicating that it is the tumor, not healthy cells, that sends signals out to blood vessels to stimulate growth. The cancer cells need the extra blood to grow rapidly. And some of the most promising research on curing cancer involves blocking blood vessel growth around tumors. Kill the blood supply, and you kill the tumor.


I have been grievously mum in response to the many comments that readers have been sending to the Loom. My silence is not hostile--it is the result of way too much traveling, too much magazine writing, and the standard sleep deprivation that comes with life with two young daughters. In fact, reading comments is one of my favorite things about this blog.
As a case in point, today Nick at talkdesign.org explored the link between the subjects of two recent posts: the ongoing adaptation of bacteria to manmade pollutants and the ongoing pollution of biology education with creationist distractions:
[snip]
Well, I don't have a poem, but I would like to mention that the evolution of catabolic pathways is the perfect example of the evolution of "irreducible complexity" in modern times, which as you will recall is exactly the thing that the Intelligent Design folks say can't evolve.
(1) These catabolic pathways typically break down "xenobiotic" compounds that humans have only recently introduced into the environment
(2) These compounds are typically environmentally persistent toxins. Sometimes they are pesticides or herbicides, or by-products of other nasty compounds like explosives. Much of the research on the evolution of the degradation pathways is done by military-funded labs, because the military has a big problem with polluted ground on military bases where chemical weapons, explosives, etc. were stored.
(3) The degradation pathways typically have multiple required proteins in the breakdown process. Often the compounds contain e.g. aromatic rings protected by tightly-binding atoms such as chlorine, and stripping off the chlorines and then breaking open the rings are all required before a non-toxic "eat-able" carbon chain is produced.
Some example papers:
* Copley, S. D. (2000). Evolution of a metabolic pathway for degradation of a toxic xenobiotic: the patchwork approach. Trends in Biochemical Sciences 25(6): 261-265. Source: http://www.sciencedirect.com/science/journal/09680004
* Johnson, G. R., Jain, R. K. and Spain, J. C. (2002). Origins of the 2,4-Dinitrotoluene Pathway. Journal of Bacteriology 184(15): 4219-4232. Source: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12107140&dopt=Abstract
* (Atrazine pathway evolution) Sadowsky, M. J., Tong, Z., de Souza, M. and Wackett, L. P. (1998). AtzC is a new member of the amidohydrolase protein superfamily and is homologous to other atrazine-metabolizing enzymes. Journal of Bacteriology 180(1): 152-158. Source: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=9422605&dopt=Abstract
* (Atrazine pathway evolution) Seffernick, J. L. and Wackett, L. P. (2001). Rapid evolution of bacterial catabolic enzymes: a case study with atrazine chlorohydrolase. Biochemistry 40(43): 12747-12753. Source: http://pubs3.acs.org/acs/journals/doilookup?in_doi=10.1021/bi011293r
The McAdams et al. Nature Reviews Genetics paper you linked to mentions studies like these, and also last year's paper in Nature by Lenski et al. that did a computer simulation of the evolution of an "irreducibly complex" system. McAdams et al. don't miss the chance to take a swipe at Intelligent Design:
====
These experiments are particularly valuable as they show how straightforward evolutionary mechanisms of mutation and selection can produce steady increases in organism complexity without invoking ‘intelligent design’.
====
So far, the ID folks seem to be saying that the computer simulation is irrelevant for [insert obscure hair-splitting here], and they seem to hope that if they completely ignore the studies on the evolution of catabolic pathways for xenobiotic compounds, the studies will just go away. I would not bet on their strategy over the long term. But then again, the ID folks appeal entirely to the public and avoid discussions with the relevant scientific experts like the plague, and their strategy seems to be having some success lately. I'm glad that there are at least a few people like Carl Zimmer around to help get the word out.
Nick
[snip]
Thanks for the connection and the sources, Nick.


Ohio's Board of Education has taken a big step towards forcing its students to waste their time on creationist pseudo-objections to evolution. PZ Myers has a good round-up of this sad situation.


Probing the origins of humanity is actually a lot like being a dentist. The bones of our hominid ancestors tend to fall apart, leaving behind a smattering of shards. But teeth, made of enamel, can do a better job of withstanding the ravages of time. And teeth--particularly those of mammals--are not just tough but interesting. Mammals--us included--have several kinds of teeth, each of which is covered with distinctive bumps, cusps, and roots. All those details vary from one species to another. So even if you find a fragment of a tooth, you may be able to figure out what species it belongs to. And if you have enough teeth to compare to each other, you can probe some pretty profound questions about where we came from.
A fascinating paper about hominid teeth was published today, but the few news reports I've seen so far have missed the real story. For many decades, scientists tended to see human evolution as a steady, linear march of progress from quadrupedal, small-brained ape to bipedal, big-brained Homo sapiens. When they found a previously unknown species of hominid, they tried to find where along the line from primitive to advanced it fit in. Neanderthals seemed to fall in just before modern humans. "Lucy"--better known as Australopithecus afarensis--seemed to fall in just after our common ancestor with chimpanzees. But then things started to get confusing. In the fossil record, Neanderthals don't grade smoothly into Homo sapiens. In fact, anatomically modern humans existed at least 160,000 years ago, and Neanderthals became extinct only 30,000 years ago. It's more likely that they were two distinct species. Meanwhile, paleoanthropologists found species that seemed distinct from Lucy living some 3 million years ago. Even at the very earliest stages of hominid evolution, five to six million years ago, at least three highly distinct species seemed to coexist. Hominid evolution was looking less like a straight line and more like a bush, with branches shooting off throughout its history. Because our branch happens to be the one still left on Earth, we fooled ourselves into thinking that we were the product of linear progress.
Tim White has never bought this line. White, a paleoanthropologist at the University of California at Berkeley, has carried out some of the greatest work on human evolution. (Among other things, he found those 160,000-year-old Homo sapiens.) White has argued that simply finding some differences between hominids that lived at the same time doesn't always indicate a vast diversity of hominids. He has maintained that with so few fossils of really old hominids, we simply don't have a big enough sample to draw any hard conclusions. We don't know which traits in a given fossil were new features in their evolution and which were hold-overs from primitive ancestors. The hominid family tree could be bushy, or it could be more like a Charlie Brown Christmas tree, with a few scant branches. No one could say.
That's where the teeth come in. White and some of his long-time coworkers (Yohannes Haile-Selassie of the Cleveland Museum of Natural History and Gen Suwa of the University of Tokyo) have been studying some extremely old fossil remains from Ethiopia for years now. Some of the bones belong to a small-brained, possibly bipedal species called Ardipithecus ramidus, which lived some 4.4 million years ago. In 2001, they found some older teeth, dating back 5.2 to 5.8 million years ago, that they initially thought might belong to a subspecies of A. ramidus. But in Science today they reported that the teeth are different enough to belong to a species of their own, which they call Ardipithecus kadabba. The new species had distinctively long canines, it appears, and a bite pattern more like that of chimps than of later hominids (including A. ramidus).
At first this discovery might seem to add yet another branch to the bushy tree of hominid evolution. But Haile-Selassie, Suwa, and White argue otherwise. They found some significant similarities between A. kadabba and the two other hominids known from the same period, Orrorin and Sahelanthropus. They even propose that all three species should be lumped together in the same genus, in the same way chimpanzees and bonobos are lumped together. It's even possible that they all belong to the same species. In other words, what seemed like bushy branches at the very base of the hominid tree should be pruned down to a single trunk. Perhaps there's something to the march of progress after all.
Your teeth have secrets of the ages embedded in them. So be sure to floss daily.
Update: Friday 3/5/04 9:40: Kate Wong in Scientific American gets it.


I was puzzled by an article in today's New York Times called "Researchers rewrite first chapter for the history of medicine." William Honan, the reporter, announced that "an art historian and a medical researcher say they have pushed back by hundreds of years the earliest use of a medicinal plant." Until now, he wrote, the oldest evidence dated back to 1000 BC, but now researchers had discovered a picture 3500 years old showing a Greek goddess overseeing crocus flowers being made into medicines.
This painting will certainly tell historians a lot about medicine in ancient Greece, but the article pretends that it has something to say about the origin of medicine itself. That's absurd. People all over the world have well-established traditions of using medicinal plants. Did Australian aborigines and Incas in Peru copy the ideas of the Greeks? How would they even hear about them? It's far more likely that the common ancestors of all of these far-flung people understood that some plants could cure diseases. That would put the origin of medicine back 50,000 to 100,000 years ago, with the dawn of our species in Africa. If that's true, a 3500-year-old picture has nothing to tell us at all.
Some other lines of evidence suggest that the use of medicinal plants actually goes way, way back in history. Michael Huffman, a primatologist at Kyoto University, has spent years watching great apes medicate themselves. For example, apes can purge intestinal parasites by swallowing leaves loaded with poisons. (Here's an abstract of Huffman's latest review of the evidence. Here is the full text of a 1996 paper.)
If chimpanzees and gorillas are self-medicating with plants, as Huffman argues, then it's likely that the common ancestor of them and us--dating back some 8 million years--was doing it too. They may not have known what they were doing in the same way that we do. But as the mental power of hominids grew--particularly after 2 million years ago--they would have gradually become more conscious of the link between disease, drug, and cure. Looking for the dawn of medicine in archaeology, rather than in human evolution, is like looking for stars through the wrong end of a telescope.


In my last post, I wrote about how our genes work in networks, much like circuits made of elements wired together in various ways. As genes are accidentally duplicated, mutated, and rewired, old networks can give rise to new ones. It's pretty clear our ancestors could have never become particularly complex if not for this sort of network evolution. As they acquired nerves, muscles, and other tissues, animals needed to organize more and more genes into new circuits. But in saying this I don't mean to imply that single-celled microbes, such as bacteria, live without gene networks. Far from it. In fact, in many ways bacteria are more adept at network engineering than we are.
Evolution has engineered the networks of bacteria with many of the same tricks that produced our own. As one bacterium divides into two, all sorts of mistakes can creep into its duplicating DNA. As one generation inherits gene networks from its parents, the networks can slowly change.
But bacteria can also do something else we virtually never do: they can swap genes. The genes may be carried by viruses that jump from one bacterial host to another; in other cases, bacteria slurp up DNA from dead microbes and insert it into their own genomes. In still other cases, genes can spontaneously slice themselves out of one genome and get inserted in the DNA of a distantly related species. The most famous example of this process is antibiotic resistance. One reason that resistant bacteria can spread so quickly in a hospital is that inheritance is not the only way these microbes can get hold of the genes that can fight off a drug. Every now and then, the genes get transferred from one species to another; the lucky bugs that receive them soon outcompete their cousins who lack the defense. Horizontal gene transfer, as it's known, may involve a single gene or an entire network of genes. And when two networks arrive in an alien genome, they can combine together into a bigger network that can do something entirely new. Horizontal gene transfer gives bacteria an extra dimension of creativity.
Our penchant for pollution has given bacteria a new opportunity to flaunt this extra creativity. Over billions of years, they evolved the ability to eat just about any source of carbon on the planet. But in the past century we have created synthetic chemicals that bacteria have never faced before (or faced in only tiny amounts). In many cases, these chemicals kill off most of the bacteria that encounter them. Over the years, though, strains have emerged that can not only survive exposure to these pollutants but can even devour them. Scientists have unpacked the genomes of these hardy microbes to figure out how they evolved a solution so quickly. It turns out that microbes are swapping genes and gene networks, and then assembling them into networks that can handle the chemical at hand. Last year, for example, scientists looked at the bacteria that thrive in ground water near a Texas Air Force base polluted with fuel. One strain of bacteria there can break down chlorobenzene with a series of enzymes. This chlorobenzene-destroying network actually is the product of two smaller networks that can each be found in other bacteria strains in the same ground water. One turns chlorinated benzenes into another compound known as chlorocatechol. The other breaks chlorocatechol down into smaller molecules. Only in the strain studied by the scientists did these two networks come together to create an entirely new kind of metabolism.
These bacteria show an evolutionary nimbleness we will never enjoy. But it may be possible to harness them to clean up the messes we make.
(For more information, see this fascinating survey in the March issue of Nature Reviews Genetics.)


As biologists figure out more about how life is, they can then figure out how it got to be that way. First there were genes. Mendel noticed that somehow the wrinkles on wrinkled peas could be transmitted down through the generations, even if some of those generations had no wrinkles at all. It turned out that the wrinkles were the result of a gene; a different version of the gene produced smooth peas. For much of the twentieth century, evolutionary biologists worked out how changes in genes produced evolutionary change. A mutation that alters one position in a gene (or chops out a whole chunk of it) can alter the protein it encodes. As the proteins on a virus mutate, for example, their shape becomes harder for an immune cell to recognize.
But towards the end of the twentieth century, it became clear that the protein-coding sequence is not the whole story. For example, many genes are equipped with on-off switches. Only if other proteins toggle these switches on will a gene produce its own protein in a particular place and time. A slight tweak to one of these switches can produce a drastic change--adding or subtracting legs from a segment on an insect's body, for example. Some proteins destroy other proteins, while still others boost their supply. Some genes create proteins that can only work when they fuse to proteins made by a different gene. You can think of the genes as pieces of a complicated circuit, evolutionarily wired for some particular job, such as sensing a molecule or telling time.
How then do networks evolve? At first this can seem like an insurmountable problem. Consider a network of three genes that can only do a job if all three genes are working together. How then could the network evolve from two genes, let alone one? This is the basic "irreducible complexity" argument you sometimes hear from the Intelligent Design camp. They'd like you (or at least your local board of education) to think that you can't get there from here, and that someone must have designed the network from scratch. In reality, many scientists are now probing genomes to figure out how networks evolve, generating detailed hypotheses, testing them, and publishing their results--yet never once finding the need to utter the phrase Intelligent Design.
The key to network evolution lies in yet another way genes can mutate. Instead of just a small segment of its DNA changing, it's possible for an entire gene to get duplicated. Gene duplication happens a lot, judging from the many families of similar genes both in our own genome and those of other species. A copied gene would initially play the same role in the original network. But as it gradually mutates, it can take on a new function. Can it take up a new role in a new network? One clue that the answer is yes is that many networks are made up of related genes. Some researchers have proposed that all the genes in a network (perhaps even an entire genome) have to get duplicated at once in order to create a new network. But this large-scale copying may come with its own trouble: somehow, all the copied genes would have to stop interacting with the old network.
In the current issue of EMBO Reports, scientists at the University of Manchester in Great Britain offer a more humble way to build a new network. They suggest that it can happen one duplicated gene at a time. Imagine that one gene in a three-gene network gets duplicated. A mutation prevents it from interacting with the original three. Then it gets duplicated in turn, and these two genes start interacting in a tiny network of their own. Another duplication, and there are three genes at work in a fully-functional network that's completely isolated from its parent.
It would have been vaguely interesting if the scientists had stopped there, but then they figured out a way to test their hypothesis. They studied a family of genes that produce molecules called basic helix-loop-helix proteins (bHLH). These genes form several networks in our own bodies and in those of other animals. By linking with one another in different combinations, they can do all sorts of work in the cell, from sensing signals from the environment to keeping cell division under control. The history of these networks, the researchers realized, should be preserved in the genealogy of the genes. Say that some ancestral bHLH network was copied all at once. Then each gene of the new network should be most closely related to the gene playing the same part in the old network. But if, as the scientists propose, new networks are built a gene at a time, then all the genes in a new network should be closely related to each other, and only distantly to the old network. When they drew the bHLH family tree, that's what they found.
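To make those two predictions concrete, here's a minimal sketch of the two tree shapes, written as Newick strings (the gene names a1 through c2 are placeholders I've made up, not real bHLH genes):

```python
# An old three-gene network (a1, b1, c1) gives rise to a new one (a2, b2, c2).
# Two hypotheses predict two different shapes for the family tree of the genes.

# Whole-network duplication: each new gene pairs off with its counterpart
# in the old network.
whole_network_copy = "((a1,a2),(b1,b2),(c1,c2));"

# Gene-by-gene growth: the new network budded off from a single duplicated
# gene and expanded on its own, so its genes cluster with each other and
# only distantly with the old network.
gene_by_gene_growth = "((a1,b1,c1),(a2,b2,c2));"

print("whole-network duplication predicts:", whole_network_copy)
print("gene-by-gene growth predicts:      ", gene_by_gene_growth)
```

The bHLH genealogy the Manchester team recovered matches the second shape, not the first.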
What's particularly remarkable about this work is what it means about the way new networks evolve. Each one budded off from an old network as a single duplicated gene. But over time, as the new network expanded with additional gene duplications, the new network came to look and act a lot like the old one. Each network, for example, is organized around a hub of a few genes that can interact with a constellation of other genes. Stephen Jay Gould famously asked whether life would take the same form it has today if you replayed the tape. Gould thought that there were so many contingencies that could push life off on another path that the answer must be no. But when it comes to gene networks, it appears that the tape may play just about the same.
(Update, 3/1/04 8 am: Link to paper fixed, along with a few typos.)


Apologies for the long radio silence. Travelling and the obligatory pre-travelling frenzy shut down the blogging assembly line for a couple of weeks. Having wrapped up my west-coast jaunt (thanks to the great crowd that came out for the C-SPAN taping at Stanford), I can write a bit about some of the new science that has caught my eye.
Crouching on top of the pile are howler monkeys. Howlers have become frequent visitors to the Loom, much to my surprise. For some reason they've recently started to have a lot to say about evolution--particularly, as odd as it may seem, about the evolution of our own species. As I wrote in an earlier post, we humans have good eyesight compared to many other primates. We have three genes that make receptors for light in our eyes, each sensitive to its own band of the spectrum--red, green and blue. The combined sensitivity of these genes lets us tell the difference between yellow, orange, pink, and red. Other apes and monkeys in the Old World also have trichromatic vision, as it's called. On the other hand, almost all monkeys in the New World have only two color genes, as do lemurs, which are the most primitive of living primates. One gene is sensitive to blue, and the other is broadly sensitive to the red-to-green part of the rainbow. As a result, they can't discern colors as well as we can.
Scientists have proposed that the first primates had only the blue and red/green genes. When some monkeys colonized the New World, they took with them this poor color vision. Only later, in the ancestors of today's Old World monkeys and apes, was the red/green gene accidentally duplicated. The two copies gradually mutated until they became sensitive to different colors. What would drive the rise of better color vision? It seems that some 30 million years ago, the climate in Africa cooled and dried, altering the forests. Leaves became a much more abundant source of food than before. With eyes sensitive to the colors grading between red and green, Old World monkeys could make out tender young leaves lurking in the dappled foliage.
Enter the howlers. Unlike all other New World monkeys, howlers eat a lot of leaves. And it turns out that unlike all other New World monkeys, they also have trichromatic vision. They appear to have independently evolved these genes some 10 million years ago.
Another striking thing about Old World monkeys and apes is their sense of smell. Many of the genes (half or more) that build receptors in their noses are broken. In other words, they have mutated to the point that a nerve cell can no longer use them to build a receptor. Mice and dogs, which have acute senses of smell, have mostly intact olfactory receptor genes. So do lemurs, and so do almost all New World monkeys. One possible explanation has to do with food. To check fruit to see if it's ripe or rotten, it helps to have a keen sense of smell. But if you're eating leaves, smell becomes less important than vision. Howlers, as leaf eaters, offer an independent test. Not only do they have trichromatic vision, but they have lots of broken genes for smelling.
Now here's the twist: noses can do more than just smell. In many land vertebrates, there's a special clump of neurons in the nose called the vomeronasal organ. This mysterious organ is specialized for detecting only one particular kind of molecule: pheromones given off by other animals. Many animals can recognize relatives by their pheromones, and males can tell whether females are receptive for mating by sniffing pheromones in their urine or released from special glands. But some land vertebrates have lost some or all of their ability to detect pheromones. Birds, for example, don't have a vomeronasal organ. Nor do Old World monkeys and apes. Regardless of what some ad may promise about pheromone-laced cologne, we humans have little if any ability to detect pheromones. The genes that build pheromone receptors in other species are broken in our own genome.
One explanation for our missing vomeronasal organ is that our eyes destroyed it, much as they destroyed our sense of smell. With powerful eyes for searching for leaves, our ancestors became more sensitive to visual displays in the opposite sex. The females of many Old World monkeys and apes get red, swollen genitals when they're ovulating; males take that as a signal to try to mate. As these primates depended more on this visual language of love, their pheromones became less important. Birds support this hypothesis--they have four genes for color, giving them even better color vision. And instead of pheromones, they depend on beautiful feathers and combs to attract mates. (Female humans, along with the females of a few other Old World primates, now conceal their ovulation. That shift did not, however, bring back our vomeronasal organ.)
Recently, a group of researchers asked the next logical question: what about the howlers? In a paper in press at Molecular Biology and Evolution, they reported a surprising result: howlers have plenty of perfectly good pheromone genes. So three-gene color vision doesn't automatically wipe out pheromones. There are a couple potential explanations. One is that the link between vision and a loss of pheromones doesn't exist at all. The other--which the authors of the report favor--is that good color vision only raises the possibility of abandoning pheromones. They point out that Old World monkeys and apes tend to live more on the ground than their New World cousins, in open forests and savannas as opposed to dense jungles. It's a lot easier to see a distant potential mate in Tanzania, in other words, than it is in Brazil. For howlers, pheromones may still have an edge, even with color vision.
I have no idea what secrets howlers will reveal next. I'm assuming that they didn't invent the axe, the wheel, and the jet ski on their own. But beyond that, nothing will surprise me.


My wife and I have two lovely daughters: Charlotte is two and a half, and Veronica is seven weeks. And we are tired. We think of ourselves as being on the losing end of a tag-team wrestling match--particularly at about seven in the morning, after Veronica has gone through a few hours of pre-dawn nursing, squirming, groaning, crying, spewing, and nursing. Just when she has faded off into angelic sleep, Charlotte wakes up from a long restful night and wants to eat Cheerios, do some jumping jacks, and type on my laptop pretty much all at the same time. It's like the Destroyer giving the Crusher the high-five as one goes out of the ring and the other comes in to deliver the final flying scissor kick.
I've looked for some enlightenment about this daily bruising from evolutionary biologists. For them, these golden years are all about energy and information. In order for a child to thrive--and, ultimately, to pass on its parents' genes--it needs a lot of energy to grow. Getting enough milk in the first year or two of life makes a huge difference to a baby’s health. But a mother can't just nurse her baby on some rigid schedule--four ounces at noon, and then four at midnight--because a baby's hunger is influenced by everything from the weather to its mother's own changing health. She needs a sign, and her baby is happy to give her one, in the form of a cry.
The parental brain is finely tuned to a baby's cry; in the middle of the night it brings us stumbling over to see what's the matter. We’re pretty normal as animals go in this respect—when a bird comes to its nest and hears the sound of hungry squawks, it automatically rushes off to catch more bugs. Cuckoo birds take advantage of their slavish dedication to these squawks. They lay an egg in the nest of another bird, such as a reed warbler, and when the new cuckoo hatches it kicks out the reed warbler chicks. Yet the reed warbler parents feed the cuckoo that killed their family. Why? Because the cuckoo can mimic the sound of a nest full of reed warblers.
In the 1970s, the biologist Robert Trivers had an unsettling realization: a mother's own child is a bit like the parasitic cuckoo. She and her child only share half of their genes, which means that their evolutionary interests aren’t the same. A baby has the best shot at surviving to adulthood and having babies of its own if it gets as much food, protection, attention, and so on from its mother as possible. And anything that a baby can do to get all this may boost its odds of success. In the womb, for example, a fetus sends out signals that increase the flow of nutrients from its mother's blood vessels.
But what's good for the baby is not entirely good for the mother, evolutionarily speaking. The best strategy for a mother to pass on her genes may be to spread her energies out evenly to all her children. Bearing and raising children is hard work, particularly for humans, and if a mother works too hard fostering one child, she may have fewer resources for her next one. Her genes will have a better chance of getting passed down if she can keep the manipulations of any individual baby in check. Mothers, for example, seem to slow down the growth of their babies in the womb. As a result, the average baby is not born at the optimal weight for avoiding an early death. It's a little on the light side. Only an evolutionary tug of war can explain that gap.
Once out of the womb, baby still struggles with mother. The baby still needs milk, warmth, and protection. Its mother, on the other hand, may have a different unconscious agenda. If she wants to have another child, she needs to switch her baby eventually from high-energy milk to low-energy food. (Nursing lowers the chances of getting pregnant.) The conflict gets even tougher if the baby is weak or the mother is struggling to survive herself. It may be better to cut her losses and hope the next baby has better luck.
A baby is not helpless, though. After all, it has a direct line into its mother's head. Babies may manipulate their mothers into offering them more care with signals like crying. According to one theory, crying is a kind of "honest advertising" to convince a mother a baby's worth the effort. Crying, after all, doesn't come for free--it may actually double a baby's metabolism. So by crying, a baby may be saying, I can afford to waste this energy because I'm such a strong kid. Crying-as-advertising might solve the mystery of colic—the inconsolable wails of some children who otherwise seem perfectly healthy. They may just be trying particularly hard to impress their parents. (Here's a post about how the colors of autumn leaves may also be honest advertising, sent from trees to the insects that eat them.)
The tantrums and clinginess of older babies may just be new variations on this basic strategy. As mothers slowly try to wean their kids, the kids respond by getting in as much nursing and attention as they can. The more the child can nurse, the longer it will take for its mother to have another child.
Studies on our primate cousins back up these theories. It turns out that infant monkeys make about ten times more contacts with their mothers than vice versa, and that the mothers push away the babies as they get older. They even start ignoring their babies' distress calls--because often these calls turn out to be false alarms. (My personal favorite is the observation that young monkeys and apes sometimes jump on adults during sex. One chimp that was adopted by a married couple apparently jumped on them as well.)
But there's a flip side to this hypothesis: if it's the product of evolution, it must be partly the result of genes. In the February issue of the journal Behavioral Ecology and Sociobiology, Dario Maestripieri of the University of Chicago reports his elegant study of the genes behind the mother-child struggle. At a colony of rhesus macaque monkeys, he found 10 babies who were all born within a day of each other. He shuffled them among their mothers, and the foster mothers raised the babies as if they were their own. Maestripieri then watched how they got along as the babies grew older and the mothers prepared to have another baby.
Not surprisingly, these foster families got into more conflicts as the mothers approached their next opportunity to mate. But Maestripieri also found that some babies became pushier than others, while some mothers brushed them off more than others. And when he compared the foster children with their biological mothers, he found a genetic link between them. The clingiest infants had biological mothers who tended to rebuff their foster children. In other words, the pushy-baby genes and the tough-mom genes were bundled up as a package. As mothers become tougher, the genes for pushier babies are favored along with them. Maestripieri has taken a snapshot of a struggle between parents and children that has lasted for millions of years.
All of this doesn't help me feel more awake this morning, but at least it helps to remind me that Charlotte and Veronica aren't in this tag-team match out of personal spite. It's just evolution, Dad.


“Abominable” is not the sort of word that most people would associate with flowers, but for Darwin, it was a perfect fit. He saw life on Earth today as the result of millions of years of victories and defeats in the evolutionary arena. Flowering plants, by that reasoning, were among the greatest champions of all. There are some 250,000 known species of flowering plants, and the total is probably double that. The closest living relatives of flowering plants (pine trees, firs, ginkgos, and the like—collectively known as gymnosperms) make up a grand total of just over 800 species. These numbers are all the more remarkable when you consider that flowering plants emerged around 140 million years ago, and gymnosperms had already been on Earth since at least 360 million years ago. In over twice the time, gymnosperms have produced less than 1% of the diversity found in flowering plants. What was the secret of flowers? Darwin wrote to his friend the botanist Joseph Hooker that this question was “an abominable mystery.”
Darwin assumed that there must be one secret to be found, and since then most botanists have agreed. Perhaps it was their ability to be pollinated by bees and other insects, or perhaps the way animals that ate their fruit could disperse seeds in their droppings. In order to test hypotheses like these, scientists need to figure out how all the species in question are related, to see exactly when the explosion of diversity took place over the course of their evolution. But when you’re talking about 250,000 species, that’s no easy task.
In recent years, different teams of scientists have compared the genes of different species of flowering plants, using the comparisons to draw evolutionary trees. Researchers from Florida and England developed mathematical methods to combine 46 of these trees into a “supertree.” While this tree does not actually have a twig for every species of flower, its 379 branches include every family of flowering plants. This is only a step towards the full tree of flowering plants, but it’s a huge one—just ten years ago scientists thought such a goal was impossible to meet. The researchers were also able to plot the tree against time, estimating the dates at which major new groups of flowers evolved. The scientists present their tree today in the Proceedings of the National Academy of Sciences.
Supertree now in hand, the scientists then tried to solve Darwin’s abominable mystery. Did flowers explode in diversity at one particular point in their history? Or was there one particular strategy for survival used by some flowers that made them more diverse than other flowers?
None of the above, it appears. Early flowers did undergo a small burst of diversity, but there have been many bursts since then, and they don’t seem to follow any one rule. There is no one secret to the success of flowers, but perhaps many small ones waiting to be discovered. It’s possible, for example, that in some cases flowering plants thrived particularly well as the planet gradually cooled. (Gymnosperms may, by contrast, be locked into a biology better suited to the much warmer climates of 200 million years ago.) Darwin’s abominable mystery has split into many small mysteries, like a pool of mercury breaking into an elusive flock of droplets. But when scientists finally create a species-level tree of flowering plants (an achievement that may take decades), they may finally pinpoint some of the solutions.


A while back I had the pleasure of joining a team of scientists and teachers to build a web site that explains evolution. Funded by the National Science Foundation and the Howard Hughes Medical Institute, it charts the history of evolutionary thought (both before and after Darwin), and lays out the different lines of supporting evidence for evolution, as well as its relevance to everyday life. It addresses some of the common misconceptions about evolution, and explains the nature of scientific inquiry. Science teachers can also find ideas for lesson plans and tips for answering common student questions. It's now live, and I think they've done a great job of creating an elegantly simple way to navigate lots of information. (I'm speaking as someone who barely knows the difference between Java and HTML.)
I contributed the history section of the site. Writing for the web reveals to me some of the illusions that ordinary writing can create. History--particularly the history of ideas--does not proceed in a linear fashion like lines of words across a page. It is more like an expanding net, with different people influencing each other across the disciplines and from centuries past. On the Understanding Evolution site, we decided to lay out the history of evolutionary thought as a set of tangled branching vines, with plenty of links joining it all together. While books remain my first love, I'll admit that the web sometimes gets closer to the shape of reality.
Anyway, check it out. As always, comments are most appreciated.


Georgia's state schools superintendent, Kathy Cox, has backed down from her ban on the word evolution.
While this is excellent news, Georgia is still left with an incompetent superintendent. For one thing, she thinks Intelligent Design is an acceptable theory to teach in schools. For another, she justified removing the word evolution from state science standards by saying: "By putting the word in there, we thought people would jump to conclusions and think, 'OK, we're going to be teaching the monkeys-to-man sort of thing.' Which is not what happens in a modern biology classroom."
In a sense I'm sure she did not intend, Cox is right. Monkeys did not evolve into humans, any more than your cousins are your grandparents. In a modern biology classroom, students learn that molecular and fossil evidence indicate that monkeys and humans descend from a common ancestor that lived some 25 million years ago.
I'm pretty sure she meant something else though, something that is yet another reason why she should resign.


"Conservative board members said they wanted to make sure that schools teach sound science, arguing that evolution is a flawed theory that cannot be proven."
--"In Kansas, key decision on teaching evolution" Associated Press, August 12, 1999.
If the new fad of "sound science" takes hold in Washington, I bet we'll see creationists taking it up again as well, challenging the government funding of "unsound" research into evolution. Stay tuned.


Charles Darwin had no great hope of witnessing natural selection at work in his own time. He assumed that it would operate as slowly and imperceptibly as the water that eroded cliffs and canyons. He would have been delighted to discover that he was actually wrong on this count. By the mid-1900s, scientists were running selection experiments in laboratories and beginning to document the effects of natural selection in the wild, such as the rise of insects that were resistant to pesticides. Still, the work has been slow and painstaking. Peter and Rosemary Grant of Princeton have done some of the best work on natural selection in the wild, documenting its effect on Darwin's finches in the Galapagos Islands. (Changes in climate lead to changes in the food supply, which in turn lead to changes in the birds' beaks.) The Grants have dedicated 30 years of research to the evolutionary fate of this small group of birds. The slowdown comes in part from the months or years that animals need to reproduce, generating the new mutations and rearrangements of DNA that natural selection needs in order to operate. So what would serve as a better case study?
A virus.
Viruses can replicate madly even in a single sick person, and in some cases they can spread across the planet in months. Added benefits include their high mutation rate--which means that they undergo natural selection even faster--as well as their tiny genomes, which make it far easier to pinpoint the changes that occur during evolution.
Virologists have been studying the evolution of a number of viruses in recent years--flu, dengue, HIV, and so on. And when SARS broke out, they were ready. A team of researchers from China and the University of Chicago studied the virus from early in the outbreak in late 2002 to its dying down in February 2003. They have painted a remarkable portrait of natural selection sculpting a virus for a new host.
The early strains were most like the strains in civet cats, which seem to be where at least part of the human virus came from. The virus did a bad job of infecting humans at that point, in large part because its machinery for invading cells was a bad match for our biology. But then new variants emerged. They tended to lose DNA from one particular gene. In addition, the researchers discovered some 299 individual sites where one nucleotide was substituted for another. The researchers showed that these substitutions altered the proteins made by the SARS virus more often than would be expected by chance--a sign that natural selection was at work. As the virus changed, it became far better at infecting humans, to the point that a single person might infect hundreds of new hosts. At this point, the mutations that emerged in the virus were far less likely than chance to alter the SARS proteins. This is a sign of purifying selection at work--most mutants that strayed even slightly from the new winning recipe were outcompeted. As a result, by the end of the outbreak, the exuberant bush-like growth of the SARS tree had dwindled to a few successful branches. And all of this evolutionary change took place within just three months. This is more than just an awesome glimpse at evolution in the wild (the wild being our own bodies). It's insights like these that will help scientists make vaccines to control SARS.
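For the curious, the signature the researchers were reading comes down to codon-level bookkeeping: line up two versions of a viral gene, walk through them three bases at a time, and ask how often a substitution changes the encoded protein. Here's a bare-bones sketch of that bookkeeping (in Python, with toy sequences I invented; the actual study used far more rigorous statistics across the genomes of many SARS isolates):

```python
# Classify substitutions between two aligned coding sequences as synonymous
# (protein unchanged) or nonsynonymous (protein altered). An excess of
# nonsynonymous changes over the neutral expectation hints at positive
# selection; a shortage hints at purifying selection.

CODON_TABLE = {  # the standard genetic code
    'TTT':'F','TTC':'F','TTA':'L','TTG':'L','CTT':'L','CTC':'L','CTA':'L','CTG':'L',
    'ATT':'I','ATC':'I','ATA':'I','ATG':'M','GTT':'V','GTC':'V','GTA':'V','GTG':'V',
    'TCT':'S','TCC':'S','TCA':'S','TCG':'S','CCT':'P','CCC':'P','CCA':'P','CCG':'P',
    'ACT':'T','ACC':'T','ACA':'T','ACG':'T','GCT':'A','GCC':'A','GCA':'A','GCG':'A',
    'TAT':'Y','TAC':'Y','TAA':'*','TAG':'*','CAT':'H','CAC':'H','CAA':'Q','CAG':'Q',
    'AAT':'N','AAC':'N','AAA':'K','AAG':'K','GAT':'D','GAC':'D','GAA':'E','GAG':'E',
    'TGT':'C','TGC':'C','TGA':'*','TGG':'W','CGT':'R','CGC':'R','CGA':'R','CGG':'R',
    'AGT':'S','AGC':'S','AGA':'R','AGG':'R','GGT':'G','GGC':'G','GGA':'G','GGG':'G',
}

def count_substitutions(seq_a, seq_b):
    """Walk two aligned coding sequences codon by codon and tally the changes."""
    synonymous = nonsynonymous = 0
    for i in range(0, min(len(seq_a), len(seq_b)) - 2, 3):
        codon_a, codon_b = seq_a[i:i+3], seq_b[i:i+3]
        if codon_a == codon_b:
            continue
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            synonymous += 1        # the DNA changed, the protein did not
        else:
            nonsynonymous += 1     # the protein itself changed
    return synonymous, nonsynonymous

early = "ATGACCGGATTAGAA"  # hypothetical "civet-like" snippet
late = "ATGACAGGCTTAGAT"   # hypothetical "human-adapted" snippet
print(count_substitutions(early, late))   # -> (2, 1)
```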
You'd think that breakthroughs like these would get people fired up about the promise of evolutionary biology. But apparently the state school superintendent in Georgia would prefer that her students remain in the dark. I guess there must be no SARS in Georgia.
Correction: Thanks to Stan Jones for pointing out that the superintendent is a woman, not a man.


Last week I wrote a post about some new research that suggests that global warming could trigger large-scale extinctions in the next few decades. In particular, I dissected some of the objections that were leveled at the study, pointing out how irrelevant they are to the actual science at hand. Some people who posted comments raised a question that I didn't talk much about: how did biodiversity respond to rapid climate change in the not-so-distant past?
After all, in the past 2.5 million years (known as the Quaternary Period) Earth's climate has become particularly jumpy. It has swung in and out of ice ages that have lasted tens of thousands of years. This warm-cold cycle has been punctuated by sudden jolts, such as a 1,200-year-long period called the Younger Dryas that occurred some 12,700 years ago. The climate had almost completely recovered from the last ice age, when average temperatures dropped 10 degrees or more and remained cold for more than 1,000 years. Then the Earth abruptly warmed again, in some places by as much as 15 degrees in a decade. If global warming shrinks the ranges of species until they go extinct, then shouldn't massive die-offs have happened 11,500 years ago?
By coincidence, I also happened to come across a paper to be published in the Philosophical Transactions of the Royal Society by G. R. Coope, a paleontologist at the University of London. For over 30 years he has been studying the shells of beetles that have been preserved for thousands, even millions of years. (The glittering items in the picture come from long-dead beetles.) Through the Quaternary, beetle species have moved far--from Britain to Tibet, for example. But Coope finds little evidence of beetles going extinct in great numbers. As he puts it, "They indicate that insect species show a remarkable degree of stability throughout the Ice Age climatic oscillations." Shouldn't Coope have seen repeated rounds of massive extinctions instead?
I decided to put these questions to a co-author of the paper that caused all the controversy in the first place, Oliver Phillips of Leeds University. "There are several quite complicated issues here," Phillips replied in an email. "Extinctions caused by climate change will be related to a number of factors." The rate at which it happens matters, as do the starting and end points of the shift. Not only that, but what you could call the "evolutionary experience" of a species matters too--which creates, in Phillips's words, "the genetic capacity of the species to adapt, migrate, or simply persist." This complexity means that you can't simply say a rise of five degrees will drive some fixed number of species extinct. Consider how a sudden rise of five degrees during a heat wave can kill thousands. A slower rise of five degrees on a winter day probably won't kill anyone at all.
"If we compare scenarios for the coming century(ies) with Quaternary history, there are both parallels and differences," says Philips. "For each factor, will the future match the past?" In two important cases, the answer seems to be no: where we're starting from and where we're going. This round of global warming is not beginning in the depths of an Ice Age (or even of a Younger Dryas cold snap). Instead, we may be making a warm world warmer. In fact, if the projections turn out right, we are actually pushing the planet out of the range of temperatures experienced during the Quaternary ice age cycles within 100 years. The planet has been gradually cooling for over 50 million years, and the past 2.5 million years of ice age swings have just been ripples on this falling wave. In centuries to come, the planet may warm to a level not seen in dozens of millions years. When the world warmed at the end of ice ages, giant glaciers retreated from across the temperate zones, revealing new habitats, while isolated tropical species were able to spread out as the tropics became lush and moist. Not exactly comparable to what's going on now.
The ice age cycles have been going on for so long now that many researchers--Coope included--suspect that a number of plants and animals (humans included) have evolved adaptations to these fluctuations. Beetles, for example, could tolerate the advance of a new ice age or a sudden warming by shifting their range. But they may have been adapted to making these adjustments only within the so-called climate envelope that has dominated the Quaternary. Their versatility, in other words, may work only at temperatures cooler than the ones they are now starting to experience. "This logic lies behind the climate-envelope approach we used," Phillips explains. It may be significant that the one time that the fossil record of beetles shows them taking a hit is right at the start of the Quaternary Period, when the planet cooled abruptly and snapped into its cycle of ice ages. "So, the lesson from the fossil record is that when we have a major shift well beyond geologically-recent boundaries we have a major extinction," says Phillips.
What may make the coming climate change even more brutal on biodiversity is the fact that it is happening in a world very much unlike the world 11,500 years ago, or at any other point in Earth's history. We humans are reworking the planet in all sorts of ways. Phillips says this is "probably the 'killer' factor here, and one which we didn't even include in the paper." Forests can't follow their climate to a new place if that new place is given over to farming. Biological invasions are disrupting ecological communities, making them less able to handle a stress like climate change. And the flux of compounds other than carbon dioxide (nitrogen, for example) is altering the biosphere in unprecedented ways. All in all, we're entering a new geological period. Say farewell to the Quaternary, and say hello to the Anthropocene--a time when nothing in nature is untouched by human influence.


If you'd like an example of the latest rhetorical tricks being used by antievolutionists, you can't do better than this press release issued today by the Discovery Institute. The Minnesota legislature has to choose between two drafts of state science standards written by a committee. A minority of the committee wrote the second draft, which requires that "weaknesses" of evolution be taught. The Discovery Institute (a well-funded cryptocreationist outfit) is trying to mess with biology class, as it has in states across the country.
DI would like to convince us that science is like politics--that there is a middle ground, surrounded on either side by the radical fringe. And DI would also like you to believe that they occupy that middle ground. Seth Cooper of DI tells us that legislators have the chance to let students learn about evolution "fully and fairly," rather than being "held hostage to the demands of extremes on either side of the debate."
So, on one side, we have those who would "like religious views to be presented in biology class," and on the other, we have people who recognize that evolution is as well established a scientific theory as the germ theory of disease or quantum theory. In the middle, we have the Discovery Institute, which supports requiring "students to be able to distinguish between changes existing within species (microevolution) and the emergence of new species and changes above the species level (macroevolution)."
Let's look at this bogus spectrum again. I wonder who exactly wants religion taught in biology classes. Is the Discovery Institute selling out other creationists? Of course not. The "Creation Scientists" of yore never claimed to teach religion in biology class. They had "scientific" proof that a flood created all geological features a few thousand years ago and had no need to open their Bible. For them, biology class simply provided an account of the world that they could feel comfortable with. If the Discovery Institute really is so set against the demands of this extreme, then they should work as hard against Young Earth creationists as they do against science standards. I see no evidence of this. In fact, Young Earth creationists have been happily embraced as fellows at the Discovery Institute.
On the other side of the spectrum, we have the other "extreme" that accepts evolution as a well-established but dynamic part of biology. Let's see who we've got here. Dozens of leading organizations of scientists. The authors of thousands of papers published in peer-reviewed journals. When scientists involved in the Human Genome Project offer insights into how a common ancestor gave rise to fruit flies, vinegar worms, and ourselves, apparently they are giving themselves away as extremists.
Then comes an outright lie.
"Cooper added that the minority report followed guidance from Education Commissioner Cheri Yecke, who had encouraged the standards committee to look to guidelines set down by Congress in the Conference Report of the No Child Left Behind Act. Congress urged states to present 'the full range of scientific views" on controversial topics "such as biological evolution.'
"Last fall, Commissioner Yecke received a letter from Congress stressing that this guidance in the No Child Left Behind Act Conference Report was the official position of Congress on science education. The letter was signed by Minnesota Congressman John Kline and Congressman John Boehner, chairman of the U.S. House Education and the Workforce Committee."
You would never guess from this passage that the wording about evolution was cut out of the bill before it became law. It is not Congress's official position.
Finally, the press release ends by urging Minnesota to "teach the controversy." The Discovery Institute would like to pretend that their specious claims are actually part of a scientific controversy. If that were true, then you'd expect them to be publishing new findings in Cell or The Journal of Biochemistry, and to be invited to give talks at major scientific venues like the Federation of American Societies for Experimental Biology. Instead, they whine with their bogus claims of censorship. Having been unable to make a dent in the scientific arena, they create a political controversy, through which they hope to get from high schools what they can't get from real science: credibility.
Pharyngula is a good place to see how things develop in Minnesota (its author is a University of Minnesota biologist). I hope that they can marshal the same spirited grass-roots opposition to this nonsense that has emerged in other states like Texas, Ohio, and Kansas.
Update 8PM: PZ Myers reports on Pharyngula that the first day of committee hearings today on the science standards featured a Young Earth creationist blaming evolution for venereal disease. I await a press release from the forces of moderation at the Discovery Institute, attacking this extremist. And wait, and wait, and wait....


Sometimes when you take a look at life on Earth, it seems like evolution might be able to produce anything you could ever imagine. Can a mammal become so adapted to swimming in the ocean that it never comes back on dry land? Check. Can a squid evolve eyes as big as dinner plates? Check. Can a mole evolve a nose that acts like a hand? Check. But what about the fact that no ape has ever grown antlers? Or that no bird has ever reached a fifty-foot wingspan? Or that, so far as anyone can tell, no animal has acquired hydrogen-producing bacteria in its gut and floated off like a living balloon? Are these dreams beyond the limits of evolution? If they are, where do you draw the border between the possible and the impossible?
These sorts of questions are easy to come up with, but they rarely come in a form that allows for a scientific answer. But for some years now, Frederik Nijhout of Duke University has been able to test the limits of evolution by experimenting with horned scarab beetles (Onthophagus taurus). The male beetles grow long horns that they use to compete over females. The horns are wonderfully long, and males with longer horns seem to have an edge over those with shorter ones. (Insert your size-comparison joke here.) Is there a limit to how long the horns can get? One limit emerges from how the beetle develops. A horn is the result of a tiny patch of cells in a beetle larva multiplying like mad. They need energy to do this, and the more energy they consume, the less may be available for neighboring cells. In the late 1990s Nijhout found that beetle horns are in an embryonic competition with beetle mandibles, which grow from cells right next to the cells destined to become horns. When he inhibited the growth of horns, beetles grew bigger mandibles, and vice-versa. Mutations that produce bigger horns run the risk of leaving a beetle with smaller mandibles, which would make it harder for the insect to eat enough to survive.
Now Nijhout has discovered that this competition is far more widespread than he initially thought. In a paper to be published in the February 2004 American Naturalist (full text here), he and Armin Moczek looked at the development of horns versus the development of another appendage essential to a male beetle--his genitalia. Unlike mandibles, however, a beetle's privates are far from its horn--in fact, on the other side of its body. Yet Nijhout and Moczek found that when they destroyed the cells destined to become the genitalia, the beetles grew horns that were on average 26% longer than on beetles on which they performed a sham surgery (cutting them open but leaving the genital cells unmolested).
The scientists point out that insect cells need more than just energy to grow--they are controlled by hormones and other regulating molecules. These molecules course throughout the body, and so a greedy organ in one part of the body may be able to stunt the growth of another one far away. Insects may be particularly vulnerable to this sort of constraint, because their adult body takes form very quickly as a larva goes through metamorphosis.
It turns out that only some male horned scarab beetles grow long horns and battle for mates. Others grow to only small sizes as adults and don't grow any horn at all. Rather than fight the big males, they sneak around, trying to grab a female on the sly. Freed from the burden of a big horn, they seem to have channeled their resources into their genitalia. Their genitals are bigger than those of the horned males, and their sperm is superior as well. The horned scarab beetles may not be able to escape their evolutionary constraints, it seems, but they certainly can do a good job of exploring the possibilities those constraints allow them.
Exactly what the horned scarab beetles have to say about men with big feet, however, remains for future scientists to determine.


If you've ever been to a Central American forest, you've probably heard the hoots and wails of a howler monkey. But these creatures deserve our attention for more than their howls. They turn out to tell us a lot about the evolution of our own senses. We and some of our close primate relatives are remarkable for having powerful color vision. What triggered the evolution of this adaptation some 25 million years ago? Some researchers have proposed that as the global climate cooled, our ancestors were forced to shift from a diet of fruit to leaves. An ability to detect red and green colors would have helped these early primates detect the best leaves to munch on. The descendants of these leaf-munching primates shifted to other foods in later years, but they held onto their color vision.
Before the ancestors of today's Old World monkeys and apes acquired color vision, primates had already spread to South America, a continent that was drifting away on its own. None of today's New World monkeys has trichromatic color vision--except for the howler monkey. And a major part of the diet of the howler monkey is, interestingly enough, leaves.
Meanwhile, other scientists have been studying the evolution of our sense of smell. Devolution might be a better name for it. We have hundreds of genes for the receptors that snag odor molecules in our nose. But more than half are broken pseudogenes, mutated beyond any use. Most of the corresponding genes in a mouse are in good working order. Given that mice depend profoundly on their sense of smell, that makes sense, and it also suggests that we have lost much of our distant ancestors' sense of smell. To test this hypothesis, a group of researchers recently did a major survey of the olfactory receptors in primates, looking for when in evolution our ancestors lost their receptors. They found that all Old World monkeys have significantly more pseudogenes than a more primitive primate, the lemur. So our noses have been on the downhill slide for some 25 million years. Could it be that a shift in diet from eating fruit--which depends on sniffing out ripe and rotten food--to eating relatively odorless leaves was the trigger? It's a neat idea but tough to test. You have to find another case in which the same shift happened and look at the noses involved. Oh, wait--the howlers. In a wonderful bit of evolutionary elegance, it turns out that, unlike all other (fruit-eating) New World monkeys, leaf-eating howlers also have lots of pseudogenes for olfactory receptors.
Comparing our genes with those of howlers reveals another interesting thing. It turns out that olfactory receptor genes mutate beyond use so fast that we should theoretically have far fewer working genes for smelling than we do. While smell may no longer guide us through the world the way it once did, it seems that it still pays to be able to tell when the mayonnaise that's been on the counter for a while isn't quite right.


The emotions that other species summon up in the human brain are perplexing. A lion inspires awe and respect. It is the king of the jungle, a great name for a football team, a noble guardian of the entrance to the New York Public Library. A tapeworm, on the other hand, summons disgust mixed with a little contempt. You will never find yourself cheering for the Kansas City Tapeworms. But are these species really so different? Both animals get their nutrition from the bodies of other animals, and tapeworms are arguably more sophisticated in the way they get their food than a lion. Tapeworms escape our immune systems with ingenious biochemistry, and may even be able to eat our antibodies as food. Some species that live in fish make the fish leap around at the surface of the water so they are easier prey for birds, the final hosts of the tapeworm. And is it any less gruesome to be torn apart by a lion than to be host to a tapeworm? The best that a parasite can hope for, if a parasite could ever hope, would be to inspire fear. That's the fate of parasitoid wasps, which, as I mentioned in a previous post, are the inspiration for the monsters of the Alien series. The precision of their cruelty, the intimate ways in which they can use up their hosts, give us chills. Yet they remain truly alien--a malevolence that is separate from the rest of the natural world.
But parasitoids are very much a part of nature, and what they do is really just a variation on what many other organisms do. It helps, I'd argue, to think of parasitoids as hackers. They have hacked the living code of their hosts--the combination-locks of cell receptors, the wiring of metabolic circuits, the calendars of life history. But parasitoids, as living things, can be hacked as well. And among their hackers are organisms that few would consider malevolent or cunning: orchids.
Flowers and insects have had intimate relationships for tens of millions of years. Honeybees, for example, will travel from flower to flower to gather nectar for food, and accidentally pick up pollen grains along the way. Some of those pollen grains will wind up on another flower of the same species, and will fertilize its seeds. Flowers have adapted to take advantage of these insect couriers with bright petals and convenient landing strips. The easier they can make it for insects to carry their pollen, the more successful they'll be in the evolutionary sweepstakes.
Many orchids have evolved particularly elaborate contraptions for insect pollination. One species keeps its nectar at the bottom of a foot-deep tube, so that only a moth with a foot-long tongue can reach it (and press its face into the orchid's pollen). Another orchid has a spring-loaded catapult that slaps pollen onto the back of bees as they walk toward the flower's nectar. But most elaborate of all are the adaptations of certain orchids in Australia and Europe that offer the insects nothing at all.
Take the parasitoid wasp Neozeleboria cryptoides. The females lay their eggs in insects; the males that emerge from the host grow wings, while the wingless females crawl around on the ground and up various plant stems. The females produce a pheromone in order to attract males. It takes less than a billionth of a gram of the stuff to let a male N. cryptoides pick her out from wasps of other species. It's also enough to let him distinguish between virgin females and ones that have already known the pleasure of another male's company. The male homes in on a suitable female and they mate, whereupon he lifts her up and carries her from eucalyptus tree to eucalyptus tree so that she can drink the juices secreted by aphids. He finally leaves her near an inviting host, where she can lay her eggs.
Most of the time this arrangement works pretty well for the wasps, but every now and then a male will get a shock. He is flying along when he gets the unique whiff of a female seeking a mate. He cruises low to the ground, where the female should be waiting on a plant stem. As the smell of pheromones gets more intense, he sees her long, slender body heave into view. He lands on her and takes hold, only to find that he has not actually found a female wasp. Instead, he's fumbling around on the end of an orchid flower.
In Australia and Europe, some orchids use love-starved males to spread their pollen. They produce exquisite replicas of the pheromones made by a species of wasp, and they grow lobe-shaped wasp decoys. When a male wasp crashes into the orchid, it gets covered with orchid pollen. If it gets fooled again, it allows the pollen to fertilize another orchid.
The closest relatives of many of these sexual deceivers are deceivers of another sort--orchids that produce the aroma of nectar without the nectar. This kinship hints that some ancestral food deceiver underwent a mutation that made it produce an aroma that was vaguely sexy to some insects. Over time, more mutations helped its descendants create the aroma of attraction instead of food. Some of the structures in the orchid flower began to swell, offering another illusion to the male wasps. The wasps themselves may well have driven the orchid's evolution. The orchid, like the wasp itself, is a parasite, demanding a toll from the wasps but offering nothing in exchange. Males learn to avoid orchid patches after getting tricked a few times, and any mutation that helps them do a better job of distinguishing between orchids and female wasps will give them an advantage over less discriminating suitors. To keep benefiting from sexual deception, the orchids need to continually reinvent their perfumes, making them harder to tell apart from wasp pheromones. In the process, they lost the flowery aromas left over from their ancestors, until their odor was pure mimic.
These orchids have not become a faithful replica of a female wasp, however. They offer an exaggerated set of the cues a male wasp relies on to recognize a female wasp. (It's a bit like the grotesque bust and rear of a woman in a soft-porn cartoon.) In this month's issue of the Journal of Evolutionary Biology, the Swiss biologist Florian Schiestl reports on what male wasps find sexy in an orchid. He was able to do this because earlier this year he isolated the orchid mimic of the N. cryptoides pheromone. Schiestl coated dummy wasps of different sizes with different amounts of synthetic pheromone. He then presented male wasps with pairs of the dummies and recorded which ones they chose.
He found that males prefer bigger females to smaller ones. Size is probably a good clue to the health of a female. The wasp-mimicking structures on the orchids are a third longer than the actual wasps and over five times wider, which just so happens to match the strongest preference of the male wasps. Anything larger was no more attractive than that. In other words, the orchid is perfectly exaggerated. The fake wasp body is as big as it needs to be, without being bigger.
Meanwhile, the male wasps are attracted to stronger pheromone scents over weak ones. Schiestl found that orchids on average produce 10 times more pheromone than female wasps. The flower can pump out vast amounts of pheromones compared to the females, most likely because the females are working under some special constraints. Many predators and parasites use pheromones as signals of potential prey, and so the wasps could risk death if they were too loud in their calls. But these wasp enemies pose no threat to the plants, and so they can shout as loud as they want. And since wasps don't bump into orchids all that often, the louder the better. As a result, an orchid is actually more attractive to a male wasp than a female of his own species.
But even now, the orchids haven't finished hacking into the wasp life cycle. After female wasps mate and their eggs begin to develop, they stop producing pheromones. The developing eggs give off another chemical that male wasps can recognize and which tells them that this female is no longer a virgin. Schiestl has also found that after some sexually deceptive orchid flowers have been pollinated, they release the same post-mating chemicals. The male wasps are repelled--perhaps towards one of the other flowers on the same plant that are still releasing come-hither signals.
In the comments to my previous blog about parasitoids, Walt pointed out that the next movie in the Alien series is actually going to be a battle between the alien parasitoid and the dreadlocked beast from the Predator series. (Here's the preview.) Now, one of my fondest memories of childhood was watching the various face-offs between Godzilla, Mothra, and the rest. But if nature is our guide, the alien should really meet its match in the luxurious, baffling embrace of a flower.


Last week I briefly mentioned some stark estimates about the potential extinctions that could be triggered by global warming. Since then, some global warming skeptics have tried to pour cold water on these results by making some dubious claims about natural selection and extinctions. While I have reported about global warming from time to time, I leave blogging on the subject to others (particularly David Appell over at Quark Soup). But in this case, evolution is drawn into the mix.
Here, in a nutshell, is what the scientists wrote last week in their Nature paper (which the editors have made available for free). They studied over 1,000 species in all sorts of terrestrial habitats, from Mexican deserts to Australian rainforests. Using information on their ranges, the scientists estimated the range of temperature, moisture, and other climate conditions in which the species currently persist (what they call a "climate envelope"). As the climate changes in the coming century, these climate envelopes will change shape. Some may expand towards the poles, others may slide, and others may shrink. To estimate the shape and position of these new climate envelopes, the scientists looked at the shift in these ranges in three different scenarios for life in 2050--a "small" change of 0.8-1.7 degrees C, a medium change of 1.8-2.0 degrees, and a big change of over 2 degrees. (These ranges come from IPCC projections, which in turn are based on a range of potential future emissions of carbon dioxide. For comparison, the planet has warmed an estimated 0.6 degrees C in the past century.)
Some species that can spread quickly may be able to move into their new climate envelope. Others that disperse slowly--animals that only live in isolated patches of heath, for example--may only be able to survive in the overlap between today's climate envelope and the envelope of 2050. Others still may simply have nowhere to go--Australia's rainforests, for example, are on the northern coast of the country. Global warming is predicted to chew away at their habitat from the south, but they can hardly find new territory in the ocean to the north.
The researchers estimated how much range all of these species will lose. Ecologists have long known that the number of species a territory can support scales in a predictable way with its size. That means that if a forest gets cut into a fragment, it will lose some species. If a species is found nowhere else, it becomes extinct. Based on this relationship between area and biodiversity, the scientists concluded that global warming will knock out a significant percentage of species. At the low end of global warming scenarios, they estimate 18% of species becoming extinct, and at the high end the number is 35%.
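For the curious, here is a minimal sketch of the species-area reasoning behind such estimates. The exponent and the halved-area example are illustrative assumptions of mine, not numbers from the paper, which combines three related versions of this approach.

```python
# A minimal sketch of the species-area logic (my numbers, purely illustrative).
# Ecologists typically model species richness as S = c * A**z, with the exponent z
# often somewhere around 0.25. If the suitable area for a group of species shrinks
# from A0 to A1, the fraction eventually lost is roughly 1 - (A1/A0)**z.

def fraction_committed_to_extinction(area_now, area_future, z=0.25):
    return 1.0 - (area_future / area_now) ** z

# Hypothetical example: climate envelopes shrink to half their current area.
print(fraction_committed_to_extinction(1.0, 0.5))   # ~0.16, i.e. roughly 16% of species
```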
Potent stuff. In terms of speed and magnitude, these results suggest that global warming alone could trigger mass extinctions on par with some of the all-time great catastrophes. In the news coverage of the research and in later comments, skeptics tried to mock it. One line of mockery took on the science of predicting extinctions. The best example of this comes from Gregg Easterbrook, who called the research "nonsense." His post was promptly picked up approvingly by the blog of the conservative magazine Reason, and will no doubt continue to circulate in such quarters.
(A note on Easterbrook, which you are welcome to skip: In the past, I have pointed out how his understanding of evolution can get downright foolish. That point doesn't need repeating here. I'm revisiting him now only because he offers a common sort of poorly reasoned "skepticism" directed at the science of extinction. Granted, Easterbrook accepts the reality of global warming and claims to be concerned about the potential for extinctions in the future. But his criticism of this particular research represents a pretty widespread argument that needs to be challenged.)
COMPUTER MODELS: First, Easterbrook complains that the study is, in his words, "entirely a computer simulation." He claims that "as anyone familiar with this art knows, computer models can be trained to produce any desired result....Computer models are also notorious for becoming more unreliable the farther out they project, as estimates get multiplied by estimates, and then the result is treated as specific. This is a 50-year projection, and everything beyond the first few years should be treated as meaningless statistically, given that tiny alterations in initial assumptions can lead to huge swings at the end of a 50-year simulation. Nature is a refereed journal, but it appears that all the peer-reviewers did was check to make sure the results presented corresponded to what happened when the computer models were run. There does not appear to have been any peer-review of whether the underlying assumptions make sense."
This sort of complaint only makes sense if you haven't bothered to read the paper and become familiar with some of the referenced papers on which it is built. The species-area relationship is an iron-clad rule in ecology that's been tested and retested many times. The concept of climate envelopes has been tested as well; it has proven its mettle by allowing scientists to accurately predict how species shifted their ranges as Ice Ages altered past climates. Moreover, the researchers--well aware of the uncertainty that can plague these sorts of studies--came up with extinction estimates with three different methods for analyzing the loss of habitat. They got the same results with all three tests, which is evidence that while the estimates are not precise (and aren't claimed to be) they are robust. So Easterbrook's claims about the study being sensitive to tiny changes in the underlying assumptions don't hold up.
COUNTING SPECIES: Easterbrook and others often try to raise doubts about extinctions by confusing the numbers. He points out that the lead author, Chris Thomas, was quoted in the Washington Post as saying as many as 1.25 million species could go extinct. Amazingly, it seems, Thomas is unaware of the International Union for the Conservation of Nature, which says that 12,259 species are threatened. "The jump from 12,259 imperiled species to 1.25 million extinctions is a hundredfold increase!" Easterbrook claims. "The rate of species imperilment will rise in a short period to 100 times the current rate, and based solely on climate change? That sounds extremely implausible. In fact it sounds like cockamamie galimatias."
It sounds this way if you ignore the difference between the recorded numbers of total and threatened species and the estimated numbers. Scientists know that there are far more total species on Earth than have been recorded. All you need to do is go to a tree in some remote rainforest and pick off the insects, nematodes, and other critters living on it. Many will be new to science. Based on the proportion of new species scientists find each time they take a survey, they can estimate how many species await discovery. There's a lot of debate over just how many species there are out there, but all good estimates are in the millions--in some cases in the tens of millions. Thomas's study looked at the percentage of extinctions predicted in each habitat. Because the extinctions all follow a general pattern, he then calculated what 37%--his worst case estimate--means in terms of the actual number of species on Earth. If you assume 3.3 million species, you get 1.25 million extinctions. But the fact is that Thomas was actually using one of the lowest estimates of the total number of species. If there are 15 million species, the figure would be 6 or 7 million.
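The arithmetic behind the 1.25 million figure is nothing more than a percentage times an assumed total. Here is the back-of-envelope version, using the low species estimate quoted above.

```python
# Back-of-envelope: a percentage only turns into an absolute number once you pick
# a total. The total below is an estimate quoted above, not a measured count.
worst_case_fraction = 0.37     # Thomas's worst-case "committed to extinction" figure
low_species_total = 3.3e6      # one of the lowest estimates of species on Earth
print(worst_case_fraction * low_species_total)   # roughly 1.2 million species
```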
As for the good folks at the IUCN, they have an even harder time than the scientists who look for new species. They need data on a species's historical and current range, its historical and current population, the threats to its habitat, its life history, and many other pieces of information before they can decide that a species is threatened. This takes many years of field work, and so only a small fraction of species have gotten that sort of attention. It is no surprise that they only register 12,259 species as threatened. Certain groups of species are much better studied than others--birds, for example, have been the focus of naturalists for centuries, and most species have probably been found. If you look at these well-studied groups--among which most threatened and extinct species have been identified--you find a pretty consistent rate of extinctions. Stuart Pimm calculates it to be 100-1000 times higher than the typical "background" rate of extinctions in the fossil record.
RECENT EXTINCTIONS: Another common tactic for questioning extinction science is to claim that we should already have a long record of human-caused extinctions and don't. Easterbrook asks why, for example, we haven't seen a lot of extinctions in the past 20 years in the Pacific Northwest, a place that has experienced warming as well as habitat destruction and whose wildlife is very well studied by scientists. "For anything even remotely close to Thomas's 1.25 million extinctions to be a hard number, we should already be seeing the bow wave in the form of dozens if not hundreds of extinctions in well-studied areas like the Pacific Northwest. Instead we see, um, zero."
I don't see how Easterbrook gets to decide exactly what evidence in recent history is enough to falsify this study. Did he run the computer model with data from the Pacific Northwest over the past twenty years and come up with 20 extinctions? (I'm reminded of creationists who used to point out that there were no intermediate fossils between land mammals and whales. That proved evolution was false. Then, when a species of whales with feet was found, they claimed that the lack of a species in between this intermediate and true whales was proof that evolution was false. "Keep moving the goal posts" is the strategy.)
In setting up his goal posts, Easterbrook reveals many of the ways in which he misunderstands the study of extinctions. Different habitats experience different rates of extinction. For one thing, different regions undergo different rises in temperature and shifts in rainfall. For another, the most sensitive species are the ones with the smallest ranges. That's why the deforestation of the northeastern US wiped out relatively few birds--because most have big ranges and could therefore survive. The Pacific Northwest is not a particularly major hotspot of these small-range species.
The biggest misunderstanding that Easterbrook and his fellow "skeptics" harbor is the assumption that extinctions happen overnight. In fact, extinctions take time. Thomas and his coauthors talk about 18-37% of species being "committed to extinction" by 2050--not actually extinct. Studies in African and Asian forests have shown that species don't disappear immediately from fragments of forest. Instead, they may need as long as 50 years or more to finally give up the ghost. Any extinctions set in motion by global warming in the past couple of decades might not come to pass for decades more. That doesn't mean, however, that animals and plants aren't already responding to climate change with longer growing seasons, shifting ranges, earlier arrivals at breeding grounds, and so on. The wheels are in motion.
Easterbrook was hardly the only voice raised against the Nature study. In the Washington Post article, for example, we learn that "one skeptic, William O'Keefe, president of the George C. Marshall Institute, a conservative science policy organization, criticized the Nature study, saying that the research 'ignored species' ability to adapt to higher temperatures' and assumed that technologies will not arise to reduce emissions."
United Press International gave more attention to O'Keefe's claims--
"As with everything climate-related, however, the verdict is not unanimous. In a review of the literature on the subject published in July 2003, the George C. Marshall Institute in Washington, D.C., concluded: 'The facts do not support claims of mass extinctions arising out of climate change. Whether through adaptation, acclimation or migration, available research suggests that the threats may be overstated.'
"The Marshall report, by Sherwood Idso, Craig Idso and Keith Idso, said to expect 'a biosphere of increased species richness almost everywhere on Earth in response to the global warming and increase in atmospheric (carbon dioxide) concentrations of the past century and a half that have promoted a great expansion of species range throughout the entire world.'"
The Idso report does not appear in a scientific journal, but simply lingers on the George C. Marshall Institute web site. You can search for a reference to it in the major scientific journals on global change, but you will search in vain. Along the way you won't find papers offering support for the report's wide-eyed conclusion. How then does it--and the Marshall Institute--earn a place in reports on this new research? As Chris Mooney suggests, in the case of the Washington Post, a naive notion of how to get both sides of the story may be to blame. I suspect something else is going on in the UPI article. UPI is owned by the conservative Rev. Sun Myung Moon, who also owns the Washington Times, which ran UPI's story--along with many previous anti-global warming stories. The George C. Marshall Institute meanwhile has helped promote other sketchy research that claims to overturn predictions about global warming. (See David Appell's article, for example.)
Adaptations to global warming are certainly an important factor in how the natural world will respond to all our greenhouse gases. Animals and plants don't simply keel over if things get a little warm. They have strategies encoded in their genomes for adjusting to changing environments within their lifetimes. They are plastic. And on top of that, over time they can evolve to become better adapted to a changing environment. Over the past 20 years, evolutionary biologists have documented rapid natural selection in the wild. In the face of intense fishing, for example, some salmon have evolved to be 25% smaller in just a few decades. If global warming emerges as predicted, it will be a particularly powerful selective force, raising temperatures and altering climate at an astonishing clip compared to past climate change. Life is already adapting to the warming world. In the Proceedings of the Royal Society of London, Canadian researchers report that red squirrels in the Yukon have responded to the warming climate by breeding 18 days earlier than they did in 1989. Part of this shift was the result of changing genes.
It's possible that rapid evolution will let some species avoid extinction by adapting to the new climate. But some species may find their evolutionary path blocked. Julie Etterson and Ruth Shaw of the University of Minnesota studied the potential evolution of partridge peas in Minnesota. Adaptations to warmer, drier climates, they found, won't come for free. Those changes will interfere with how the pea plants grow, which will lower their overall fitness. As a result, the slow evolution of the plants will lag behind the climate change, and may leave them badly suited for survival. We therefore can't assume that natural selection will save us from the risks of climate change.
Climate change, of course, is nothing new on Earth, and so the fossil record can offer some clues to the balance of adaptation and extinction in the coming century. The Idsos imagine that a few decades of high levels of carbon dioxide and elevated temperatures can whisk us back millions of years, merging forests into "super forest ecosystems" far more diverse than anything that exists today. It's actually possible to do something beyond blithe hand waving of this sort with the fossil record, though. In a recent issue of The Journal of Mammalogy, for example, paleontologists looked at how the diversity of North American mammals responded to different rates of climate change. The changes that have happened so far, they conclude, are within the normal variability of mammal history. But within a few decades, global warming will transform the community of mammals beyond anything seen in the past 60 million years, causing widespread extinctions.
But the fossil record is missing some crucial elements of today's world. Even if an animal or plant might be able to spread into a new climate envelope, we humans may block their path. It's hard to imagine how the trees, fungi, insects, reptiles, and other species could move out of a rain forest preserve and into a farming region or a swath of industrial properties.
Despite the speed with which global warming appears to be occurring, we still have a lot to learn about how many extinctions it will cause. There is indeed a lot of uncertainty and plenty of room for debate. But we don't have time to waste on distractions from the likes of Easterbrook and the Marshall Institute.


They say that history is written by the winners, but if that's true, then natural history is written by those who can write. Our ancestors split from the ancestors of chimpanzees some 6 or 7 million years ago, and since then they've given rise to perhaps twenty known species of hominids (and potentially many more waiting to be discovered). Today only our own species survives, and only ours has acquired the intelligence to learn things about the distant past--such as the fact that we are the product of evolution. Our survival and our intelligence sometimes blur together, with the result that a lot of the research on human evolution (and most of the popular accounts of it) revolves around what makes our own lineage unique and successful. All the other branches of the hominid dynasty become our foil--the losers who, through their extinctions, reveal what is most glorious about ourselves. As a way of thinking, this is both unfair and foolish. We become satisfied with our own false assumptions about other hominids, and may miss some lessons they have for us. Exhibit A: our ancient thick-headed cousin Paranthropus.
Paranthropus, which existed from about 2.5 to 1.5 million years ago, was among the first hominids to be discovered by paleoanthropologists. In 1938, a young South African schoolboy led Robert Broom to a spot where he had found fossils of jaws and teeth. Broom dug up more pieces of the skull and realized that it belonged to some kind of ape. A closer look revealed that it was more like humans than chimps or gorillas. For example, the hole at the base of the skull was set far forward, as it is in humans, suggesting that the creature could walk upright. But compared to the other hominids that had been found at that point, Paranthropus was peculiar for its big frame and its massive jaws and teeth. If paleoanthropologists had to pick a hominid that looked like our direct ancestor, Paranthropus was definitely not it.
Over time, as more hominid bones emerged, Paranthropus solidified its reputation as a dead end of human evolution. The conventional wisdom ran like this: until about 2.5 million years ago, most hominid species were runty, small-brained apes that were unusual only for spending a fair amount of their life on the ground. But then the climate became drastically cooler and drier. This change drove hominids into two major branches. On one branch was Paranthropus, a five-foot-tall creature with molars as thick as your thumb and buttressed jaws. On the other branch were the earliest members of our own genus Homo. They were shorter, and their teeth and jaws were small. Over time their teeth and jaws got even smaller, while their brains got bigger. Around 1.5 million years ago the climate went through yet another change. The planet got so cool that it slipped into cycles of Ice Ages, altering the African landscape over thousands of years rather than millions. Many mammals became extinct as a result, and Paranthropus went with them. Homo, meanwhile, had evolved to the point where it now stood six feet tall, could make sophisticated stone tools for scavenging meat, and was even beginning to venture out of Africa altogether.
So why Homo and not Paranthropus? Evolutionary biologists have long theorized that there are two directions in which organisms can evolve: they can become specialists or generalists. It's astonishing just how specialized some species can get. Think, for example, of lice that live only on humans. In fact, they come in two species, one that lives only on human hair and one that lives only on the human body. Think of the aye-aye of Madagascar, a primate armed with a hideously long middle finger it uses to fish out insects from hollow trees. Specialists, the theory goes, thrive only in times and places of tranquility, in which evolution can fine-tune life to fit very narrow niches. Generalists, on the other hand, can live anywhere on anything. Think of rats, cockroaches, and the like. During good times, new specialist species may emerge and thrive. But when some environmental catastrophe hits, the jack-of-all-trades generalists are the ones equipped to adapt and to survive.
Paranthropus and Homo, paleoanthropologists generally agreed, were classic examples, respectively, of specialists and generalists. Paranthropus evolved the ability to crush seeds and other hard plant matter, losing the ability to feed on anything else. Homo meanwhile had to search for any food it could find, whether it was honey, tubers, or carcasses. Homo survived thanks to its generalist skills--which depended in part on its growing brain. We are what we are today, then, thanks to our generalist ancestors.
This conventional wisdom is widespread. (The web site for the TV series Walking with Cavemen offers a version here.) But just how accurate is it? The overall notion of generalists and specialists seems to hold up pretty well. A couple weeks ago in Science, English scientists reported how they had watched specialist and generalist bacteria evolve in their lab. Some of the specialists that emerged lived only on the surface of their microbial broth. But as they became more fit in their narrow niche they lost the ability to adapt to other niches. But what about the particular case of Paranthropus and Homo?
Bernard Wood and David Strait, two paleoanthropologists, took a critical look at the evidence to date--everything from isotopes in fossil teeth to species ranges. You'd expect certain things from a specialist--it should only eat a few foods, for example, and it should be more prone to give rise to new species (but those species should tend to go extinct faster than generalists). When a specialist's environment changes, it should follow its food to a new range.
All told, Wood and Strait looked at eleven different predictions. In most cases, the evidence ran counter to the idea that Paranthropus was a specialist, and in many of the remaining cases, the results were simply ambiguous. "On balance," they write in a paper in press at the Journal of Human Evolution, "Paranthropus and early Homo were both likely to have been ecological generalists."
At first their conclusion looks patently absurd. The very sight of Paranthropus seems to tell you that it's a dead end. But Wood and Strait point out that looks can be deceiving--especially looks based on nothing more than bones. The howler monkey, for example, has massive intestines that allow it--unlike most other primates--to eat leaves. But howlers only eat leaves during part of the year; they also eat fruits and flowers. Their intestines don't cut down their options--just the opposite has happened. Likewise, Paranthropus's huge jaws and teeth may have allowed it to crush seeds, but there's no good evidence to suggest that this anatomy prevented it from eating other food. A big bite can help you eat a lot of things. And if you want to rely on the shape of the face to decide who's a generalist and who's a specialist, then you might well conclude from the shrinking jaws and teeth of Homo that our ancestors were the specialists.
Wood and Strait are pretty agnostic about what should take the place of the conventional wisdom, but they do make a few suggestions. Perhaps the climate change 2.5 million years ago led Paranthropus and Homo into two different generalist ways of life. Paranthropus broadened its diet as it turned its head into a nutcracker. Homo meanwhile broadened its diet with tools and meat. The mystery of why we survived and Paranthropus didn't becomes dark and deep. We can't give credit to our wonderful brains for making us generalists able to live anywhere. Being a generalist doesn't seem to have been a guarantee of survival. (And Wood and Strait also point out that Paranthropus's brain actually expanded over its million-year dynasty, showing that it also had some potential upstairs.)
Today we believe that our technology has made us into the ultimate generalist, transcending nature itself, and perhaps even the planet. Paranthropus looks on our happy beliefs from its oblivion and wonders.


Evolution isn't simply about the genes you gain. It's also about the genes you lose.
The word loss has a painful, grieving sound to human ears, and so it can be hard to see how it can have anything to do with the rise of diversity and complexity in life. And until recently, evolutionary biologists didn't pay much attention to lost genes because they were preoccupied with the emergence of new ones. New genes, they found, can be produced in many ways. A gene can get accidentally duplicated, for example, and the copy can mutate, taking on a new function. Or pieces of two separate genes can get fused together, producing a new sort of protein. Or an old gene can acquire a new switch that turns it on and off according to a different set of signals. As genomes of more and more species have been sequenced, scientists have combed them for new genes. They look for genes that are unique to a species, or some group of species, and are not found in distantly related organisms. They want to sort out the old genes common to much of life and the new ones that created a new body plan in one lineage. Consider the genome of one of the closest living relatives of vertebrates, a delicate sleeve of a creature called Ciona. Scientists found that over 2500 of its genes (a sixth of its entire genome) can also be found in the genomes of vertebrates such as fish--but not in the genomes of invertebrates such as fruit flies or vinegar worms. So here, scientists have argued, may be some of the genes that set us vertebrates apart.
But in just the last few years, evolutionary biologists have also been getting interested in the genes that have vanished. The mutations that erase genes are pretty well understood. A gene may initially get shut down by some disabling mutation. Later, through a copying accident, the gene may get snipped out of the genome altogether. These deletions can be devastating, causing swift death or long agonizing disease. But in some cases, the loss can be borne. Individuals manage to survive without the gene, and over time, more and more of them emerge, until the gene disappears from the species altogether.
Gene loss is particularly important in the evolution of the parasites and mutualists that live within our cells. We depend for our very survival, for example, on oxygen-consuming bacteria that invaded our cells some 2 billion years ago and became mitochondria. Comparisons to their free-living relatives have shown that mitochondria have lost the vast majority of their genes, holding onto only a few they still need to keep up their end of the symbiotic bargain they have with their hosts (us). Losing a gene can actually be an advantage to an organism that lives inside a host whose own genes produce proteins serving much the same function. A relative of mitochondria has also stripped down its genome, but for a different reason. Rickettsia, the cause of typhus, can only live inside cells, but it is a deadly pathogen rather than a helpful mutualist.
But free-living organisms have lost their fair share of genes as well, and those who overlook it may misread the history of life. Case in point: Bacteria can acquire genes not just through heredity, as we do, but also by grabbing them from other bacteria. (Imagine acquiring someone's DNA through a handshake, your eyes turning from blue to brown. It's a bit like that.) Scientists have been debating how important these two routes of evolution have been for microbes. Do they trade just a handful of minor genes, or can they swap the very core of their genome?
Some new research suggests that much of the evidence for rampant gene trading may actually be an illusion created by lost genes. Consider a cookbook analogy. Imagine that some family in a remote village long ago developed a recipe for a blueberry soup. They keep the recipe a secret, handing down copies of the recipe only to their children. Over time, the children move to surrounding villages, taking the recipe with them and handing it down to their children. But gradually some branches of the family lose it, perhaps in kitchen fires or by accidentally tossing it in the trash. Many generations later, you take a survey, recording who still has a copy of the recipe for blueberry soup. You find that most of the people who have it live near one another, close by the ancestral village. But there are also isolated families scattered here and there who also have copies of the same recipe. You might assume that in these cases, the rule of secrecy was broken, and members of the family handed out copies of the recipe to strangers. Only by understanding who had lost the recipe could you see that the rule had been upheld.
Bacteria have no monopoly on gene loss, though, as a new report in Current Biology makes clear. Australian biologists reported their study of the genome of a coral. Corals belong to one of the oldest lineages of the animal kingdom (a phylum known as Cnidaria, which also includes jellyfish and sea anemones). Cnidarians left fossils almost 600 million years ago, tens of millions of years before the first fossils of many other animal groups. They are also biologically simpler than most other animals. They lack brains or complex sensory organs, relying instead on nerves that form simple nets. They don't have a mouth and gut running from one end of their body to the other. Only after cnidarians branched off on their own did new animals emerge with heads and tails, with different sorts of sense organs and neurons, with muscles for swimming and burrowing, and with many other tissues. The Australian biologists decided it would be interesting to compare the genome of a coral (Acropora millepora, the coral in the picture here) to the genomes of animals on younger branches of the animal tree. They compared its genome to ones from both invertebrates (fruit flies and vinegar worms) and vertebrates (humans).
Out of the 1376 genes that the Australian scientists looked at in the coral, they found 492 matches in the other animals. But overall, these matches resembled human genes far more than they resembled the genes of the flies and worms. In fact, 58 of the coral genes (11%) could be found in the human genome but were nowhere to be found in the other animals. In other words, a sizeable chunk of the genes that existed in the earliest animals have been lost in flies and vinegar worms, while they have survived in corals and humans. These lost genes may change the way scientists understand the evolution of animals. The researchers who used the Ciona genome to identify new vertebrate genes used only fruit flies and vinegar worms as points of comparison. In fact, a lot of those genes may not have all that much to do with the rise of vertebrates at all. Our search for what makes us special will have to turn elsewhere.
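There is a simple way to see why a shared ancient gene plus a loss or two is a better explanation than inventing the same gene twice. Here is a toy sketch (the tree and the gene pattern are simplified stand-ins for the real data) that counts the losses needed if the gene arose just once in the animals' common ancestor.

```python
# Toy version of the "lost gene or new gene?" question: assume an ancient gene arose
# once in the last common ancestor and was only ever lost afterwards, then count the
# minimum number of losses needed to explain who carries it today.

def min_losses(node, present):
    """node: a leaf name or a tuple of child subtrees; present: set of leaves with the gene."""
    if isinstance(node, str):
        return (node in present), 0
    child_results = [min_losses(child, present) for child in node]
    any_present = any(has for has, _ in child_results)
    losses = sum(n for _, n in child_results)
    if any_present:
        # each child clade with no surviving copies accounts for one loss event
        losses += sum(1 for has, _ in child_results if not has)
    return any_present, losses

# Hypothetical tree: corals branch off first; flies and worms later split from our lineage.
tree = (("human", ("fly", "worm")), "coral")
has_gene = {"human", "coral"}    # the pattern reported for 58 of the coral genes
print(min_losses(tree, has_gene)[1])   # -> 1: a single loss in the fly/worm lineage explains it
```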
Why do species lose genes, and what effect does the loss have on their future evolution? It's puzzling, for example, that humans and corals still carry genes that must date back 600 million years or more--genes that are intimately involved in the development of embryos, for example--and yet fruit flies and vinegar worms (and presumably many other invertebrates) thrive without them. The geneticist Maynard Olson has proposed that losing genes isn't just a mutation organisms can learn to live with, as it were, but actually can offer a big improvement. According to his "less is more" hypothesis, losing a gene can open up a new ecological niche an animal's ancestors never could enjoy.
Olson points out that wild mice have a body clock that senses the changing length of day through the seasons, and uses that information to control when they can have babies. Lab mice have lost this clock, allowing them to breed year-round in their unchanging environment. "Less is more" could be the reason that many animals such as fruit flies and vinegar worms have lost so many genes. It may also be one of the things that makes us humans unique. A comparison between humans and mice, for example, shows that 2% of the genes of our common ancestor that lived some 100 million years ago have been lost. A closer look at our immediate relatives--the apes--shows that we lack a particularly important gene, one that makes a molecule that studs the surface of their cells. It's particularly common on their neurons. And significantly, it appears that our ancestors lost the gene 1.5 million years ago, just around the time our brains began to expand dramatically. Scientists speculate that the presence of this surface molecule somehow held back the evolution of more complex brains. Only when it was gone could our ancestors explore their full evolutionary potential. Lose the gene, and you open up a new world.


Just before the winter solstice brings autumn to an end, here's a chance to blog about the great evolutionary biologist--and student of fall foliage--William Hamilton. Hamilton, who died in 2000, has never reached the household-name status of other evolutionary biologists such as E.O. Wilson or Richard Dawkins or Stephen Jay Gould. But he deserves a place of privilege, for all his profoundly influential ideas. He found an explanation for altruistic behavior in many insect species by expanding biology's notion of fitness to include the genes an individual shares with its relatives. He offered one of the best-supported theories for the origin of sex--as a way for a species to keep ahead of its parasites in their evolutionary arms race. And he proposed that sexual displays--such as peacock tails and rooster combs--are signals that males send to females to reveal their ability to fight off parasites and otherwise live well.
It wasn't just the ideas he came up with that made Hamilton extraordinary--it was the way he came up with them. They just seemed to pop into his head, obvious and simple, and he proceeded to write them down in clipped, humble prose, tossing in a few equations to give a sense of their underlying beauty. And then he was off to the next idea, or a trip to the Amazon. Hamilton wasn't much interested in promoting his ideas to the world at large, to become a talking head or a writer of best-selling science books (in part because he was extremely shy and humble). That's probably one reason why Hamilton is sliding into obscurity even as his ideas live on.
In the current issue of Biology Letters, there's an example of Hamilton's enduring legacy. One of the last papers Hamilton wrote before he died (after an ill-fated trip to Central Africa to investigate a controversial theory about the origin of HIV), appeared in 2001 in the Proceedings of the Royal Society of London. He and co-author Samuel Brown asked why it is that leaves change color in the fall. There are many possible explanations. Perhaps leaves just look that way as they inevitably die, for example. Hamilton, however, believed there was an adaptation involved. He and Brown proposed that a brilliant leaf was, like a peacock's tail, a signal. A peacock's tail takes a huge investment of energy, energy that could otherwise be diverted to fighting off parasites or surviving other stresses. A strong male can afford to use up this energy, which makes the tail an honest ad for its parasite-fighting genes. In the case of leaves, trees are not sending signals to other trees--they are sending signals to tree-eating insects.
Trees, after all, are as besieged by insects as birds or other animals are by internal parasites. They fight their enemies with a sophisticated arsenal of chemical agents, sticky traps, and other weapons of mass arthropod destruction. Hamilton and Brown proposed that trees that have a strong constitution warn off insects by changing colors in the fall. In a sense, they say, "I can shut down my photosynthesis early in the fall, pump a lot of red or yellow pigments into my leaves, and still have enough energy left to annihilate your babies when they hatch in the spring. So just move along."
Warning colors are a well-established fact in biology. Poisonous butterflies and snakes deter predators with them, and other species try to horn in on the protection by mimicking their appearance. But the notion that trees were warning off insects was quite new--just the sort of brilliant notion Hamilton might have while taking a stroll one autumn day. (Note: In formulating his hypothesis, Hamilton depended heavily on a theory called the Handicap Principle, proposed by Amotz Zahavi in the 1970s.)
For evidence that autumn leaves are signals, Hamilton pointed to some interesting patterns. Aphids, for example, lay their eggs on trees in the fall; when the eggs hatch, the larvae devour leaves voraciously. Hamilton and Brown found that aphids are less common on trees that have bright red or yellow leaves. And species with bright leaves tend to be burdened with more species of aphids specialized for feeding on them than trees with drab leaves.
Hamilton left this jewel of an idea behind after his death for other scientists to investigate. It's a challenge to test, because there are so many links in the theoretical chain. "Vigor," for example, is a tricky thing to measure in trees. You could shower a tree with aphids, close it up in a gigantic net, and see how well it defends itself against them. But that's a huge amount of work, and it yields just one data point. And you'd still have to find a way to eliminate other factors, such as weather, the age of the tree, and so on.
But recently scientists have found a reliable clue to vigor in the shape of a tree's leaves. Vigorous trees produce very symmetrical leaves, while weaker trees produce misshapen ones. Symmetry signifies much the same thing in swallows' tails and gazelle horns and human faces. When a complex organ like a leaf or a feather forms, any environmental stress can throw off its development from perfect symmetry. In stronger individuals, the development of the organ is better shielded from these insults.
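Researchers usually boil this down to a simple asymmetry index. The version below is a hypothetical example with made-up measurements; the actual study's protocol was surely more involved.

```python
# A hypothetical fluctuating-asymmetry index for a leaf: measure the same trait
# (say, blade width in mm) on the left and right halves and normalize by size,
# so a perfectly symmetrical leaf scores 0.

def asymmetry_index(left_mm, right_mm):
    return abs(left_mm - right_mm) / ((left_mm + right_mm) / 2.0)

print(asymmetry_index(21.0, 20.0))   # a fairly symmetrical leaf, index ~0.05
print(asymmetry_index(24.0, 17.0))   # a lopsided leaf, index ~0.34
```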
In September 2001 a team of Norwegian biologists took advantage of this link between symmetry and vigor and went out gathering leaves from birch trees. They collected them from 100 birch trees all told. Half of the trees were shimmering yellow, and the other half were still green. As Hamilton would have predicted, they found that the yellow leaves were consistently more symmetrical than the green ones. The researchers had gathered half their yellow and green leaves from a healthy stand of trees, and the other half from the middle of an outbreak of birch-feeding moth larvae. On average, the trees in the healthy stand had more symmetrical leaves than the moth-infested ones, once again just as Hamilton would have predicted. Finally, the biologists looked at how trees with different colors fared the following spring. They found that trees with strong colors suffered less damage from insects than trees with weak colors.
These results are powerful support for Hamilton, although they don't tell the whole story. How much do aphids depend on the sight of leaves when they choose a tree, for example, as opposed to their smell? Still, it's a disconcerting idea that's gaining strength: a beautiful fall landscape is a giant shout of "Back off." When you see a tree at its most autumnally glorious, be sure to remember Hamilton.
Update 9/27/04: Here is the sequel: some scientists think that fall colors mean something else.


To those who are new to my web log, thanks for checking it out. To those who have come from my old site, thanks for clicking through.
This week, while a sickly laptop robbed me of the opportunity to blog, a steady stream of interesting papers was published. Three struck me as particularly fascinating, because they illustrate the different ways evolutionary changes alter our world.
1. Scourges in waiting
When SARS failed to take hold in the United States, it was easy to feel smug about our defenses against new epidemics. The nasty influenza strain now spreading across the United States should puncture that arrogance. We face outbreaks the same way people faced hurricanes in the 1800s--they sweep over us without warning, and we are pretty bad at predicting what will come next. What we need is a kind of evolutionary forecasting, in order to know how to head off the next disaster. Humans got HIV from chimpanzees, for example, but there are dozens of related viruses lurking in chimps and monkeys that might--or might not--also make the leap. Of all the pathogens that seem harmless at the moment, which one will become a killer?
Understanding the evolutionary fortunes of diseases is not easy. Earlier this year I wrote a piece in Science about the debate going on these days over exactly what combination of forces can make a disease deadly or harmless. This week in Nature, a group of scientists reported some important discoveries about what it takes to be a major killer. The work is disturbing, because it suggests that even a pathogen that doesn't seem capable of much harm could swiftly evolve into an epidemic.
A disease is in a continual state of birth and death--the pathogen infects new hosts where it can reproduce and spread to other hosts; meanwhile sick people either get better or die. Epidemiologists get most worried about diseases where the rate of new infections outpaces the end of old ones. They reason that a disease where the opposite is the case will either die out or just cling to a bare existence. A mathematical model of diseases suggests otherwise. It seems that if a pokey pathogen has even a slight rise in its rate of new infections, there's an opportunity for rapid evolution. A few lineages of the pathogen will have the opportunity to infect a chain of people, and that will offer the chance for it to evolve into a fast-spreading strain.
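To get a feel for the logic, here is a toy simulation in the spirit of the model, with invented numbers: each case infects a Poisson-distributed number of people, and each new infection carries a small chance of mutating into a better-spreading strain.

```python
import numpy as np

# Toy branching-process sketch (invented parameters, not the published model).
# A "wild" strain with R0 below 1 fizzles out on its own, but every transmission
# is also a chance to mutate into an adapted strain with R0 above 1 -- so longer
# chains of infection mean more chances for the pathogen to break out.

rng = np.random.default_rng(0)

def emerges(r0_wild=0.8, r0_mutant=1.5, p_mutate=0.01, cap=1000):
    wild, mutant = 1, 0                      # start from a single spillover case
    while wild + mutant > 0:
        if mutant >= cap:
            return True                      # the adapted strain is now self-sustaining
        new_cases = int(rng.poisson(r0_wild, size=wild).sum())
        new_mutant = int(rng.binomial(new_cases, p_mutate)) if new_cases else 0
        new_wild = new_cases - new_mutant
        new_mutant += int(rng.poisson(r0_mutant, size=mutant).sum())
        wild, mutant = new_wild, new_mutant
    return False

trials = 2000
print(sum(emerges() for _ in range(trials)) / trials)   # chance one spillover sparks an epidemic
```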
The researchers propose a way to test low-level diseases to see whether they are at or near this dangerous level. And they also point out that their results mean that some diseases that haven't caused all that much concern may be poised to strike. A couple of generations ago, many more people were protected from smallpox by vaccines than are today. That vaccine also protected them from related viruses that are still pretty much limited to other animals, particularly monkeypox. Today monkeypox is not spreading fast, but the slow decline of immunization against smallpox may nudge monkeypox up into the breakout zone.
2. Shrinking Trophies
The same issue of Nature also included a report of some unintended evolution brought about by mountain-sheep hunters in Canada. For decades at Ram Mountain in Alberta, hunters have shot the biggest rams with the biggest horns. There were two reasons for this pattern--hunters want a good trophy for their efforts, and wildlife managers believed they were conserving the population by allowing younger rams to survive and have lambs of their own before getting shot. But the researchers found that the hunters have altered the gene pool in the process. Genes that help produce big horns and big bodies are vanishing from the population, while rams that produce smaller horns and grow to smaller sizes have been favored. The horns have shrunk 25 percent as a result.
The title of the paper is "Undesirable evolutionary consequences of trophy hunting," and Living Code's Richard Gayle rightly asks what exactly is so undesirable about the change if you're not a hunter. But this burst of evolution may have some other side effects that could threaten the well-being of the ram population. Horns are an example of the many kinds of advertisements that males use to attract females. Roosters have combs, peacocks have tails, crickets have chirps. While television ads may not be particularly honest, these biological ads often reflect the quality of a male's genes. Good genes can confer bigger size or a stronger resistance to disease on offspring--things a female prefers in a mate. As hunters shift the balance of the ram population to males with smaller horns, they may also be shifting it to smaller, more disease-prone lambs that are less likely to live long enough to have offspring. The entire population may become maladapted to the tough habitat of the Canadian Rockies.
Rams are not the only animals whose evolution we're altering even as we try to manage them wisely. Earlier this year in Science, I wrote another article about how fishing has driven the evolution of smaller fish. If we want to conserve these animals, we have to take into account the way we can change the rules of natural selection.
3. Chimp genomes and human nature
Evolution can produce quick changes in a few years, but with a few million years it can produce far more complex changes. One example of this emerged this week in Science, which published some of the early fruits of the ongoing chimpanzee genome project. Researchers were able to pinpoint 1547 human genes that appear to have undergone intense natural selection since our ancestors diverged from other apes.
I described the approach behind this kind of research in an essay that appeared in Natural History last December. In the new research, scientists scanned the entire chimp genome for genes with good counterparts in both the human and mouse genomes--a little fewer than 8000 all told. By tallying up the differences among the copies of each gene, they could pick up signs of natural selection.
The fast-evolving genes were a grab bag. Some are linked with hearing, which may suggest that our ability to listen to language coevolved with our ability to speak. Weirdly, some genes that build olfactory receptors in the nose were evolving fast, too. It's weird because over half of these receptor genes are broken in humans, a reflection of our shift away from relying on smell. It's possible that a preference for certain sexy odors in the opposite sex drove the evolution of certain receptors.
Like most early work in genomics, this paper's net is cast wide but shallow. These genes do not tell us what made us uniquely human; they really just lay out thousands of new research projects to figure out what they do. At the same time, the signal of natural selection will become much clearer when scientists finish some more genome projects, like those of a monkey or a gorilla, and can throw them into the comparison.
And it's likely that the proteins these new genes make are only part of the story of human origins. It's not just what your genome makes, but when and where, that makes a difference. Some preliminary work is showing that many genes that chimps and humans share are more active in the human brain. They may help us fire more neurons without damaging our brains.
We live in remarkable times, when the inner workings of our closest living relatives are being unveiled and giving us insights into our own history. Unfortunately, in 20 years, this information may be all that's left of chimpanzees. As they are hunted and their forests are logged, they will fade like a genomic Cheshire Cat, leaving behind a string of As, Ts, Cs, and Gs in databases around the world.
(Update, December 15, 10:30 pm: Thanks to Richard Gayle for further insights on the rams.)


In a post last month, I pointed out how aerospace engineers can learn a lot from looking at the fossils of ancient flying reptiles. Today's issue of Nature contains a variation on that theme: ancient swimming reptiles can teach geneticists a lot as well.
Almost all humans have five fingers. Genetic disorders can produce extra fingers and toes, but only rarely. Five fingers is generally the upper limit not just for humans, but for all vertebrates on land. You can find plenty of tetrapods whose ancestors lost one or more of those five fingers. Horses have just one; snakes none. But tetrapods with more than five digits are incredibly rare. In most cases, these aren't true digits--wrist bones or other parts of limbs have evolved into finger-like appendages (the panda's "thumb" made so famous by Stephen Jay Gould, for example). But if you're looking for seven or eight real digits--made of three or four rod-shaped bones extending from the wrist or ankle--you're out of luck.
As Gould pointed out in his essay "Eight Little Piggies" (from his book of the same name), nineteenth century biologists treated this pattern as a geometrical law. Five digits were part of the tetrapod "archetype"--the divine blueprint on which all variations were built. But that turned out not to be the case. In the 1980s, the paleontologists Jenny Clack and Michael Coates discovered that the earliest tetrapods that lived some 360 million years ago--vertebrates with legs and toes--had six, seven, or even eight toes.
At the time, biologists were just starting to figure out how genes build toes--and limbs in general. They also were studying how genes build fish fins, and they found evidence that some simple tinkering with just a handful of genes could turn a cluster of ray-shaped bones in a fin into a wrist complete with fingers. Clack and Coates showed that the way in which the extra fingers were arranged on their eight-fingered fossils was consistent with such a flip. (I summarized the state of this research as of 1998 in my book At the Water's Edge.)
Clack and other paleontologists have discovered more tetrapods from this same early stage, and they have many different arrangements of digits. A lot of evolutionary experimenting was going on, probably in part because mutations could produce radical changes in the limbs of early tetrapods. There weren't yet a lot of the regulatory genes in place for producing the standard five-fingered plan. Within about 20 million years of fins becoming hands and feet, though, tetrapods had settled on the plan.
So why did our ancestors eventually settle on five fingers? One possibility is that for walking five fingers are better than six or twelve or any other higher number. Bear in mind that the very earliest tetrapods were more like fish with fingers. They had gills and could not have stood upright on their own limbs, suggesting that they lived mostly underwater. There they clambered over submerged logs and debris, much as frogfish do today. It may be no coincidence that the trend towards a consistent set of five fingers roughly matches the trend towards living ashore. Multiple toes probably helped give tetrapods better balance than just a couple, but there may have been a counterforce that put a cap at five--perhaps five digits are the most that can fit around a wrist or an ankle and still allow an animal to walk on dry land.
It's likely that part of the answer to this question has to do with how genes build the digits. Digits are already beginning to form when the limb is a tiny bud on the side of an embryo. There may be a tradeoff between the size of digits and the number of digits that can form in such a limited space. In order to make digits big enough to support a tetrapod on land, perhaps five is the most that constraints will allow. (Jenny Clack investigates these ideas in her excellent 2002 book Gaining Ground.)
Many of the most powerful genes in the construction of hands and feet also play just as important a role in building the entire skeleton, as well as brains and other organs. Any change in the way they work in a hand can have complicated effects on the way other parts of the body develop. Indeed, people who have extra fingers or toes often suffer other disorders as well, such as problems with the eyes or the skeleton. There may even be a connection between polydactyly and cancer. Tetrapods that could regulate the development of their fingers may have been favored by natural selection not only because they could do a better job of walking, but because they didn't get sick.
Maybe. Any hypothesis about the evolution of our hands and feet now has to contend with an intriguing fossil from China reported in Nature. The fossil was formed by an early marine reptile that lived 242 million years ago--over 100 million years after tetrapods had settled on five digits. The researchers report that the reptile (which has yet to be named) had six toes on its hindlimbs and seven on its forelimbs. It harkens back to the earliest tetrapods not only in having extra digits, but in where the digits form. In both the reptile and the earliest tetrapods, the new digits are tacked on to the wrist beyond the thumb.
The scientific report is unfortunately very short. The authors don't even name the creature or investigate what its closest relatives were--information that would be important for getting a handle on how its strange digits evolved. But the paper certainly leaves me hungry for more. The authors point out, for example, that this reptile belonged to a lineage that had returned to the water, and suggest that it had converged back on the frogfish-like hands and feet of early tetrapods that had not yet moved onto land. Most marine reptiles I know about are more like sharks or dolphins, cruising open water and using their hands as steering paddles. So this particular reptile might represent an intermediate stage from land to sea.
What's even more intriguing is what these fossils say about this reptile's genes. Its extra digits are not malformed like the ones that people get from genetic disorders. That suggests that the extra digits were normal for the species, and that this single reptile wasn't a freak of nature. How did this reptile overcome the web of constraining genes that has prevented so many other species from acquiring extra fingers? How did it turn back the clock 100 million years? And why didn't any other known tetrapod that went back to the water (whales, seals, turtles, etc.) turn back the clock this way? The answers to these questions are not just a matter of paleontological curiosity. They may help geneticists understand how our own bodies are built, and how weaknesses are built into their design.


Futurepundit has an interesting post based on a new paper about so-called junk DNA. Only 2% or so of the human genome actually encodes protein sequences. The rest is a grab-bag of broken genes and virus-like sequences called mobile elements that hijack the cell's DNA copying-machinery from time to time and insert new copies of themselves back into the genome. A pair of scientists have come up with some ideas about why organisms like us have junk-rich genomes, while bacteria have barely any. I was going to post on it until pre-Thanksgiving business overwhelmed me.
After summarizing this research, Futurepundit then predicts that people will use genetic engineering to strip junk DNA from their genomes. The appeal is obvious--why slow ourselves down with all that seemingly useless DNA? Why not use some of that space for new and improved genes that let us live for centuries or become smart enough to read the new Medicare bill over breakfast? There are also arguments for getting rid of junk DNA that Futurepundit doesn't mention. When mobile elements jump around to new homes, they can trigger diseases as they mutate the genome.
Junk-free genomes may indeed become possible in the future, but they're probably not a wise idea. Even if junk DNA doesn't benefit us in any obvious way, that doesn't mean that we can do without it. Many stretches of DNA encode RNA that never becomes protein, but that doesn't make the RNA useless--instead, it regulates the production of other proteins. Some broken genes (known as "pseudogenes") may no longer be able to encode proteins, but they can still help other genes produce more of their proteins. (Scientists can't yet say how these particular pseudogenes do this, but the evidence is clear that they do.) Junk DNA can serve other functions as well--such as bulking up cells to a suitable size. And there are doubtless going to be many other discoveries coming in future years about important benefits from the mysterious 98% of our genome that doesn't fit a 1950s conception of useful DNA. (For more on this, you can read an essay I wrote for Natural History.)
None of this is meant to dispute the fact that much of junk DNA acts selfishly on evolutionary time-scales. There's plenty of astonishingly selfish behavior among these stretches of genetic material, like the mobile elements that have to get other mobile elements to make copies of them. It's just that we have to recognize that evolution works on different levels--on the levels of genes, genomes, cells, organisms, groups, and maybe even species and related groups of species. And something that's selfish at one level can become selfless at another level. Recently, for example, scientists found evidence that many mobile elements include sequences that can shut down their own spread. This is a feature of many successful parasites--they can thrive in their host without killing it too quickly.
It's on this evolutionary scale that purging junk DNA makes the least sense. The pasting and copying of junk DNA is a major source of new genetic variation. Instead of changing a nucleotide here or there, mobile elements can shuffle big stretches of DNA into new arrangements, taking regulatory switches and other genetic components and attaching them to different genes. While some of this variation may lead to diseases, it also prepares our species to adapt to new environmental challenges. (Similarly, pseudogenes that are truly broken still have the potential to become working genes again. Some scientists have proposed calling them "potogenes.")
If we turn ourselves into a genetically modified monoculture, we'll have to rely solely on our own genetic engineering, while abandoning a natural system of genetic engineering that's been finely honed over billions of years. We may be clever, but I just don't think we're smart enough yet to take such a step.
Recently Jurgen Brosius at the University of Munster wrote an eloquent paper in Bioessays that made some similar points (although not on junk DNA). It's entitled, "From Eden to a hell of uniformity? Directed evolution in humans." Here's part of the abstract:
"The first major concern is that the genome will never be a completely reliable crystal ball for predicting human phenotypes. This is especially true for predictions concerning the performance of alleles in future generations whose populations might be subjected to different environmental and social challenges. The second, and perhaps more important, concern is that the end result of germline intervention and genetic enhancement will likely lead to the impoverishment of gene variants in the human population and deprive us of one of our most valued assets for survival in the future, our genetic diversity."
To fend off threats to the mysterious wilderness that is our junk DNA, I propose the establishment of the Junk DNA Preservation Society.


The glow of a beetle has inspired an elegant bit of evolutionary detective work that appears in the Proceedings of the National Academy of Sciences. Americans like me are familiar with fireflies, but in the tropics the night is also illuminated by beetles. When Darwin came to Brazil on the Beagle, he amused himself by noting how the beetles were "rendered more brilliant by irritation." Naturalists have gotten a bit more sophisticated at studying beetles since then. They now know that the male beetles use the light organs on their underside to get the attention of females sitting in the trees and bushes; when a female sees a glow she likes, she registers her approval by flashing light organs on her back. (Fireflies do the same thing, but while they flash, these beetles give off a steady glow.) Scientists also know how the glow is made--a gene encodes an enzyme called luciferase, which reacts with a compound called luciferin, releasing photons at a distinctive frequency. The frequency differs from species to species.
The authors of the PNAS paper were attracted to the glow of one beetle in particular: a species that lives on Jamaica, Pyrophorus plagiophthalmus. This species is peculiar, because its males can glow in a wide range of colors, from green to orange. Why so many colors? It's all too easy to say, "Well, natural selection made it that way," and leave matters at that. In fact, it's possible that natural selection had no immediate role at all. Maybe Jamaica was colonized by a handful of beetles that just so happened to have some rare mutants in their midst, and they all proceeded to breed like crazy. Or perhaps it's the females that have been evolving, and the genes they use for their own light organs also produce light in the males.
The scientists realized that the beetles offered a fabulous opportunity to study adaptation. Thanks to previous generations of beetle-loving scientists, they knew just about every link in the chain that joins the sequence of a gene to a living, breathing organism. They could even take the gene out of beetles and stick it into bacteria in a petri dish, where the gene would continue to produce light. By tinkering with each nucleotide in the gene, they could see exactly how the light changed as a result.
The scientists found that the colors of the male beetles don't depend on genes shared with the females. Instead, they are the product of three different versions of the same gene (alleles). The alleles produce green, yellow, and orange light, and since each beetle can carry two copies of the gene, they can make various colors. The scientists then reconstructed the evolutionary history of the gene, by comparing the alleles to genes from beetles on neighboring islands. It turns out that the green allele is the oldest. It's likely that the first colonists of Jamaica all glowed green. Then, with a few changes to the gene's sequence, a new version emerged that produced yellow light. And then most recently, an orange gene emerged. In other words, the glow has steadily been shifting down through the spectrum towards the red end.
And the shift, the researchers showed, has taken place thanks to natural selection. Scientists can detect natural selection in DNA in several ways, one of which is to compare the number of differences between genes that change their respective proteins with the number of differences that leave the proteins untouched. If the protein-changing mutations are significantly more common than the silent ones, natural selection must be at work.
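To make that logic concrete, here's a toy version of the test in Python. It's only a sketch of the idea--the real analysis also corrects for how many sites in a gene could possibly produce silent versus protein-changing mutations--and the two "alleles" below are short sequences I made up for illustration, not the beetles' actual luciferase genes.

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Standard genetic code: the i-th codon (first base varying slowest) maps to the i-th letter above.
CODON = {b1 + b2 + b3: AMINO[i]
         for i, (b1, b2, b3) in enumerate(
             (x, y, z) for x in BASES for y in BASES for z in BASES)}

def classify_differences(seq_a, seq_b):
    # Count silent vs. protein-changing codon differences between two aligned coding sequences.
    silent = changing = 0
    for i in range(0, len(seq_a), 3):
        codon_a, codon_b = seq_a[i:i+3], seq_b[i:i+3]
        if codon_a == codon_b:
            continue
        if CODON[codon_a] == CODON[codon_b]:
            silent += 1
        else:
            changing += 1
    return silent, changing

allele_green = "ATGTTTGGTCTCAAA"    # made-up 5-codon stretch
allele_orange = "ATGTTCGGACTGAGA"   # made-up; differs from the first at four codons
print(classify_differences(allele_green, allele_orange))  # -> (3, 1): three silent, one protein-changing

If the protein-changing differences pile up far faster than you'd expect from the silent ones, that lopsidedness is the fingerprint of selection.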
The scientists don't actually know why the beetles are turning orange. It may be, for example, that birds showed up on the island that have a harder time spotting orange beetles than green ones. Or maybe some extinct beetle also glowed green, leading to dead-end interbreeding for females who picked the wrong species. Whatever the answer, the scientists have shown that there is an answer out there beyond the random flux of wandering beetles. Now the scientists have to go out and find the last links in this evolutionary chain.
CORRECTION 11/21/03 1 PM: Thanks to Dough Gladstone for pointing out that fireflies are also beetles. Still, my childhood would have been subtly yet significantly different if I had spent it watching unblinking glows at night instead of the lazy winks of lightning bugs.


Texas may be off the hook for now, but Razib at Gene Expression observes that some medical students at the University of Oslo are lobbying for anti-evolution lectures. I guess I'll try not to be in Norway if I need antibiotics.


Over the past couple years, a few pounds of rock from Australia have been the subject of a fierce scientific battle between geologists and paleontologists. Some paleontologists have claimed that microscopic marks in the 3.5 billion year old rocks are the oldest fossils of life yet found. Some geologists have recently argued that the marks are just odd mineral formations that could have been created without the help of life. Today in Science, the geologists have struck again. A team from Spain and Australia mixed up some silica, carbonate, barium, and other compounds that can be found in the Australian rocks. With a little lab cooking (which they argue is akin to how the rocks formed) they were able to create little lumpy chains. When UCLA's William Schopf discovered similar little lumpy chains in 1993, he declared that he had found fossils of cyanobacteria (also known as blue-green algae). Not only did the chains look like living chains of cyanobacteria, but Schopf also found organic carbon around them. The geologists who published the Science paper today point out that non-living processes can create "organic" carbon too, and when they added this carbon to their recipe, they found that they could readily coat their pseudofossils as well.
Richard Kerr, the estimable senior writer for Science's news section, talked to some other geochemists and paleontologists, and many of them were impressed with the new work. And it's not the first challenge that Schopf has had to deal with. Last year, researchers argued that the rocks Schopf found were formed around hydrothermal vents--not exactly the place where photosynthetic cyanobacteria would be found. Likewise, other evidence for isotopic signatures of life in 3.8 billion year old rocks from Greenland has also been challenged.
It's been bracing to watch these scientific battles, and they should serve as yet another refutation of the absurd notion embraced by certain board of education members that scientists who study evolution are "dogmatic." Even the most high-profile research on the most important aspects of the history of life is fair game for rigorous scientific challenges. Never willing to let self-consistency slow them down, antievolutionists have seized on these new reports, claiming that they call into question all evidence of ancient life (perhaps even radiometric dating). It would be nice if--just once--they would actually do the hard work involved to make such a claim: publish a paper in a peer-reviewed journal showing how they went into a lab and created mineral formations that mimic all fossils. Or even a few fossils. Even one.
The fact is that other evidence for ancient life still stands. There are isotopic signatures dating back 3.7 billion years, for example, that have not been challenged. Fossils as old as 2.5 billion years are generally considered the real deal. While these dates are still inconceivably old, they raise some fascinating issues about how long it took for life to arise on Earth. It was starting to look as if life might have gotten started perhaps 4 billion years ago. Earth is 4.55 billion years old, but for several hundred million years it was being pummeled by assorted failed planets and other pieces of planetary rubble--impacts that would have boiled off the oceans and made it unlikely that any early life could have survived. Before these new challenges popped up, it looked as if life got started pretty quickly as soon as things calmed down. That might have suggested that life elsewhere in the solar system (or the universe) could be pretty common. Now it's not so clear whether life starts easily or needs hundreds of millions of years more to get going.
Schopf and company haven't backed away from their original research, though, and they shouldn't be counted out. The research published today only shows that geological processes can create structures that look like bacterial fossils. There's plenty of evidence that bacteria can form these structures, too, particularly from younger bacterial fossils that haven't been so degraded by the ravages of time. And in Kerr's article, Schopf points out that he found internal walls between the lumps in his chains, which look a lot like the walls of bacteria. The pseudofossils, by contrast, are hollow tubes. Their creators told Kerr that if they altered their recipe a bit, they could probably make internal walls, but if that's true, they should have waited to actually get those results before they submitted their paper. This is a story that's far from over.


Time always marches forward, of course, but does evolution?
It's certainly easy to impose a march of progress on the course of evolution. That's why the sequence of apes transforming into humans as they march from left to right is so universal. Of course, there are also pictures in which Homo sapiens, having risen up to noble, upright proportions, begins to crouch back down again, until he (never a she, I've noticed) is crouching in front of a computer or a television or facing some other ignoble end. As I wrote in Parasite Rex, this anxiety--an anxiety mostly about ourselves and not about nature--led biologists to come up with the concept of degeneration. While most life strove upwards towards more complexity, some backsliders slipped down again. Barnacles (once nimble crustaceans) were a classic example.
If a lineage could degenerate, could it then regenerate--could it recover the complexity its ancestors had lost? In 1893, the French biologist Louis Dollo declared absolutely not. It was too unlikely that evolution could retrace its steps so carefully to restore some lost trait.
Dollo's Law survived the rise of genetics and the modern synthesis of evolutionary biology, albeit in a very altered form. It was no longer an ironclad law, like the laws of physics, but instead a striking pattern that speaks to how evolution works. If a lineage of animals no longer needs some feature--eyes, for example--the genes that build eyes gradually mutate, usually into dead pseudogenes. It would be a nearly impossible roll of the dice that would mutate all of those genes precisely back into the form they had before. Whales didn't re-evolve fish fins, for example; they evolved paddles instead.
To mark the centennial of Dollo's Law, the late Stephen Jay Gould wrote an influential paper in which he used coiled sea shells as evidence of the new and improved Dollo's Law. In some lineages of gastropods, the shell has uncoiled. Gould pointed out that an uncoiled shell allows a gastropod to grow flexibly around obstacles or to reach out towards sources of food. Gould made a careful study of one of these groups and argued that none of its member species had ever managed to re-evolve a coiled shell. He suggested that the uncoiled gastropods had become so committed to their new way of life that natural selection could not return them to their former coiled glory.
Now, on the 110th anniversary of Dollo's Law, comes a fascinating report that challenges Gould. In a paper published online today in the Proceedings of the Royal Society of London, biologists Rachel Collin and Roberto Cipriani take a look at another group of gastropods--the Calyptraeidae, which includes slipper shell limpets, cup-and-saucer limpets, and hat shells. Out of 200 species, just a dozen or so are coiled. It used to be thought that the coiled species branched off first, before the common ancestor of the remaining species lost its whorl.
But that's not what shook out when the biologists constructed a family tree for these gastropods by sequencing three different genes in 94 species. They discovered that the gastropods actually re-coiled on at least two separate occasions.
At first this seems hard to swallow. Fossils of these particular shells suggest that they had been uncoiled for anywhere between 20 and 100 million years before re-coiled species arose. How could the genes for coiling have survived all that time? Collin and Cipriani point to a study that came out earlier this year on stick insects. That study showed that some stick insects lost their wings, but that their descendants re-evolved them many times over. These re-coiled limpets may be rare, but they are not flukes.
The answer to these puzzles appears to lie in the genes that assemble these animals. In the case of stick insects, the genes for building wings were probably preserved because they continued to do something else in another part of the body--they built legs. Collin and Cipriani propose a related hypothesis for the shells. The re-coiled gastropods develop directly from eggs, but many other gastropods have distinct stages in their life cycle. As larvae, they develop one type of shell, and then as adults, they develop a completely new shell. Collin and Cipriani envision a lineage losing the coils in its adult shell, but still retaining them in its larvae. So the coiling genes were still on active duty for millions of years. Then, in some lineages, these larval coiling genes were borrowed to build coiled adult shells. And finally, through other evolutionary changes, these gastropods lost their larval stage and simply developed coiled shells from the start.
So evolution can, it seems, double back on itself sometimes. But only if it tucks away the secrets of the ancestors in a safe place.


Chris Mooney, CalPundit, Signal+Noise and others have been doing a great job of keeping track of the woeful textbook battles down in Texas. The Board of Education there has been arguing over how evolution should be presented in the textbooks they're about to buy for the state's high school students. The Discovery Institute, the headquarters of "Intelligent Design" proponents, has been lobbying them hard to present their ideas on equal footing with those of evolutionary biology. It looks this morning like they've lost (again).
The conservative members of the board are disappointed--they say they wish that textbooks weren't so "dogmatic" about evolution. The Fort Worth Star-Telegram article to which I link above quotes Don McLeroy, a Republican board member, as saying, "People don't realize the threat of scientific dogmatism [with regards to] evolution in our society."
You hear this rhetoric a lot these days from the various opponents of evolution. They claim that we stifle the pursuit of knowledge and the full debate of ideas if children are taught evolution without equal time for alternative ideas. It has a nice ring to it, until you stop and think about what they are actually saying. You could just as easily undermine the teaching of any science in the United States with the same line of pseudo-reasoning.
Imagine that a group of people don't like genetics. They don't like what it implies about human nature, for example--that our personalities and actions are influenced, to some extent, by molecules we inherit from our parents. That's bad for society. So they set out to discredit genetics--not in the scientific arena, but in the court of public opinion. They carefully cherry-pick fragmentary information from the scientific literature. They point out, for example, that geneticists themselves claim that only 2% of the genome actually codes for protein. What does the other 98% do? They don't know! The human genome has been sequenced, and it turns out that humans have 30,000 genes. How do they manage to become such complex organisms with only twice the genes of a fruit fly? They don't know! These dogmatic geneticists claim that all organisms use DNA as the basis of heredity. But, the anti-geneticists point out, the geneticists have yet to actually show that DNA even exists in over 99% of all species on Earth!
Well! It's obvious that geneticists are pushing genetics out of some hidden ideological agenda which will ruin the nation. Clearly we cannot allow them to dominate the classroom. Until every last species has had its genome entirely sequenced, until we know what every last nucleotide does in every genome, there must be room for alternative ideas. Now, we're not talking here about so-called alternatives that are really just genetics in disguise. There are those out there who argue that the DNA-to-RNA-to-protein paradigm has to be broadened. They say, for example, that genes don't always code for just a single protein but can be spliced into different versions. They also say that a lot of RNA that never gets made into protein still plays a big role in regulating the production of other proteins. But they're just tinkering with genetics in order to save it from itself!
No, what we need is some new thinking. And by new thinking, we mean going backwards fifty, a hundred, maybe 300 years. Let's teach Lysenko. Even better, let's teach the homunculus--the little pre-formed person that was believed in the 1600s to be lurking in every sperm. We can trick out the homunculus with jazzy terminology we borrow from information theory--let's say that the human body is too complex for blind genes to form it. Ergo homunculus! The fact that not a single scientific paper in a peer-reviewed biology journal has been based on the homunculus is no reason not to teach it to children. Those journals are just the tools of the geneticists, anyway. We need to teach the controversy!
You get the idea. Apply it as you wish to chemistry, physics, geology, and so on. And watch America's already unsteady grasp on science slip even further.


The other day I (among others) came down on Gregg Easterbrook for his poor grasp of science. Finding myself procrastinating today, I wandered over to his blog and had yet another good laugh. In a post today, he actually displays some interest in evolutionary biology. After discussing some work suggesting that wine might be able to prolong life, he gets into the evolution of longevity. I raised my eyebrows at this point, thinking perhaps he'd moved away from the muddled stuff he's written about evolution in the past. But then the goofiness returns.
First he describes how experiments to extend the lifespan of flies and other lab animals with a low-calorie diet make them sterile, and declares, "It's as if evolution declared: If you're going to have sex and make more of yourselves, then you must die and get out of the way."
Then later, he argues that "evolution seems to have wanted us to grow to sexual maturity, reproduce, care for young through infancy, and then be gone. After that, natural selection loses interest in us entirely, evolving little if anything in the way of life-extension. The low-calorie-diet cellular defense may be in our genes to increase an organism's odds of living until reproduction through the famines and poor hunts that must have characterized the primordial world."
If you can manage to hack your way through the vague language, you get to an insoluble contradiction--either longevity means you can have kids, or it means you can't. Which is it?
Easterbrook's super-simplistic picture of evolution fails in other ways as well, such as the way he writes about how evolution "wants" anything at all. (Does an apple fall because gravity wants it?) And while he's right to say that senescence is shaped by evolution, he offers a one-size-fits-all explanation. He has no way of explaining why humans live for decades while many species live only a few days or weeks. Why do women experience menopause long before they die? Evolutionary biologists see longevity as the product of several trade-offs. It depends in part on the risk of death from predators, the cost of producing offspring, and the advantages or disadvantages of older members in a group. There's some evidence, for example, that menopause is part of a life-history strategy to allow human grandmothers to invest in the success of their grandchildren, rather than have more children of their own.
But in the end, Easterbrook's not really interested in these mundane details. He concludes his biology lesson this way:
"The fact that our bodies seemed designed to live much less longer than possible seems evidence that the human form is the result of an unguided natural process. Perhaps God struck the spark of life, then allowed evolution to determine the rest. If God made us specifically, why cause our lives to be needlessly short? I guess we can be allowed to dream that the reason is God is eager to show us something better than this world."
Take a moment, if you need to, to reread this. Yes, he really did write that. I guess I can be allowed to dream that God is waiting most eagerly for the mayfly, which He made to live only a day.


My hotel here in Wisconsin has a great high-speed connection and I have some downtime, and so I'll post on a really interesting paper that just came out that may tell us a lot about how we got so complex.
When I say "we," I'm speaking very broadly. Humans, other mammals, reptiles, birds, amphibians, and fish are all very complex, particularly compared to our closest invertebrate relatives. The picture I've attached here is of Ciona, one of these closest relatives. Little more than a small sleeve-shaped filter feeder, it's not too impressive. In particular, its body is not too complicated. It doesn't have ears, eyes, a nose, a stomach, a liver, or the many other organs that vertebrates have--organs that have to be constructed from many kinds of cell types.
Scientists have been studying the genes of Ciona and our other invertebrate cousins to find some clues to what happened to give rise to that complexity. One possibility is that in our lineage, a big portion of the genome was duplicated--or perhaps the whole genome was duplicated. Perhaps it even doubled more than once.
In the latest issue of Genome Biology, Spanish researchers take a new look at Ciona's genome. They looked at so-called "mobile elements" in these animals' DNA. Mobile elements are not actually genes, but viral-like stretches of DNA that make copies of themselves and insert them into other places in the genome. We humans have a staggering amount of mobile elements or their defunct relicts--perhaps close to half of our genome is composed of them. Most vertebrates studied have hefty amounts of mobile elements too. Among other branches of life, some have lots of this jumping DNA, while others have little.
The Spanish researchers have found that Ciona has very little jumping DNA. On top of that, it doesn't defend against jumping DNA the way vertebrates do--by clamping down on these stretches of the genome in a process called methylation to keep the jumping DNA from jumping. This abundance of mobile elements, and the methylation defense against it, seems to be an innovation of vertebrates, not seen in our closest invertebrate relatives.
So what may have happened some 550 million years ago is that our genomes duplicated. This opened up the possibility for jumping DNA to move around the genome without causing too much harm, since each gene came in pairs. If one gene got knocked out by an intruding mobile element, the other gene could still operate. Vertebrates evolved methylation as a way to keep these mobile elements from getting too far out of control. But mobile elements may also have played a positive role in vertebrate evolution--they became the main source of mutations, carrying surrounding parts of genes with them to new places and creating new genes. These mutations were the raw material for evolution. New genes led to new cell types, and a new way of living.


Evolution is nature's great R&D division. Through mutation, natural selection, and other processes, life can find new solutions for the challenge of staying alive. It's possible to see a simplified version of this problem solving at work in the lab. The genetic molecule RNA, for example, can evolve into shapes that allow it to do things no one ever expected RNA to do, like join together amino acids. Over millions of years, evolution can solve far bigger problems. How can a mammal become an efficient swimmer? How can a bug fly?
Humans would like to build ocean-going vehicles as efficient as dolphins, and miniature robots as efficient as flies. For these and many other wish-list items, researchers are turning to the products of evolution for inspiration. Last year in Popular Science I wrote about one of the most interesting people doing this kind of work right now, the MacArthur genius grant winner Michael Dickinson of Caltech. Dickinson is figuring out how flies fly. It's surprising that they can fly at all, actually, since simple engineering calculations would suggest that they can't even get off the ground. But those calculations are based on our own crude notions of aerodynamics. When it comes to flight, it's hard to imagine a world beyond fixed wings. Dickinson has shown that by continually adjusting their wings with tiny tilts and twists, flies take advantage of various loopholes in the laws of aerodynamics. They don't need a supercomputer to figure these movements out. In fact, they actually have only a few thousand neurons. Their flight algorithms are installed in their anatomy with an elegant simplicity that humans can only ape. (Dickinson is now involved in a project to build insect-sized robots based on the principles he's uncovered.)
Evolution has found at least three other solutions to the problem of flight. Birds and bats can fly, birds with airfoils of feathers, and bats with the webbing between their fingers. Evolution's third solution disappeared over 65 million years ago. Pterosaurs, close relatives to dinosaurs, had hands that stretched out into absurdly long spars. Draping down to their feet were two sheets of membrane that they flapped, creating lift. It's a solution so weird that you might doubt that it could really let an animal fly. But pterosaurs thrived for some 150 million years, taking all kinds of forms that foreshadowed today's birds--from little pigeon-sized flappers to flamingo-like lake-feeders to gigantic soaring species the size of small planes.
We can't watch pterosaurs in flight, but their fossils preserve a few clues about how they managed to stay airborne. In Nature today, a team of paleontologists described the shape of pterosaur brains. While their brains rotted away long ago, some pterosaur skulls are preserved well enough to show the shape of the brains they contained. The scientists scanned the skulls and then reconstructed the brains, comparing them to those of birds, dinosaurs, and other animals. One important thing they found was that regions called the floccular lobes (located in the back of the brain) were huge compared with those of birds or mammals. These lobes may have helped pterosaurs stay balanced, because they take in information from the semi-circular canals of the ear. But they are also involved in the brain's awareness of the body--otherwise known as proprioception. As paleontologist David Unwin points out in an accompanying commentary, proprioception may have been a crucial sense for pterosaur flight. New fossils show that pterosaur wings were not just simple sheets of tissue, but had lots of muscles and even tendons running through them. It's possible that pterosaurs could make lots of fine adjustments to the shape and angle of their wings by tightening or loosening different patches of them (a fine-tuned strategy that reminds me of the tricks of insect flight). Big floccular lobes would be necessary to keep these complex wings under control.
What makes this new research particularly interesting at the moment is an article in the current issue of Popular Science by Carl Hoffman about engineers who are trying to build "smart wings" for airplanes. These wings would be embedded with sensors that could detect the changing air flow around them, and could respond by altering their shape on the fly (sorry). Smart wings could make planes faster, nimbler, and more efficient. The engineers profiled in the article take birds as their model, but when you think about it, bird wings are pretty remote from what they have in mind. The engineers do not plan on building wings out of giant feathers. The flat sheets that pterosaurs used are a far better analog. And the new research on their fossils suggests that they may have been remarkably smart wings at that. Aeronautical engineers would be well advised to invest in some rock hammers.


Loyal denizens of the blogosphere will forgive me if I begin this post by sketching out the details of the recent Gregg Easterbrook affair for those who haven't kept up. Easterbrook, a senior editor at the New Republic, started up a blog recently where he cranked out postings at a feverish pace about all sorts of stuff ranging from politics to religion to science. Recently, he questioned the conscience of Jewish movie executives who allowed Quentin Tarantino's movie, Kill Bill, to be made. A furor ensued, and Easterbrook lost his column with ESPN Magazine (owned by Disney, the same company that produced Kill Bill). Easterbrook apologized for mangling his words.
As David Appell and Atrios point out, the mangling continues. Easterbrook has now got a post about research on extra dimensions in the universe, and says what's really interesting is the possibility of the existence of a "plane of spirit." The only reason that physicists don't take it seriously, Easterbrook contends, is because "to modern thought, even one extra spiritual dimension is a preposterous idea."
I predict more such mangling in the future. For some years now, I've read Easterbrook's occasional pronouncements on evolution and have shaken my head. He likes to call evolutionary biologists fundamentalists, and claims that Intelligent Design is a "sophisticated theory now being argued out in the nation's top universities." (I've visited a fair number of the top biology departments, and I can vouch that Easterbrook's wrong.) It's a "rich, absorbing hypothesis," he crows, "the sort of thing that is fascinating to debate, and might get students excited about biology class to boot." The quotes come from a 2000 Wall St. Journal Op-ed piece. You can read the rest here, at the main web site for Intelligent Design agitprop.
In both physics and biology, Easterbrook seems to use his own personal neat-o-meter to decide what is a legitimate scientific question. Wouldn't it be neat if there was a hidden spiritual plane just like the planes of string theory? Wouldn't it be neat if you could prove that life was designed? When anyone brings up the flakiness of his musings, Easterbrook claims that mainstream science is just as flaky. Billions of dimensions? Hah! "Yet which idea sounds more implausible--one unseen dimension or billions of them?" (Actually, Gregg, it's more like 10 dimensions.) Rigorous experiments on possible precursors to DNA and cells? Hey, no one was there, so any theory that's fascinating to debate is worth teaching in the classroom. Besides, the kids get sooo bored when you bring out those real papers from peer-reviewed journals.
It amazes me that Easterbrook continues to trot out misinformed musings about science with a mysteriously authoritative tone. I'm reminded of Tom Wolfe's dissection of Susan Sontag's grand pronouncements in "In the Land of the Rococo Marxists"--"Who was this woman?...Perhaps she was exceptionally hell-bent on illustrating McLuhan's line about indignation endowing the idiot with dignity, but otherwise she was just a typical American intellectual of the post-World War II period. After all, having the faintest notion of what you were talking about was irrelevant."
As Atrios rightly points out, "physicists understand, even if Easterbrook does not, what their posited extra dimensions mean." And evolutionary biologists understand what it takes to establish and investigate a scientific theory about life--even if Easterbrook clearly does not.
Correction: Thanks to Steve, who in the comments pointed out that Easterbrook's column is on ESPN.com, not ESPN magazine.


When Charles Darwin was thrashing out his theory of evolution, he would doodle sometimes in his notebooks. To explain how new species came into existence, he wrote down letters on a page and then connected them with branches. In the process, he created a simple tree. Across the top of the page, he wrote, "I think."
That single tree has given rise to the thousands of trees that are published in scientific journals these days. A particular tree may show that humans are more closely related to chimpanzees than to gorillas. It might show how the SARS virus in humans descends from viruses in other animals.
When you look at the picture of a tree in a scientific paper, it is easy to take it as an illustration of an unadorned fact. That is not, however, how science works. A tree represents a hypothesis that offers the best explanation of the data at hand. It shows the most likely pattern by which new species might have branched off one another, taking on new traits along the way, and giving rise to the range of species a scientist is studying.
These hypotheses are not simple to come by. Large-scale studies of phylogeny only became possible when computers began turning up on the desks of biologists. You need that computing power because even in a simple comparison of a dozen species, there are so many alternative trees to test. Say you have three species, A, B, and C. A and B might be more closely related, or maybe A and C, or B and C. Three choices. But as you add more species, the possibilities explode to millions and more. Sifting through those possibilities takes both gigaflops and smart statistics.
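If you want to watch the explosion happen, here's a quick calculation in Python. It uses a standard result from the phylogenetics textbooks: the number of possible rooted, branching trees for n species is 1 x 3 x 5 x ... x (2n - 3).

def rooted_tree_count(n):
    # Multiply the odd numbers 3, 5, ..., (2n - 3).
    count = 1
    for k in range(3, 2 * n - 2, 2):
        count *= k
    return count

for n in (3, 5, 10, 20):
    print(n, "species:", rooted_tree_count(n), "possible trees")

Three species give the three choices above, ten species give more than 34 million trees, and twenty species give more than eight billion trillion.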
From life's long reign, we have relatively few pieces of information to figure out the shape of its tree. The first evolutionary biologists to draw trees could only compare features that they could see through a microscope or on a fossilized skeleton. These days, most trees are based on genes. Once scientists could sequence genes, they tapped into a far richer lode of information than previous generations could reach. What's more, genes offer a much crisper picture of the evolutionary process than, say, a horn or a petal. After all, mutations to genes lead to inherited changes in how the body develops. Whereas the change to the body may be hard to tease out, the mutation may be as simple as snipping out a few nucleotides in a gene sequence.
But gene trees are not unadorned facts, either. Some genes have evolved relatively quickly, so that if you compare them in different species that took millions and millions of years to diverge, it may offer a distorted picture of how they are related. On the other hand, a gene that evolves too slowly may not be able to distinguish the fine details of a recent explosion (like the cichlids of Africa that I wrote about recently). In bacteria and other single-celled organisms, the picture gets even more fuzzy when you consider the fact that they can trade genes with one another, rather than just inheriting them from ancestors. In some regions, the tree of life is more like a mangrove, with branches grafting together rather than splitting apart.
One convenient thing about building evolutionary trees is that you can get an idea of how much confidence to place in them. One way is to pick out a random subset of your data to base a new tree on. In some cases, the switch may produce a tree with a different shape. Perhaps just one section of it changes. Or perhaps the tree barely changes at all. By repeatedly testing the evidence in different combinations, it's possible to estimate how likely it is that each branch point is authentic.
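Here's a bare-bones sketch of that resampling trick (biologists call it bootstrapping). Everything in it is made up for illustration--three short sequences and the crudest possible tree-building rule, namely grouping whichever pair of species differs the least--but it shows the basic move: resample the alignment columns over and over and count how often the same grouping wins.

import random
from collections import Counter

ALIGNMENT = {                        # hypothetical aligned sequences
    "A": "ACGTACGTACGTACGTACGT",
    "B": "ACGTACGAACGTACTTACGT",
    "C": "ATGAACGAACCTACTTATGT",
}

def closest_pair(columns):
    # Return the pair of species with the fewest mismatches over the chosen columns.
    names = list(ALIGNMENT)
    def dist(x, y):
        return sum(ALIGNMENT[x][i] != ALIGNMENT[y][i] for i in columns)
    pairs = [(x, y) for i, x in enumerate(names) for y in names[i + 1:]]
    return min(pairs, key=lambda p: dist(*p))

length = len(ALIGNMENT["A"])
rng = random.Random(1)
votes = Counter(
    closest_pair([rng.randrange(length) for _ in range(length)])
    for _ in range(1000)
)
print(votes)  # how often each grouping won across 1,000 resamples

If the A-B pairing wins, say, 990 of the 1,000 resamples, you can report that branch with high confidence; if it wins only half the time, the data are telling you not to lean on it.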
Gene trees have shed a lot of light on the history of life. Just to pick one case among many, several different studies have strongly supported the notion that hippos are the closest living relatives of whales on land. But these studies are like telescopes for looking back in evolutionary time, and they are only as precise as their design allows. Studying one gene in all animals may give you a different picture of animal evolution than studying a different one. It's not as if one gene will point to snapdragons as the closest relative of fish, or link mushrooms and monkeys. But it can get hard to determine whether comb jellies are more closely related to jellyfish or to crustaceans, vertebrates, and other more complex animals. This may sound esoteric, but it's not really. If comb jellies are closer to us, scientists could find some important clues in them about our own evolution. If they're out on a more distant branch, they aren't so important to our own evolutionary story.
In recent years, some scientists have argued that the best way to bring the evolutionary telescope into tighter focus is to study a bunch of genes at once. Fortunately, in this age of genomics, we're swimming in genes. Scientists have just started running studies in which they compare dozens of genes in various species. The results have been promising. But until now, no one had looked systematically at how much help multiple genes could offer to unsolved mysteries in phylogeny.
All of this is a very long preamble to a fascinating study in Nature this week from Sean Carroll at the University of Wisconsin and some of his current and former students. They looked at seven species of yeast, all of whose genomes have been fully sequenced in recent years. They picked out 106 genes in all seven species, choosing them because they clearly show signs of being variations of each other, descended from a common ancestral gene that duplicated many times. Then they used each gene to come up with a tree showing how the yeast are related. Many of the genes produced different trees. Not surprising. What was surprising was what happened when they analyzed all 106 genes together. Suddenly, a single tree emerged as the most likely. And no matter how they tested the tree, they found 100% confidence at every node. As the authors note, this certainty is unprecedented, and they argue that they have established the evolutionary history of these seven species.
It seems that the annoying disagreements from individual genes fade away when a computer can crunch down on a lot of them. Carroll and his co-authors realized that they may have been indulging in overkill by using 106 genes, and so they narrowed down their data set to see how few genes they needed to get the same sort of overpowering results. They could get down to just 20 genes and still produce the same tree.
Carroll et al haven't found the guaranteed method to figure out every evolutionary tree. Each group of species will have its own peculiarities to take into account. But their astonishing results offer a very sunny forecast for phylogenies in the next few years. The days of "I think" may be over.


The Great Lakes of East Africa swarm with fish--particularly with one kind of fish known as cichlids. In Lake Victoria alone you can find over 500 species. These species come in different colors and make their living in many different ways--sucking out the eyeballs of other cichlids, scraping algae off of rocks, and so on. What's strange about all this is that the Great Lakes of East Africa are some of the youngest lakes on Earth. By some estimates, Lake Victoria was a dry lake bed 15,000 years ago. All that diversity has evolved in a very short period of time.
East African cichlids are therefore not just pretty fish. They are natural experiments in evolution--in particular, in the capacity that animals and other organisms have to explode into new forms. These adaptive radiations have happened many times over the history of life, at lots of different scales. The so-called Cambrian explosion 530 million years ago, for example, saw the rapid rise of many different kinds of animals, including our own earliest vertebrate ancestors.
In the PNAS Early Edition, Japanese biologists pinpoint one ingredient in the cichlid explosion: turning a gene into more than one protein. The rule of one protein for one gene is at the heart of the old Central Dogma of molecular biology, but in recent years it's become abundantly clear that the genome operates in a much more sophisticated way than that. When the DNA of a gene gets converted into a template of RNA, different segments of the gene may get spliced together to create different sequences. You can get hundreds, even thousands of different proteins from the same gene through alternative splicing.
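A toy calculation shows how quickly the numbers grow. Suppose--and this is a cartoon of real splicing, which is far more constrained--that a gene's first and last exons are always kept while each middle exon can be independently kept or skipped. The exon names below are made up.

from itertools import combinations

exons = ["exon1", "exon2", "exon3", "exon4", "exon5", "exon6"]
middle = exons[1:-1]                 # the exons that can be kept or skipped

transcripts = []
for r in range(len(middle) + 1):
    for kept in combinations(middle, r):
        transcripts.append([exons[0], *kept, exons[-1]])

print(len(transcripts), "possible transcripts from", len(exons), "exons")
# Six exons already allow 16 transcripts; a dozen exons would allow more than a thousand.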
The Japanese biologists studied hag, a gene that's responsible for the patterns of pigments in cichlids. They found three different versions of hag RNA in African river cichlids, which have relatively low diversity. By contrast, the cichlids of the Great Lakes had two or three times as many versions. The researchers point out that it takes very little evolutionary change to create new alternative splicing. They propose that alternative splicing offered a quick way to create dramatically new color patterns in cichlids. For cichlids, pigment is a key to sexual success, and so adding new colors to a population of fish could quickly divide it up into smaller populations that only mated with one another. From there, it's a quick path to new species.
Alternative splicing had its mass media debut three years ago when the Human Genome Project found surprisingly few genes. How could we be so much more complex than a fruit fly if we had only twice as many genes? Alternative splicing seemed like the obvious answer--we must make a lot more alternative proteins from our genes. But the papers that followed, like this one, failed to find a correlation between complexity and alternative splicing. That doesn't mean alternative splicing hasn't been an important player in evolution--scientists just have to be careful that they look for the right aspects of evolution to see it at work.


Thanks again for the comments on my previous two posts about eugenics. As a novice blogger, I was surprised by their focus. I expected comments about the past--the historical significance of the eugenics movement--but instead the future dominated, with assorted speculations about the possible futures that genetic engineering could bring to our species. By coincidence, I've been thinking about the future as well, but from a different angle, thanks to a pair of papers in press at Trends in Ecology and Evolution. Instead of introduced genes, they're interested in introduced species.
Before humans came on the scene, animals and plants had a much harder time moving to new places. Unless they were birds or windblown spores, they couldn't cross oceans to new continents or islands. They had to wait for a land bridge like Panama to emerge, offering a path to a new habitat.
Then humans started moving species around. When Polynesians spread across the Pacific, for example, they brought pigs and rats in their canoes. As canoes gave way to tankers and airplanes, the traffic in species took a steep climb. Harold Mooney at Stanford has called this new arrangement the New Pangaea. In a sense, we've created a single supercontinent in which animals and plants can mingle across its length and width.
In one of the Trends papers, Julian Olden of Colorado State University and his colleagues wonder about what life on New Pangaea is going to be like. (At the Trends in Ecology and Evolution web site you need a subscription to get access to the full text, but you can check Olden's publications page.) Olden and co. see a pretty grim picture, although they leave open the possibility for some bright spots.
As more species shuttle to new homes, the authors predict that diversity will suffer. The damage will come at many different levels:
--Within a single species, for example, one population may acquire combinations of genes not shared by other populations. This is the key ingredient for making new species, but it's also important for letting the old species survive--when catastrophes strike (droughts, fires, etc.), the genetic variation can act as an insurance policy, allowing some members of the species to survive. According to the Olden paper, in the New Pangaea many species will become more uniform genetically. Cutthroat trout, for example, have been stocked all over the world from a single American population. The newly arrived trout breed with native subspecies, merging with them and blurring their distinctiveness. Captive breeding and genetic modification may make the problem worse.
--Just as there will be less variation within species, there will be less variation between species. Closely related species often still retain the ability to interbreed. In many cases, the only thing keeping them from hybridizing is being physically separated. Bring them together, and they'll really get together. The hybrids will mate with the members of the parent populations, and gradually the two species will merge.
--In other cases, the biological invaders will simply drive species extinct through competition. That's already happening now. Think of zebra mussels, which have devastated the diversity of native shellfish in the Great Lakes and surrounding rivers.
The result of all this loss of diversity, according to the Olden paper, is that nature is going to get homogenized to a level that may never have been seen before in the history of life. Some scientists say that we're now leaving the Holocene Epoch and entering the Homogenocene.
It's hard to say how nature will change on New Pangaea. There's some evidence that ecosystems get more vulnerable to droughts and other calamities when they lose diversity. The network of connections that keeps an ecosystem intact becomes simpler, and thus easier to tear down. New Pangaea may even affect the future of evolution itself. With less genetic variation, species may be less likely to adapt to a change in the climate or some new predator. It may become harder for new species to form. For one thing, so many once-isolated populations are hybridizing with invaders. For another, there will be no refuge from competition, where populations can experiment and find new ways of making a living.
It's a mistake to react to papers like this one by collapsing in complete apocalyptic despair. For one thing, other scientists have looked at the same heap of evidence and made different predictions. Dov Sax and Steven Gaines, two biologists from UC Santa Barbara, show in another Trends paper in press (pdf here) that invasions can add diversity to a region, not just take it away.
We're not talking about just adding zebra mussels to the plus column. Invaders that hybridize with native species don't always destroy diversity--as I mentioned the other day, hybrids can be the source of new species. And the invaders themselves evolve, as they adapt to their new home, in some cases potentially becoming entirely new species themselves. Sax and Gaines show some impressive data in their paper. On oceanic islands, for example, plant diversity has actually doubled over the past few thousand years, even when you take into account the species that have become extinct.
If biological invasions were the only force acting on diversity, some researchers even predict that the diversity of New Pangaea should ultimately rebound to pre-human levels. Michael Rosenzweig, a leading evolutionary ecologist at the University of Arizona, summarizes his argument in this paper, starting on page 10. (But see this opposing view from Michael Collins at the University of Tennessee.)
It would be an even bigger mistake to use the work of Sax and Gaines to say that everything is just fine and that any concern about the future is a symptom of eco-freak hysteria. Biological invasions may or may not cause an overall loss of species over the long term. But that's not the only force that's shaping the New Pangaea. The space available for species is also shrinking, as forests, prairies, and other habitats contract. That means that the global level of biodiversity will shrink. And so while we may be dealing some species a winning hand at the moment, they'll be inheriting a world that has suffered serious ecological damage--which may make their victory a Pyrrhic one.
We, of course, are the biggest winners of all on New Pangaea, having put ourselves on every continent. The big question for us is what we're winning. To me, it's a question that's far more important to our future than whether a trip to the doctor's office will add 10 points to your unborn child's IQ.


Ask and ye shall receive. In a recent post on eugenics, I claimed that the connection between early 20th century genetics and early 21st century genetic engineering was weak. I asked if anyone thought I was wrong, and in no time I got a comment from Razib at Gene Expression.
He suggests that I'm limited by conventional preconceptions, taking issue with both of my points--first about the prospects of engineering intelligence and second about the prospects of a new species of engineered humans. I think he's got a stronger argument on the first point than the second.
On the first point, Razib argues that it wouldn't be as hard as I think to engineer more intelligence. I said maybe thousands of genes would have to be tinkered with, and he pointed out that if individual genes typically account for around 1% of the variation, it shouldn't take thousands of genes to engineer significantly brighter people. OK--I'll give in on the thousands, although nobody can really say what the exact number is. But even with hundreds (or even dozens) of genes, you're still dealing with a level of complexity that dwarfs anything I've seen reported in this area, even in mice. And if I'm blindered by conventional preconceptions, then at least I'm in good company. Here's an essay Steven Pinker wrote last summer that throws the same bucket of cold water.
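Here's the back-of-the-envelope version of that arithmetic as a small Python sketch. The numbers are assumptions for illustration (roughly 50% heritability, roughly 1% of the total variation per gene), not measured values:

    # Rough sketch of the arithmetic, using assumed numbers, not measurements.
    heritability = 0.50      # assumed fraction of variation in IQ due to genes
    per_gene_share = 0.01    # assumed fraction of total variation per gene

    genes_needed = heritability / per_gene_share
    print(f"On the order of {genes_needed:.0f} genes")  # ~50, not thousands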
On the second point--making a new species--Razib thinks that you could get enough barriers up around the new population of engineered humans to get speciation. He writes:
"...those barriers can be social, if some religious nutsos decided to create biphallic sons, there would be issues with these sons being able to get mates from the mono-phallic majority. Additionally, GE [genetic engineering] would by its very nature alter the ground rules for speciation as mutation in the context of genetic drift and natural selection plus physical barriers thrown up by geography, etc. might not be the only sources of reassortment & segregation of genes within a population...."
It's true that barriers can be social--songbirds develop new tunes that make them sexy only to certain females, for example. But you still need some serious isolation to get them singing a new song before you bring the new population back in touch with the old one. (Like putting them on another island for a while.) Otherwise, the differences just wash out. I suppose you can try to imagine some Dr. Moreau engineering men with twin-penises (along with bivaginal women, I guess?), but it just shows how far you have to go into X-Files territory to make an argument for speciation. Genetic engineering is certainly a form of mutation that the world has never really seen before. But that doesn't mean that it cancels all the rules about how new species form.


The folks behind the MacArthur genius grant chose wisely this week when they gave one to Loren Rieseberg, an evolutionary biologist at Indiana University. Rieseberg does fascinating work on the origin of new species (that little subject). Specifically, he's shown how new plant species emerge from hybrids. When two species of plants form a hybrid, it doesn't necessarily become a sterile dead end. In fact, hybridization is an important source of entirely new species. Rieseberg does his work mainly on sunflowers, and so whenever I walk past a charming row of them, I think of the weird inter-species mingling that may lurk in their past.


Today Daniel Kevles, a Yale historian, has an interesting review in the New York Times of a new book about eugenics. The book in question is War Against the Weak, by Edwin Black. It's a cinderblock of a book, and it's got a lot of chilling material to offer on how popular eugenics was in the United States in the earlier part of the 20th century. A lot of people sincerely believed that criminals, blind people, sick babies, social misfits, and non-Nordic immigrants had to be stopped from poisoning the American gene pool. We're not talking about a few racists here and there--we're talking about leading biologists, doctors, philanthropists, Congressmen, and Supreme Court justices.
I was curious to see whether Kevles--who is an expert on the history of eugenics--would react to the book the same way I did when I reviewed it last month for Discover. For the most part he does. For all the merit of Black's research, Kevles says, he tries too hard to turn a diffuse social phenomenon into a grand conspiracy. He tries to connect the dots into a straight line leading directly from the United States to the Nazi gas chambers. The truth is a lot more complicated, and even Black's own book undermines his argument.
But I was a bit disappointed to find Kevles winding up his review on a more positive note: "Black's book does prompt us to wonder what in medical genetics and biotechnology we are taking socially and morally for granted today that our descendants might indict us for tomorrow." The link between eugenics and today's research on DNA is pretty dubious. Black does everything he can to make it sound like the eugenicist conspiracy lives on in the biotechnology start-ups and genetics labs of the world. He even coins a label, "newgenics." He warns that the rich will make designer babies with genes for intelligence and good looks, and that these manufactured kids will give rise to a new species--a capitalist realization of the old dreams of a master race.
I don't doubt that some rich people will try to monkey with their kids' genes. I don't doubt that some genetic information may get into the hands of insurance companies and leave some people with the wrong alleles without coverage. But the fear that newgenics will make eugenics real at last--that it will alter the genetic make-up of the species--is a flaky one.
Of course, certain genes influence our intelligence, but each one has only a tiny effect. If you wanted to make your kid significantly smarter, you'd have to tinker with a huge number of them, perhaps hundreds or thousands. And you wouldn't just have to calculate each gene's individual contribution to intelligence; you'd also have to figure out the interactions between the genes. And then you'd have to grapple with the fact that genes are estimated to contribute only half the variation in intelligence, with the environment making up the other half. How genes and environment interact, no one really knows.
And the idea that the rich will create a new species is just as silly. For a new species to incubate, it needs barriers around it to keep genes from the old species from flowing in and its new genes from flowing out. It also needs a population big enough to survive inbreeding and the random flukes of life that might snuff it out. If a lot of people colonized another solar system, you could get another species. But I have a hard time believing that genetically modified people could launch a species of their own on Earth. You'd have to get tens of thousands of people to have sex only with one another for centuries. Good luck.
I'd be happy to hear whether anyone thinks I'm wrong. But as far as I can see, here's what's going to happen: some rich control freaks are going to screw up their kids' lives, and some vulnerable people are going to be disenfranchised. In other words, more of the same.


Over the past couple of weeks an unplanned experiment has taken place that shows what sort of science makes it into the popular consciousness and what doesn't. We've had three pieces of research on the same evolutionary puzzle in the same high-profile journal (Nature). One was all over the place--I'll just link to the USA Today article as one example. The other two vanished with barely a peep.
All three papers tackle the puzzle of kindness. Why do we cooperate when there are powerful evolutionary forces that would seem to work against cooperation? If I waste a lot of my time and energy helping you, I have less time to work toward my own reproductive success. And if we do cooperate for whatever reason, shouldn't there be a big evolutionary advantage to cheating--reaping your kindness and offering none of mine? We humans cooperate a lot, but our cooperation is mediated by cultural institutions (governments, religions), and even more so by language. So could cooperation in our species have existed before the rise of human culture and language?
Big questions, with many inviting spots into which you can sink your teeth. Frans de Waal of Emory University sinks his teeth into monkeys and apes. He has argued for a long time that many primate species cooperate in very complicated ways. If he's right, the roots of human cooperation were already in place millions of years ago, in the common ancestor we shared with other primates. Human cooperation depends in large part on our strong sense of fairness. It's common for people in any culture to get mad when they don't get their fair share, or when they see someone else getting cheated. Well, as de Waal and fellow Emory primatologist Sarah Brosnan reported last week, capuchin monkeys get mad, too.
They trained the monkeys to perform a simple exchange: they gave the monkeys a token, which the monkeys then returned in exchange for a cucumber. The capuchins learned fast how to play the game. But if they saw that another capuchin was getting rewarded with better stuff--i.e., a grape--they might heave back the cucumber, or refuse to give up the token. Watching another capuchin get rewards for doing nothing made them even crankier.
The press was all over that one. The New York Times ran an editorial about what capuchins can teach us about making America fairer. Now I'm all in favor of Frans de Waal getting attention, since his work is so fascinating. Capuchins are a lot more distantly related to us than chimpanzees are, so we could be talking about a sense of fairness dating back 30 million years in our ancestry. But I don't think the attention actually corresponded that much to the importance of his work. If people really are interested in the evolution of cooperation, they should have been just as interested in a couple of wonderful papers that Nature published two weeks earlier. Those scientists did something more than observe cooperation. They watched cooperation itself come into being.
Instead of capuchins, the authors of these studies work with bacteria. Bacteria may not help at barn raisings or drive on the right side of the road, but many species are incredibly social. The soil bacteria Myxococcus xanthus are the Roman army of the microbial world. Instead of slithering alone through the muck, hundreds of thousands of the microbes join together to lay down a matrix of fibers several inches wide. Then the bacteria race around the network in great swarms, skittering along the fibers with their glue-tipped legs. Any individual bacterium could sneak along the network without helping build it, but most M. xanthus make the sacrifice for the collective.
Greg Velicer and Yuen-tsu Yu knocked out the gene that makes the main protein in the glue-tipped hairs of M. xanthus. Suddenly, the bacteria couldn't use their highways. They could still move around by themselves with a back-up form of transportation--they glide slowly on a carpet of goo that they shoot through holes in their membranes. But after a few months, the bacteria started building networks and swarming again. Studies on these bugs showed that they had picked up several mutations during that time, evolving a new way of building their highway system. (The picture of the petri dish at the top of the post shows all the stages of the process. The cloud at the top of the dish is a wild-type colony. The two little clumps are mutants who can't form networks. And the squiggly psychedelic fireworks at 2 and 8 o'clock are re-evolved matrix builders.)
Paul and Katrina Rainey, two New Zealand scientists, reported on another matrix created by Pseudomonas fluorescens, a species that feeds on organic matter. They put P. fluorescens in a beaker of broth, and before long a mutant strain began to emerge. Instead of floating in the broth, the mutants create a mat on the surface. (The mutants are called "wrinkly spreaders.") By living in this new spongy habitat, the wrinkly spreaders can consume lots of oxygen from the air while still enjoying the organic matter in the broth.
The Raineys also found that after a while, cheaters evolve to take advantage of the wrinkly spreaders. These guys live in the mat without contributing to it, and actually thrive more than the wrinkly spreaders themselves. The cheaters can doom the entire colony, because their dead weight makes it easier for the mat to sink to the bottom of the beaker, where the bacteria can't get any oxygen and die.
I don't think it's just geekiness on my part that makes me think that these projects are as cool as de Waal's irritated monkeys. And yet I can find barely a nod to them in the press. I suspect that people are drawn to de Waal's work for two reasons. One: the monkeys are cute. Two: it's easy to look at the monkeys as if they were animals in Aesop's fables, dramatizing human nature. But capuchins aren't metaphors for progressive tax codes or faith-based initiatives or whatever platform someone's pushing. The research on them is important because it gets at evolutionary principles, at the way moral systems can be encoded in emotions rather than computed by reason. And the bacteria papers are just as important because they show how even microbes can find a payoff in working together. In fact, these studies are arguably more important, because they show cooperation coming into existence time after time, which demonstrates just how powerful an evolutionary force it can be. (And it's also another headache for those who would say that something like cooperation could never evolve. Which is always a plus.)