
I've got an article in today's New York Times about one of my perennial fascinations—musical hallucinations. One of the reasons that I find this condition so interesting is that it gives us a look under the neurological hood. Our brains do not simply take in objective impressions of the world. They are continually coming up with theories, and they test them against perceptions every moment of our waking lives. It would be impossible to test them against a complete picture of reality, because the world is simply too complex and ever-changing. Instead, the brain makes quick judgments on scraps of information, revising bad theories that don't make good predictions or using good theories as the basis for actions. Some scientists argue that musical hallucinations are evidence that our brains even make theories about music. When we hear stray sounds, we match them to tunes in our memory, in a sort of internal game of Name That Tune. Unfortunately, some people can't test their theories well enough, it seems, and so they wind up thinking a church choir is singing in the next room, when in fact there is only silence.
There's one line of evidence that supports this explanation of musical hallucinations that I didn't have room in the article to explore. It turns out that some people have an analogous problem with their vision. They suffer from a condition known as Charles Bonnet syndrome, in which they have visual hallucinations. In some cases, the hallucinations are nothing but textures or wallpaper-like patterns. In other cases, people may see a row of people floating in front of them. Reginald King, the elderly gentleman who described his musical hallucinations to me, also suffers from Charles Bonnet syndrome. He told me about how he would see patterns on the ceiling, or sometimes a cat or a dog running across his bed.
Victor Aziz, one of the scientists I interviewed for this story, has noticed that some other people also experience both visual and musical hallucinations, and he doesn't think it's a coincidence. It's possible that the brain regions that process complex structures in sound and in sight can short-circuit in similar ways, producing similar hallucinations. And interestingly, brain scans of people with visual hallucinations are strikingly similar to those of people with musical hallucinations. In each case, the higher information-processing centers become active even when the regions that normally relay information from the senses are quiet. Once we accept a theory of what we see, it feels as real as our theories of what we hear.


How long can an idea stay tantalizing?
Back in 2003, I blogged about an experiment that suggested, incredibly enough, that our long-term memories are encoded by prions— the misfolded proteins that are generally accepted to be the cause of mad cow disease. The evidence came from studies of a protein (known as CPEB) that plays a key role in laying down memories in neurons. Scientists found that it had a structure much like prions. When a normal protein misfolds and becomes a prion, it acquires the ability to lock onto other proteins and force them to misfold in the same way. The misfolding can spread until it has devastating results—as in the case of mad cow disease, in which prions from cow brains get into our own brains. But the discovery of prion-like memory proteins hinted that maybe they could play a beneficial role as well.
Not long after I blogged on this research, I ran into a neuroscientist I know (and who shall remain nameless). He sneered at the prion paper, pointing out that the authors of the paper didn't show that the protein acts like a prion in neurons. Instead, they had only shown that it acts like a prion when it is inserted into yeast. They took this peculiar step because yeast have prions, and they had the tools to study prion behavior in yeast. It is far harder to experiment with prions in neurons. But this neuroscientist I spoke to thought they shouldn't have gone public until they had taken this last, hard step.
I've been waiting ever since. And in the June issue of Nature Reviews Genetics I came across a paper entitled "Prions as adaptive conduits of memory and inheritance." One of the co-authors is Susan Lindquist of MIT, one of the scientists who made the memory-prion connection back in 2003. Eager for an update, I read on. And what do I find? There's a lot of new research on the role of prions in yeast, where they may play an important role in evolution. But as for prions and memory, there's nothing beyond what Lindquist had to offer in 2003.
My patience has probably been irreparably damaged by today's minute-by-minute news cycle, but I have to wonder why we're still in prion-memory limbo. Is the next experiment too hard to do? Does it take years to finish? Or is the link between memories and prions just not there?
Just as I'm tempted to give up hope, out comes another paper. It may not seal the deal, but it at least keeps me eager for more. Psychiatrists in Switzerland were inspired by the original prion-memory experiments to look for evidence in people's genes. Some studies have suggested that the strength of people's memories is at least partly the result of genetic variation. But no one knew which genes were involved. So the psychiatrists took a look at the prion protein gene (PRNP); the protein it encodes causes mad-cow-type disease when it misfolds. (No one is sure what it does for us in its normal shape.) People have different versions of PRNP, some of which are more prone to misfolding than others. The scientists genotyped 354 subjects to see which version they carried and then gave them a memory test.
In a paper in press at Human Molecular Genetics, they report that people with one or two copies of the misfolding version recalled 17% more information than those without a copy. It's a puzzling result for many reasons, not the least of which is the fact that the link originally proposed between prions and memory did not involve PRNP but CPEB. But it's enough to keep me wanting more.




Evolutionary psychologists argue that we can understand the workings of the human mind by investigating how it evolved. Much of their research focuses on the past two million years of hominid evolution, during which our ancestors lived in small bands, eating meat they either scavenged or hunted as well as tubers and other plants they gathered. Because our ancestors lived in this arrangement for so long, certain ways of thinking may have been favored by natural selection. Evolutionary psychologists believe that a lot of puzzling features of the human mind make sense if we keep our heritage in mind.
The classic example of these puzzles is known as the Wason Selection Task. People tend to do well on this task if it is presented in one way, and terribly if it is presented another way. You can try it out for yourself.
Version 1:
You are given four cards. Each card has a number on one side and a letter on the other. Indicate only the card or cards you need to turn over to see whether any of these cards violate the following rule: if a card has a D on one side, it has a 3 on the other side.

[Images of the four cards appeared here.]
Version 2:
Now you're a bouncer at a bar. You must enforce the rule that if a person is drinking beer, then he must be over 21 years old. The four cards below each represent one customer in your bar. One side shows what the person is drinking, and the other side shows the drinker's age. Pick only the cards you definitely need to turn over to see if any of these people are breaking the law and need to be thrown out.

[Images of the four cards appeared here.]
The answer to version one is D and 5. The answer to version two is beer and 17.
If you took these tests, chances are you bombed on version one and got version two right. Studies consistently show that in tests of the first sort, about 25% of people choose the right answer. But 65% of people get test number two right.
This is actually a very weird result. Both tests involve precisely the same logic: If P, then Q. Yet putting this statement in terms of social rules makes it far easier for people to solve than if it is purely descriptive.
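To make the equivalence concrete, here is a minimal sketch of the check both versions demand. A rule of the form "if P, then Q" can only be violated by a card showing P (its back might not be Q) or a card showing not-Q (its back might be P). The card sets below are illustrative stand-ins of my own; only D, 5, beer, and 17 are implied by the answers above.

```python
# Illustrative sketch: which cards must be turned over to test "if P then Q"?
# Only cards showing P or showing not-Q can reveal a violation.
# Card faces other than D, 5, beer, and 17 are hypothetical stand-ins.

def cards_to_turn(cards, is_p, is_not_q):
    """Return the cards that could falsify 'if P then Q'."""
    return [card for card in cards if is_p(card) or is_not_q(card)]

# Version 1: if a card has a D on one side, it has a 3 on the other.
print(cards_to_turn(["D", "F", "3", "5"],
                    is_p=lambda c: c == "D",
                    is_not_q=lambda c: c.isdigit() and c != "3"))
# -> ['D', '5']

# Version 2: if a person is drinking beer, he must be over 21.
print(cards_to_turn(["beer", "coke", "25", "17"],
                    is_p=lambda c: c == "beer",
                    is_not_q=lambda c: c.isdigit() and int(c) < 21))
# -> ['beer', '17']
```

The logical structure is identical; only the framing changes, which is exactly what makes the difference in performance so striking.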
Leda Cosmides and John Tooby of the University of California at Santa Barbara have argued that the difference reveals some of our evolutionary history. Small bands of hominids could only hold together if their members obeyed social rules. If people started cheating on one another--taking other people's gifts of food, for example, without giving gifts of their own--the band might well fall apart. Under these conditions, natural selection produced a cheating detection system in the brain. On the other hand, our hominid ancestors did not live or die based on their performance on abstract logic tests. Rather than being a general-purpose problem-solver, the human brain became adapted to solving the problems that our ancestors regularly faced in life.
The Wason Selection Task has become the center of the debate over evolutionary psychology. Some critics, such as the French psychologist Dan Sperber, claim that Cosmides and Tooby can't make such strong statements about human reasoning from the Wason Selection Task. Others claim that the brain can't be sliced up into modules so nicely.
The controversy has taken a very interesting turn now, thanks to brain imaging. A team of Italian psychologists had people lie in an MRI scanner and work their way through a set of puzzles that followed the same line of logic as the ones I presented above. They then compared how the brain responded to the challenges to see if indeed the brain works differently when it is solving problems in terms of social exchange than when the problem is more abstract.
The psychologists didn't use a conventional Wason Selection Task like the ones above, because they wanted to make the problems as similar as possible, except that one dealt with social exchanges. Brain imaging requires this sort of strict experimental design, because it's very easy to see differences in brain activity that aren't actually relevant to the question a scientist wants to answer. For example, if one puzzle just so happens to involve picturing an object, some of the brain's visual processing may become active. So the researchers told their subjects that the puzzles would involve a hypothetical tribe. A purely descriptive puzzle might require subjects to consider the rule, "If a person cracks walnut shells, then he drinks pond water." The subjects might then see a set of cards that read, "He didn't drink pond water," "He didn't crack walnut shells," "He cracked walnut shells," and "He drank pond water." The researchers also had their subjects solve puzzles that involved social exchanges. The rule in these cases might be, "If you give me sunflower-seeds, then I give you poppy petals."
The psychologists report the results of the test in a paper in press at the journal Human Brain Mapping (click the html link to get the whole paper for free). The results are fascinating--although the researchers don't claim to have settled the debate over the cheater module. Both the social exchange and descriptive version of the puzzle activated the same network of regions on the left side of the brain. One region (the angular gyrus) is considered important for semantic tasks. A second region is located near the left temple (the dorsolateral prefrontal cortex). It's essential for considering many different pieces of information at once. The third region, the medial prefrontal cortex, becomes active when people need to bear in mind a larger goal while they solve the many small problems it poses. Previous studies have shown that the left side of the brain plays a much more important role than the right in reasoning and coming up with explanations for how the world works in general.
Now here's the kicker: the social exchange version of the problem doesn't just activate this left-brain network. It also activates the same regions in the right side of the brain. In many studies, thinking about social situations has tended to turn on the right side of the brain more than the left, so in one sense this result isn't too surprising. But it is surprising when you consider that the descriptive version of the puzzle--the one that switched on only parts of the left side of the brain--also involved thinking about other people and their actions. You might think that would be social enough to engage any parts of the brain specializing in social thinking. Apparently not. Only when the puzzle involved rules for social exchanges did the right-brain network come on line.
Is this the cheater module? It's conceivable that the Italian psychologists tapped into a social brain circuit that isn't adapted specifically for enforcing social rules but for some broader class of social problems. It would be interesting to see whether a test other than the Wason Selection Task could trigger the same left-only versus left-plus-right pattern. The precise evolutionary forces that shaped this feature of the mind may not be clear yet. But this experiment is an important step towards working out the biology behind the strange results of the Wason test. Clearly, our brains throw a lot more neurons at logic problems when they concern our social lives instead of abstractions. Analytic philosophers are made, you could say, but political philosophers are born.
Update: 7:15 pm-- I decided to change the first version of the test to avoid ambiguity.
Update: Tuesday, 8:15 am-- Some commenters have argued that people do better with the bar version of the puzzle because people have more experience with it than with abstract logic. Actually, many variations of the puzzle have been tested out, and the same results emerge. Notice, for example, that the Italian scientists who did the most recent study put the puzzles in terms of a hypothetical tribe, with which the subjects had no experience at all. Despite this different format, almost precisely the same fraction of the subjects got the different versions right as in more familiar versions of the test, such as the bartending example.
Thanks also to the sharp readers who pointed out that the puzzles need to be If-Then propositions.


I've got an article in tomorrow's New York Times about a startling new way to control the nervous system of animals. Scientists at Yale have genetically engineered flies with neurons that grow light-sensitive triggers. Shine a UV laser at the flies, and the neurons switch on. In one experiment, the scientists were able to make decapitated flies leap into the air by triggering escape-response neurons. In another, they put the trigger in dopamine-producing neurons, and the flash sent healthy flies walking madly around their dish. (You can read the paper for free at Cell's web site.)
In working on this story, I was reminded of the research being done now with implanted electrodes, which I wrote about last year in Popular Science. Much of this research focuses on listening in on neurons to control robots or computers. But the electrodes have also been used to send electricity into the brain to control an animal. In one case, scientists steered a rat by sending jolts into its brain.
But those who feel anxious about the genetic engineering I write about tomorrow should bear a couple things in mind. First of all, this method only lets scientists turn on an entire class of neurons at once. All the escape-response neurons became active in the first experiment. All the dopamine-producing neurons became active in the second. That's a far cry from a complex set of signals that might make an animal carry out a complex behavior. But that's not what the scientists who designed this new method had in mind, anyway. They want to develop new ways to do experiments on the nervous system.
Still, science fiction writers should pay heed. It's conceivable, for example, that a completely unethical scientist could engineer similar triggers into a human brain (although the attempt could also fail completely). And another thing that inspires the sci-fi imagination is the experiment on dopamine-producing neurons. Dopamine is a neurotransmitter that gives the brain a sense of expectation and anticipation, priming it to learn how to gain rewards. It's also what cocaine exploits to produce its addictive pleasure. In other words, when the scientists switched on their laser, the flies got the biggest high of their lives.


Scientists studying people in minimally conscious states have published the results of brain scans showing that these people can retain a surprising amount of brain activity. The New York Times and MSNBC, among others, have written up accounts.
I profiled these scientists for a 2003 article in the New York Times Magazine, when they were at an earlier stage in their research. Things certainly have changed since then. When my article came out, hardly anyone had heard of Terri Schiavo, the Florida woman in a permanent vegetative state who is at the center of a battle between her parents, who want to keep her feeding tube in, and her husband, who wants it taken out. Since then, her case has made national headlines, and a law has been passed in her name. I for one will be paying close attention to how this new paper is received (and used) in the debate over Terri Schiavo, because I had the displeasure of watching my article get pulled into the debate and distorted for political ends.
The key point to bear in mind about this new research is that there's a difference between people in a permanent vegetative state and people in a minimally conscious state. Neurologists have developed bedside tests to determine which state a given patient is in. People in minimally conscious states show fleeting, but authentic, awareness of their surroundings, for example. People in vegetative states do not. Neurologists cannot make this diagnosis from the reports of family members, because it is easy to see awareness in a loved one when there is, in fact, none. That doesn't mean that family members are necessarily wrong if they say a loved one is aware. It's just that a doctor needs to test a patient objectively, using methods that don't rely on his or her own interpretation.
Some people have argued that this test is circular: people are simply defined as minimally conscious if they pass a test for minimal consciousness. But the designers of the test have shown that it does have predictive power. For one thing, people who rise to a minimally conscious state have a small but real chance of recovering consciousness (although they may never return to their former selves). People who stay in a permanent vegetative state for many years, by contrast, almost never recover.
The brain scan findings now being reported also strengthen the notion of a minimally conscious state. The researchers scanned the brains of patients diagnosed as minimally conscious, playing the voices of loved ones through headphones, scratching their skin, and running other tests to check the function of their brains. They found that the patients responded in important ways. Some patients responded to the recordings with strong activity in regions of the brain involved in language and memory, for example. But in the absence of stimuli, the patients' brains used less energy than a person's brain would under anesthesia.
On the other hand, earlier scans of people diagnosed as being in a permanently vegetative state showed at most only isolated islands of activity in the cortex, where higher brain functions take place. So the difference detected by bedside tests is mirrored by a difference detected in the brain scanner.
It's crucial neither to overplay nor to underplay the importance of this work. People who are coping with the staggering burden of a loved one in a truly permanent vegetative state should not see this as evidence that their loved one is conscious and simply "locked in" to an unresponsive body. Nor should pundits raise false hopes by claiming that this is the case.
But it is also true that people with impaired consciousness are not getting the attention they deserve, starting with a good diagnosis. Thirty percent of people diagnosed as being in a permanent vegetative state may actually be minimally conscious. It would be fantastic if some day doctors could make a precise diagnosis of brain-damaged patients simply by running them through some tests in a scanner. For now, though, only a handful of people with impaired consciousness in the entire world have been scanned at all. Eventually, it might be possible to use the knowledge gained from these tests to start finding ways to help people recover more of their consciousness, perhaps through brain stimulation. Today there's nothing a doctor can do but wait and watch.
Unfortunately, people with impaired consciousness are more likely to be simply warehoused, getting hardly any attention from a neurologist. Are we, as a society, ready to give these voiceless people the care they deserve?




I am sure that in 50 years, we are going to know a lot more about how the mind works. The fusion of psychology and genetics will tell us how our personality is influenced by our genes, and it will also show exactly how the environment plays a hand. The preliminary evidence is just too impressive to seriously doubt it. Likewise, I am sure that we will have a deeper understanding of how our minds have evolved, pinpointing the changes in DNA over the past six million years that have given us brains that work very differently from those of apes. Again, the first results can't help but inspire a lot of hope.
Given where I stand on all this, I would have thought that I'd enjoy Dean Hamer's new book, The God Gene: How Faith Is Hardwired into Our Genes. The time is ripe, judging from the string of books that have been published in the past few years on the link between religion and biology. I thought that Hamer, a geneticist, might be able to throw some interesting information into the mix, thanks to his expertise in behavioral genetics. The book turned out to be elegant and provocative, and, as I write in my review in the new issue of Scientific American, disappointingly thin on the evidence. From a single study that Hamer hasn't even published yet, he weaves an incredibly elaborate scenario in which faith is an adaptive trait. I wouldn't be surprised if faith is in some way the product of natural selection, but now is hardly the time to be writing a book claiming to have figured out its origins--not to mention making appearances on talk shows and the like. Too many links between behavior and genes have already crashed and burned (including some Hamer himself has made).
Update, 9/27: Scientific American has posted the review on their site

The soft spot on a baby's head may be able to tell us when our ancestors first began to speak.
We have tremendously huge brains--six times bigger than the typical brain of a mammal our size. Obviously, that big size brings some fabulous benefits--consciousness, reasoning, and so on. But it has forced a drastic reorganization of the way we grow up. Most primates are born with a brain fairly close to its adult size. A macaque brain, for example, is 70% of adult size at birth. Apes, on the other hand, have bigger brains, and more of their brain growth takes place after birth. A chimpanzee is born with a brain 40% of its adult size, and by the end of its first year it has reached 80% of adult size. Humans have taken this trend to an almost absurd extreme. We are born with brains that are only 25% the size of an adult brain. By the end of our first year, our brains have reached only 50%. Even at age 10, our brains are not done growing, having reached 95% of adult size. For over a decade, in other words, our brains remain unfinished.
It's likely that this growth pattern evolved as a solution to a paradox of pregnancy. Brains demand huge amounts of energy. If mothers were to give birth to babies with adult-sized brains, they would have to supply their unborn children with a lot more calories in utero. Moreover, childbirth is already a tight fit that can put a mother's life in jeopardy. Expand the baby's head more, and you raise the risks even higher.
Extending the growth of the brain obviously gave us big brains, but it may have endowed us with another gift. All that growth now happened not in the dark confines of the womb, but over the course of years of childhood. Instead of floating in an amniotic sac, children run around, fall off chairs, bang on pots, and see how loud they can scream. (At least mine do.) In other words, they are experiencing what it's like to control their body in the outside world. And because their brains are still developing, they can easily make new connections to learn from these experiences. Some researchers even argue that only after the brains of our ancestors became plastic was it possible for them to begin to use language. After all, language is one of the most important things that children learn, and they do a far better job of learning it than adults do. If scientists could somehow find a marker in hominid fossils that shows how their brains grew, it might be possible to put a date on the origin of language.
That's where the soft spot comes in.
The oldest hominids that look anything like humans first emerged in Africa about 2 million years ago. They were about as tall as us, with long legs and arms, narrow rib cages, flat faces, and small teeth. The earliest of these human-like hominids are known as Homo ergaster, but they rapidly gave rise to a long-lived species called Homo erectus. H. erectus probably originated in Africa, but then burst out of the home continent and spread across Asia to Indonesia and China. The Homo erectus people who stayed behind in Africa are probably our own ancestors. The Asian H. erectus thrived until less than 100,000 years ago. They could make simple stone axes and choppers, and had brains about two-thirds the size of ours.
Paleoanthropologists have found only a single braincase of a baby Homo erectus. It was discovered in Indonesia in 1936, and has since been dated to 1.8 million years old--close to the origin of the species. While scientists have had a long time to study it, they haven't made a lot of progress. One problem is that the fossil lacks jaws or teeth, which can offer clues to the age of a hominid skull. The other problem is that the interior of the braincase was filled with rock, making it hard to chart its anatomy.
In the new issue of Nature, a team of researchers addressed this problem with the help of a CT scanner. They were able to calculate the volume of the child's brain and to map the bones of the skull more accurately. As babies grow, the soft spot on their skull closes up and other bones are rearranged in a predictable sequence. Chimpanzees, our closest living relatives, close up their skulls in the same pattern, with some small differences in timing. The H. erectus baby, its skull shows, was somewhere between six and eighteen months old. Despite its tender age, it had a big brain--84% the size of adult Homo erectus brains as measured in fossil skulls.
A single battered braincase still leaves plenty of room for uncertainty, but it's still a pretty astonishing result. At a year old, this Homo erectus baby was almost finished growing its brain. It spent very little time developing its brain outside the womb, suggesting that it didn't have enough opportunity to develop the sophisticated sort of thinking that modern human children do. If that's true, then it's unlikely it could ever learn to speak. If these researchers are right, then future CT scans of younger hominid skulls should be able to track the rise of our long childhood.
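To see why that 84% figure is so striking, here is a minimal sketch that simply lines up the numbers quoted in these two posts (the macaque value is its at-birth figure, since no one-year number is given above):

```python
# Fraction of adult brain size reached at roughly one year of age,
# using only the figures quoted in the posts above.
fraction_of_adult_brain = {
    "macaque (at birth)": 0.70,
    "chimpanzee (1 year)": 0.80,
    "modern human (1 year)": 0.50,
    "H. erectus child (~6-18 months)": 0.84,
}

for label, fraction in fraction_of_adult_brain.items():
    print(f"{label:32} {fraction:.0%}")
# The H. erectus child falls in with the apes, not with modern humans --
# which is why the researchers doubt it had a long, plastic childhood.
```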



"A world without memory is a world of the present," Alan Lightman wrote in Einstein's Dreams. "The past exists only in books, in documents. In order to know himself, each person carries his own Book of Life, which is filled with the history of his life...Without his Book of Life, a person is a snapshot, a two-dimensional image, a ghost."
Most people would probably agree with Lightman. Most people think that our self-knowledge exists only through the memories we have amassed of our selves. Am I a kind person? Am I gloomy? To answer these sorts of questions, most people would think you have to open up some internal Book of Life. And most people, according to new research, are wrong.
Neuroscientists would call Lightman's Book of Life episodic memory. The human brain has a widespread system of neurons that store away explicit memories of events, which we can recall and describe to others. Some forms of amnesia destroy episodic memories, and sometimes even destroy the capacity to form new ones. In 2002, Stan B. Klein of the University of California at Santa Barbara and his colleagues reported a study they made of an amnesiac known as D.B. D.B. was 75 years old when he had a heart attack and lost his pulse. His heart began to beat again after a few minutes, and he left the hospital after a few weeks. But he had suffered brain damage that left him unable to bring to mind anything he had done or experienced before the heart attack. Klein then tested D.B.'s self-knowledge. He gave D.B. a list of 60 traits and asked him whether they applied to him not at all, somewhat, quite a bit, or definitely. Then he gave the same questionnaire to D.B.'s daughter, and asked her to use it to describe her father. D.B.'s choices significantly correlated with his daughter's. D.B.'s Book of Life was locked shut, and yet he still knew himself.
A few other amnesiacs have shown a similar level of self-knowledge, but it's hard to draw too many lessons from them about how normal brains work. So recently Matthew Lieberman of UCLA and his colleagues carried out a brain-scanning study. They wanted to see if they could find different networks in the brain that make self-knowledge possible. They also wanted to see if these networks functioned under different circumstances--for example, when thinking about ourselves in very familiar contexts and unfamiliar ones.
They picked two groups of people to test: soccer players and improv actors. They then came up with a list of words that would apply to each group. (Soccer players: athletic, strong, swift; actors: performer, dramatic, etc.) They also came up with a longer list of words that applied specifically to neither (messy, reliable, etc.). Then they had all the subjects get into an fMRI scanner, look at each word, and decide whether it applied to themselves or not.
The volunteers' brains worked differently in response to different words. Soccer-related words tended to activate a distinctive network in the brains of soccer players, the same one that actor-related words switched on in actors. When they were shown words related to the other group, a different network became active. And, as Lieberman and his colleagues report in an upcoming issue of the Journal of Personality and Social Psychology, it just so happens that they had predicted precisely which two networks would show up in their scans. (Here's the full pdf on Lieberman's web site.)
When people were presented with unfamiliar words, they activated a network Lieberman calls the Reflective system (or C system for short). The Reflective system taps into parts of the brain already known to retrieve episodic memories. It also includes regions that can consciously hold pieces of information in mind. When we are in new circumstances, our sense of our self depends on thinking explicitly about our experiences.
But Lieberman argues that over time, another system takes over. He calls this one the Reflexive system (or X system). This circuit does not include regions involved in episodic memories, such as the hippocampus. Instead, it is an intuition network, tapping into regions that produce quick emotional responses based not on explicit reasoning but on statistical associations. (The picture I show here is a figure from the paper, with the X and C systems mapped out.)
The Reflexive system is slow to form its self-knowledge, because it needs a lot of experiences to form these associations. But it becomes very powerful once it takes shape. A soccer player knows whether he is athletic, strong, or swift without having to open up the Book of Life. He just feels it in his bones. He doesn't feel in his bones whether he is a performer, or dramatic, and so on. Instead, he has to think explicitly about his experiences. Now D.B.'s accurate self-knowledge makes sense. His brain damage wiped out his Reflective system, but not his Reflexive system.
This research is fascinating on its own, and even more so when you think about the evolution of the self. Judging from the behavior of humans and apes, I'd guess that the Reflective system is far more developed in us, while apes may share a pretty well developed Reflexive system. Does that mean that a Reflexive self existed before a Reflective one? Is the self we see in the Book of Life a recent innovation sitting atop an ancient self that we can't put into words? And does that mean that chimpanzees have a Reflexive self? Is that enough of a self to warrant the sort of rights we give to humans because they are aware of themselves?


Our brains are huge, particularly when you take the size of our bodies into consideration. Generally, the proportion of brain to body is pretty tight among mammals. But the human brain is seven times bigger than what you'd predict from the size of our body. Six million years ago, hominid brains were about a third the size they are today, comparable to a chimp's. So what accounts for the big boom? We would be flattering ourselves to say that the cause was something we are proud of--our ability to talk, or our gifts with tools. Certainly, our brains show signs of being adapted for these sorts of things (consider the language gene FOXP2). But those adaptations probably were little more than tinkerings with a brain that was already expanding thanks to other factors. And one of those factors may have been tricking our fellow hominids.
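The "seven times bigger than you'd predict" figure is the kind of number you get from an encephalization quotient, which compares a species' actual brain mass with the mass expected for a typical mammal of its body size. Here is a rough sketch using Jerison's classic formula; the human brain and body masses are approximate textbook values of my own choosing, not figures from the post.

```python
# Rough illustration of an encephalization quotient (EQ), after Jerison:
# a typical mammal's expected brain mass ~ 0.12 * (body mass in grams) ** (2/3).
# The masses below are approximate, illustrative values for a human.

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected_brain_g = 0.12 * body_g ** (2 / 3)  # expected for a typical mammal
    return brain_g / expected_brain_g

print(round(encephalization_quotient(brain_g=1350, body_g=65000), 1))  # ~7.0
```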
In the 1980s, some primatologists noticed that monkeys and apes--unlike other mammals--sometimes deceived members of their own species, in order to trick them out of food or sneak off for some furtive courtships. The primatologists got to thinking that deception involved some pretty sophisticated brain power. A primate needed to understand something about the mental state of other primates and have the ability to predict how a change in that mental state might change the way other primates behaved.
The primatologists then considered the fact that humans aren't the only primates with oversized brains. In fact, monkeys and apes, on average, have brains twice the size you'd predict for mammals of their body size. Chimpanzees and other great apes have particularly big brains, and they seemed to be particularly adept at tricking each other. What's more, primates don't simply have magnified brains. Instead, certain regions of the brain have expanded, such as the neocortex, the outer husk of the brain that handles abstract associations. The neocortex carries out exactly the sort of thinking necessary for tricking your fellow ape.
Taking all this into consideration, the primatologists made a pretty gutsy hypothesis: that the challenges of social life--including deception--actually drive the expansion of the primate brain. Sometimes called the Machiavellian Intelligence hypothesis, it has now been put to its most rigorous test so far, and passed quite well. Richard Byrne and Nadia Corp of the University of St. Andrews in Scotland published a study today in the Proceedings of the Royal Society of London. (The link's not up yet, but here's a New Scientist piece.) They found that in 18 species from all the major branches of primates, the size of the neocortex predicts how much deception the species practices. Bigger brains mean more trickery. They were able to statistically rule out a number of other factors that might have created a link where none existed. And they were able to show that deception is not just a side-effect of having a big brain or something that opportunistically emerges more often in big groups. Deception is probably just a good indicator of something bigger going on here--something psychologists sometimes call "social intelligence." Primates don't just deceive one another; they also cooperate and form alliances and bonds, which they can keep track of for years.
While deception isn't just an opportunistic result of being in big groups, big groups may well be the ultimate source of deception (and by extension big brains). That's the hypothesis of Robin Dunbar of Liverpool, as he detailed last fall in the Annual Review of Anthropology. Deception and other sorts of social intelligence can give a primate a reproductive edge in many different ways. It can trick its way to getting more food, for example; a female chimp can ward off an infanticidal male from her kids with the help of alliances. Certain factors make this social intelligence more demanding. If primates live under threat of a lot of predators, for example, they may get huddled up into big groups. Bigger groups mean more individuals to keep track of, which means more demands on the brain. Which, in turn, may lead to a bigger brain.
If that's true, then the human brain may have begun to emerge as our ancestors huddled in bigger groups. It's possible, for example, that early hominids living as bipeds in patchy forests became easier targets for leopards and other predators. Brain size increased modestly until about two million years ago. It may not have been able to grow any faster because of the diet of early hominids. They probably dined on nuts, fruits, and the occasional bit of meat, like chimpanzees do today. That may not have been enough fuel to support a really big brain; brain tissue is incredibly hungry, demanding 16 times more energy than muscle, pound for pound. It was only after hominids began making butchering tools out of stones and got a steady supply of meat from carcasses that the brain began to expand. And it was probably around this time (between 2 and 1.5 million years ago) that hominids began evolving the extraordinary powers of deception (and other sorts of social intelligence) that humans have. We don't just learn how other people act--we develop a powerful instinct about what's going on in their minds. (I wrote about the neuroscience behind this "mentalizing" last year in an article for Science.)
So next time you get played, temper your anger with a little evolutionary perspective. You've just come face to face with a force at work in our evolution for over 50 million years.
UPDATE 7/3/04: A skeptical reader doubted some of my statements about the brain and the energy it requires. Those who crave more information should check out Northwestern University anthropologist William Leonard's article "Food for Thought" in Scientific American.

In the New York Times this morning, the poet Diane Ackerman has written an essay about the brain, in which she waxes eloquent about its ability to discern patterns in the world. The essay is distilled from her new book, An Alchemy of the Mind, which I've just reviewed for the Washington Post. I didn't much like the book, although it took me a while to figure out what was bothering me about it. If you read the essay, you can get the flavor of the book, not to mention Ackerman's general style in her previous books (which have taken on subjects such as endangered species and the senses). Ackerman has a fondness for sipping tea, tie-dye dresses, and hummingbird feeders, and an even greater fondness for writing about them. I know people who have been put off by her aesthetics, and I find them cloying as well. But that wasn't really at the heart of my dislike of the book. (And besides, my own aesthetics leans towards shark tapeworms and dissected sheep brains, so I'm hardly one to complain about other people.) It took me a few days to realize that the problem with the book was embedded in a deeper problem: how we talk about nature (which includes our own minds).
By we, I don't mean cognitive neuropsychologists or planetologists or molecular ecologists. I mean the rest of us, or the collective us, the ones who consciously or unconsciously create the language, metaphors, and stories that serve as our shared understanding of the world. The words we use, even in passing, to describe genes or brains or evolution can lock us into a view of nature that may be meaningful or misleading. When people say, "Being dull is just written into his DNA," they may only intend a light joke, but the metaphor conjures a false image of how personality emerges from genetics and environment and experience. This figure of speech may seem like nothing more than a figure of speech until people step into the office of a genetics counselor to find out about their unborn child.
The brain suffers from plenty of bad language. In some cases, the language is bad because it's unimaginative. In Alchemy of the Mind, Ackerman points out that calling neurotransmitters and receptors keys and locks does a disservice to their soft, floppy nature. In other cases, though, the language is bad because it's based on gross simplifications of outmoded ideas. Yet it survives, taking on a life of its own separate from the science. My favorite example, which I wrote about last year, is the bogus story you always hear about how we only use ten percent of our brains.
Ackerman indulges in this sort of bad language a lot. One example: she loves referring to our "reptile brain," as if there were a nub of unaltered neurons sitting at the core of our heads driving our basic instincts. The reality of the brain--and of evolution--is far more complex. The brain of the reptilian forerunners of mammals was the scaffolding for a new mammal brain; the old components have been integrated so intimately with our "higher" brain regions that there's no way to distinguish between the two in any fundamental way. Dopamine is an ancient neurotransmitter that provides a sense of anticipation and reward to other animals, including reptiles. But our most sophisticated abilities for learning abstract rules, carried out in our elaborate prefrontal cortex, depend on rewards of dopamine to lay down the proper connections between neurons. There isn't a new brain and an old brain working here--just one system. Yet, despite all this, it remains seductive to use a phrase like "reptile brain." It conjures up lots of meanings. Ackerman floods her book with such language; I grouse about it, along with other bad language, in my review.
Which makes me wonder, as a science writer myself: is all poetry ultimately dangerous? Does scientific understanding inevitably get abandoned as we turn to the juicy figure of speech?
Update: 6/14/04 11 AM: NY Times link fixed


I always like book reviews that combine books that might not at first seem to have that much in common. In the new issue of Natural History, the neuroscientist William Calvin reviews Soul Made Flesh along with The Birth of the Mind, a fascinating book by Gary Marcus of NYU. If you haven't heard of Marcus's new book--which explores how genes produce minds--definitely check it out.


It's strange enough hearing yourself talking on the radio. It's stranger still to see a transcript someone makes of you talking on the radio. Recently I was interviewed about Soul Made Flesh on Australian Broadcasting Corporation's show "All in the Mind." Instead of an audio archive, ABC has posted a transcript of the show. While I can't claim I spoke in perfect paragraphs, we had an interesting talk about how the brain became the center of our existence.


I was asked a couple weeks ago to contribute a piece to a special series of articles in Newsweek about the future of Wi-Fi. I must admit that a fair amount of the stuff that's on the Wi-Fi horizon seems a little banal to me. It's nice to know that I will be able to swallow a camera-pill that will wirelessly send pictures of my bowels to my doctor, but it hardly cries out paradigm shift. On the other hand, I've been deeply intrigued and a little disturbed by the possibility that the next digital device to go Wi-Fi is the human brain. Here's my short essay on the subject.


My book Soul Made Flesh looks at the roots of neuroscience in the 1600s. The first neurologists saw their work as a religious mission; they recognized that it was with the brain that we made moral judgments. In order to finish the book, I looked for living neuroscientists who carry on those early traditions today. I was soon fascinated by the work of Joshua Greene, a philosopher turned neuroscientist at Princeton. Greene is dissecting the ways in which people decide what is right and wrong. To do so, he poses moral dilemmas to them while he scans their brains. I mentioned Greene briefly in Soul Made Flesh and then went into more detail in a profile I wrote recently. Greene and I will join forces tomorrow on the show New York and Company on WNYC, at around 12:30 pm. You can listen to us on the radio or on the web.


This week I am in England to give some talks about Soul Made Flesh, which has just been published here. In addition to talking on the BBC, I'll be talking at Blackwell's in Bristol on Tuesday, and at the Museum of the History of Science at Oxford University on Wednesday. I've posted details and links to even more details on the talks page of my web site.
It's a bit daunting coming here, the very place where much of my book is set. But the response has been kind so far. This morning the eminent historian Lisa Jardine wrote a generally good review in the Sunday Times. Meanwhile, stateside, another Brit (Adam Zeman) wrote a positive review in The New York Times Book Review.
While I'm here, I hope to have a little spare time to blog a bit on some interesting new research (and fisk the latest creationist shenanigans). If logistics get the better of me, I'll definitely get back on track next week when I get home.


In February I wrote an article in Popular Science about a project to implant electrodes in a monkey's brain allowing the monkey to control a robot arm with its mind. The goal of this work is to let paralyzed people operate prosthetic limbs by thought alone. Now the research team has announced another big step in that direction: their first work on humans.
They implanted their electrodes into the brains of people undergoing surgery for Parkinson's disease and tremor disorders, and then had the patients play a video game with a joystick. (In this kind of brain surgery, patients don't get general anesthesia.) After a little gaming, the researchers removed the electrodes and the surgery resumed. The signals the electrodes captured from the brains of patients as they produced action commands proved to be so clear that a computer was able to use them to predict which way the patients had moved the joystick. Now the researchers are applying to the government to do long-term research on electrodes implanted in quadriplegics.
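For readers curious about what that decoding step looks like in general terms, here is a minimal, hypothetical sketch of the approach--learning a mapping from recorded neural activity to joystick direction. The data and the classifier below are stand-ins of my own, not the team's actual pipeline.

```python
# Hypothetical sketch of neural decoding: predict joystick direction from
# electrode recordings. The "recordings" here are synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 200 trials, 32 electrode channels (firing-rate features).
# In a real study these would come from the implanted electrodes.
X = rng.normal(size=(200, 32))
directions = rng.integers(0, 4, size=200)      # 0-3 = left/right/up/down
X[np.arange(200), directions] += 2.0           # give each direction a weak signature

X_train, X_test, y_train, y_test = train_test_split(X, directions, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", decoder.score(X_test, y_test))
```

The real result was that the patients' own signals were clean enough for this kind of prediction to work well after only a few minutes of gaming.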
As is the case with many neuroscientific breakthroughs (memory-boosting drugs for the elderly, sleep-suppressing drugs for narcoleptics), the thorny question arises: should healthy people be allowed (or required for their job) to get an implant? After all, wouldn't you want to run your computer, your car, or your military killer-robot with your mind?


Attention Virginian readers of the Loom: I'll be heading to warmer climes later this week to speak in Charlottesville at the Virginia Festival of the Book. On Thursday at 4 I'll be speaking on a panel about science and society. On Friday at 4 I'll be speaking again on scientific discoveries and how they change us. I'm looking forward to listening to my fellow panelists, who include Robin Marantz Henig and James Shreeve. See you there.


I've posted a new batch of reviews for Soul Made Flesh on my web site. The newest is from Ross King, the author of Brunelleschi's Dome and Michelangelo and the Pope's Ceiling. His review in yesterday's Los Angeles Times is a rare sort--he likes the book (which he calls "thrilling") for what the book really is, rather than as a projection of some phantom in his own mind.
A review of a different sort comes from Simon Conway Morris of the University of Cambridge. Conway Morris is a first-rate paleontologist who has shed a lot of light on how the major groups of animals alive today emerged in the Cambrian Period. In recent years he's also started to nudge some more spiritual notions into public view, suggesting for instance that the evolution of life has displayed a built-in direction towards us, or at least something like us. Conway Morris reviews Soul Made Flesh in the March issue of Bioscience, which is published by the American Institute of Biological Sciences. I can't complain about a review that calls my book "a wonderful read," but on the other hand, I found it odd that Conway Morris criticizes me for concluding that we know something more about how the brain works now than people did in 1600. He seems to think I'm attacking his personal notion of the human soul, when in fact I'm talking about how the seventeenth century notion of the soul was transformed--in part--into an understanding of the brain. Peculiar as it may be, though, the review is well written.


Three weeks ago, I gave a talk at Stanford University about my new book Soul Made Flesh. A wonderful crowd turned out and peppered me with excellent questions afterwards, each of which could have become a new talk of its own. CSPAN was there to film it, and they'll be broadcasting the talk this Saturday, March 20, at 9 am EST on BookTV.
You may want to check out this little RSS feed a commenter forwarded to me that converts the BookTV schedule to any time zone. Also, if you miss the talk, it will probably be repeated on another weekend, so check back at their site.
Here's an added incentive to watch: you don't have to look at the nervous, quavering face of an author, interspersed only by slow pans across a silent, expressionless audience. I brought along a Powerpoint file loaded with gorgeous, bizarre seventeenth century artwork to go along with the story of the search for the soul and the dawn of neurology. Straight from my laptop to your eyes.


When George Bush quietly dismissed two members of his Council on Bioethics on the last Friday in February, he probably assumed the news would get buried under the weekend's distractions. But ten days later, it's still hot--see, for example, two articles in Slate, and an editorial in the Washington Post, as well as Chris Mooney's ongoing coverage at his blog. Bush failed to appreciate just how obvious the politics were behind the move. The two dismissed members (bioethicist William May and biochemist Elizabeth Blackburn) have been critical of the Administration. Their replacements (two political scientists and a surgeon) have spoken out before about abortion and stem cell research, in perfect alignment with the Administration. Bush also failed to appreciate just how exasperated scientists and non-scientists alike are becoming at the way his administration distorts science in the service of politics (see this report from the Union of Concerned Scientists, which came out shortly before the bioethics flap). And finally, Bush failed to appreciate that Blackburn would not discreetly slink away. Instead, she fired off a fierce attack on the council, accusing them of misrepresenting the science behind stem cell research and other hot-button issues in order to hype non-existent dangers.
The chairman of the council, Leon Kass, failed as well when he tried to calm things down last Wednesday. He claimed that the shuffling had nothing to do with politics, and that he knew nothing about the personal views of his new council members. Reporters have pointed out the many opportunities when Kass almost certainly did learn about those views.
But Kass stumbled on another count, one that I think speaks to a profound problem with the council and one that I haven't read much comment on. Kass claimed that Blackburn had to be replaced because the council will now be focusing on neuroscience, rather than reproduction and genetics, Blackburn's areas of expertise. If that's true, then the council is not ready for a shift to the brain. If the Bush administration wanted to beef up the council's neuroscience credentials, surely they would have replaced Blackburn and May with neuroscientists. They did not. In fact, the council as it's now constituted has only one member who does research on neuroscience.
Even more troubling, though, is the indifference the council has shown to what neuroscience tells us about bioethics itself.
Kass has written in the past about how we should base our moral judgments in part on what he calls "the wisdom of repugnance." In other words, the feeling you get in your bones that something is wrong is a reliable guide to what really is wrong. The Council on Bioethics embraces Kass's philosophy. They have declared that happiness exists to let us recognize what is good in life, while real anger and sadness reveal to us what is evil and unjust. "Emotional flourishing of human beings in this world requires that feelings jibe with the truth of things, both as effect and as cause," they write. By extension, repugnance is a good guide for making decisions about bioethics. If cloning gives you the creeps, it's wrong.
But what exactly produces those creeps? In recent years neuroscientists and psychologists have made huge strides in understanding both emotions and moral judgments. They've scanned people's brains as they decide whether things are right or wrong; they've looked at the brain's neurochemistry, and they've gotten insights from the brains of animals and the fossils of ancient hominids as well. And their conclusions seriously undermine the philosophy of the council.
In the April issue of Discover, I have an article about one of the leaders in this new field of "neuro-morality," a philosopher-neuroscientist named Joshua Greene at Princeton University. Greene argues that feeling that something is right or wrong isn't the same as recognizing that two and two make four, or that the sky is blue. It feels the same only because our brains respond to certain situations with emotional reactions that happen so fast we aren't aware of them. We are wired to get angry at deception and cruelty; even the thought of harming another person can trigger intense emotional reactions. These "moral intuitions" are ancient evolutionary adaptations, which exist in simpler versions in our primate relatives.
When our ancestors stood upright and got big brains, Greene argues, these moral intuitions became more elaborate. They probably helped hominids survive, by preventing violence and deception from destroying small bands of hunter-gatherers who depended on each other to find food and raise children. But evolution is not a reliable guide for figuring out how to lead our lives today. Just because moral intuitions may be the product of natural selection doesn't mean they are right or wrong, any more than feathers or tails are right or wrong.
Jonathan Cohen, Greene's coauthor (and boss) at Princeton, was invited to speak to the council at a public meeting in January. He suggested that we need to understand that moral intuitions are not automatically moral truths--particularly when they're applied to complicated ethical quandaries about science and technology that our ancestors never had to confront. It was good of the council to invite Cohen, but judging from their comments after Cohen's talk, the message didn't really take. The wisdom of repugnance still seems to be in charge.
That's too bad, because understanding our moral intuitions is crucial to making sound decisions about cloning, stem cells, giving psychiatric drugs to children, and all the other issues the council is charged to consider. The neurobiology of moral judgments promises to reveal why these issues are such political flashpoints, by showing how each side in these debates becomes utterly convinced that the right choice is as obvious as the color of the sky. There's a biology to bioethics, and the President's council needs to understand it.
UPDATE: Welcome to readers clicking through from the National Review Online link. Ramesh Ponnuru's objections to this post are rather scatter-shot--he suggests that the "wisdom of repugnance" is not the philosophy of the council--or it is in the case of a couple people who don't agree with the Administration on a couple points--or it's not. In the interest of clarity, please note the quotations I offered above on the nature of emotions. These come from "Beyond Therapy," the book-length report published by the council. In these passages and elsewhere, "moral realism," as it's known, is an underlying assumption. This is, of course, a consensus statement and not the opinion of a monolithic entity, but it's the document that people will look at as the council's stance. (It was Blackburn's objections to "Beyond Therapy" that appear to have gotten her in big trouble, judging from her post-dismissal comments.) And if you look over the transcripts of Cohen's presentation, the comments of several members are consistent with a desire to see in Cohen's work the idea that moral intuitions are faithful guides to moral truths.


Over on my web site I've posted an article I've just written for the Sunday Telegraph Magazine in England about an eerie brain disorder called musical hallucinosis. You've probably had a tune stuck in your head for an hour at least once in your life. Now imagine that the tune played all day and night--and imagine that it sounded as real as if a marching band was standing by your window.
Here's how it starts:
Janet Dilbeck clearly remembers the moment the music started. Two years ago she was lying in bed on the California ranch where she and her husband were caretakers. A mild earthquake woke her up. To Californians, a mild earthquake is about as unusual as a hailstorm, so Dilbeck tried to go back to sleep once it ended. But just then she heard a melody playing on an organ, "very loud, but not deafening," as she recalls. Dilbeck recognized the tune, a sad old song called When You and I Were Young, Maggie.
Maggie was her mother's name, and when Dilbeck (now 70) was a girl her father would jokingly play the song on their home organ. Dilbeck is no believer in ghosts, but as she sat up in bed listening to the song, she couldn't help but ask, "Is that you, Daddy?"
She got no answer, but the song went on, clear and loud. It began again from the beginning, and continued to repeat itself for hours. "I thought, this is too strange," Dilbeck says. She tried to get back to sleep, but thanks to the music she could only doze off and on. When she got up at dawn, the song continued. In the months to come, Dilbeck would hear other songs. She heard merry-go-round calliopes and Silent Night. For a few weeks, it was The Star-Spangled Banner.
Go here to read the rest.
Brain disorders always grab our attention because they have the power to warp the fabric of reality, or at least our experience of it. But they can tell us even deeper things about ourselves--specifically, how the human mind was assembled over the course of evolution. Autistic people, for example, lack what psychologists call a theory of mind--an intuition of what other people are thinking. In an article I wrote last year for Science, I detailed research that shows how the evolution of a theory of mind was key to the rise of social intelligence in humans, perhaps even making language possible.
Musical hallucinations may offer some clues to another important feature of human evolution: our capacity for music. Like language, music in humans has no real counterpart elsewhere in the animal kingdom. Birds and whales sing, but their songs have little of the flexibility and creativity that marks human music. And music, researchers are finding, is processed by a complicated network of regions in the brain. Musical hallucinations may emerge when that network is cut off from the outside world by deafness, and it seizes on stray impulses in the brain, cranking them up into the perception of real tunes. But how did this special faculty for music evolve? Scientists I've spoken to don't think there's a good explanation out there yet. When a good explanation does come along, it will have to account for music either as an adaptation in itself or as a byproduct of other adaptations--or some combination of both. The building blocks of music seem to be nested within our ability to understand language and other complex noises--detecting pitch, tempo, and so on. One could argue that proto-music gave our ancestors some reproductive advantage. Perhaps songs gave bands of hominids a powerful sense of solidarity. Or perhaps it is nothing but a fortunate fluke, its pleasure deriving from reward networks that evolved for other functions long before anyone hummed a tune.
Update: 3/9/04 1:20 Theory of mind link fixed


If you want to hear about brain science at its birth and today, check out the public radio show Tech Nation, this week. In the first half of the show, I'll be talking about Soul Made Flesh. In the second half, Steven Johnson will be talking about his excellent new book, Mind Wide Open. You can find out where and when you can listen to the show at the program's web site, or listen to it on their site archive.
(A note to subscribers: sorry for the mysterious email address that appeared on your notification. I have yet to fully master the mysteries of Movable Type.)


If you live in the Bay Area, please join me noon on Monday, February 23, at Stanford University for a talk about Soul Made Flesh. (Here are all the details.)
The talk is sponsored by the Stanford University Center for Biomedical Ethics and the Stanford Brain Research Institute. It's gratifying that such great organizations dedicated to twenty-first-century neuroscience are interested in the adventures of a motley crew of seventeenth-century alchemists and natural philosophers.
The talk is free and open to the public. And if that's not incentive enough, CSPAN will be there to film the talk for BookTV. If you ever wanted to be on television asking a question about the soul with a boom mike dangling over your head, now's your chance. (When BookTV decides on the broadcast date, I'll post it here and on my events page.)


I'll be on Fresh Air with Terry Gross today, talking about Soul Made Flesh. If you miss it today, it will be archived at the show's web site.


If you're in New York, you've got two chances on Tuesday January 27 to hear me talk about Soul Made Flesh. At 5:30 I'll be giving a talk in the "Mind Over Body" lecture series at the New York Public Library's Science and Industry Branch at 188 Madison Ave. I'll then be heading to the East Village to talk in the more intimate setting of KGB (85 E. 4th St.) at 7:30. Both events are free.


At noon today in New York I'll be at the Makor Center of the 92nd St. Y at 35 W. 67 St. to talk about Soul Made Flesh.


Today I'll be talking for an hour about Soul Made Flesh on Minnesota public radio. You can listen to the broadcast live online at 11 am EST (the show will be archived). At 2 pm EST, you can listen online again when I talk on the Glenn Mitchell show on Dallas public radio.
Some thoughts on the intersection of evolution and global warming coming this afternoon. In the meantime, check out Pharyngula's check-box comparison of the similarities between Soul Made Flesh and Quicksilver. Damn, why did I leave out those pirate neurologists...?


By sheer coincidence (or some journalistic twist of fate) two magazine articles of mine are coming out this week, and they just so happen to make a nice neurological pairing.
In Science, I've written an essay about what seventeenth-century natural philosophers have to teach twenty-first century neuroscientists about the brain. In the February issue of Popular Science, my cover story looks at the latest work on brain-machine interfaces that will let people control machines with thought alone. Inevitably, the Pop Sci piece can only focus on a time scale of a few years. But the latest brain-machine interfaces seem to me to be the ultimate incarnation of the dreams of the scientific revolution.
Before the 1600s, the world was filled with souls and soul-like forces. In addition to the immortal human soul, there were souls in our organs, in plants, in stars. Water rose in a straw because it abhorred a vacuum and sought to fill it. In the 1600s, natural philosophers began to dismantle these souls. Galileo busted up the old Aristotelian physics. Descartes offered up the body as an earthen machine. Robert Boyle saw matter as corpuscles--what we call molecules and atoms--colliding and reacting without any purpose driving them from within. Many of these natural philosophers believed that it was essential to take the soul out of nature in order to save Christianity from pagan alternatives. But they also believed that doing so would let mankind master nature. If nature was made up of blind matter that was obedient to God's laws, then unlocking those laws through observation and experiment would turn the world into a scientific paradise of riches and health.
This philosophy had one particularly troubling aspect: how did the human mind fit into the world? Was it also just matter in motion? Thomas Hobbes was happy to say it was. Others didn't want to be mistaken for atheists. Boyle's friend Thomas Willis used the principles of the scientific revolution to get the first good understanding of the brain, which he envisioned as a chemical engine of memory, perception, and emotions.
Today this approach to nature has given rise to, among other things, brain-machine interfaces. If, as promised, they someday give paralyzed people some measure of control, they will be yet another example of promoting health through the mastery of nature. But the remarkable thing is what is being mastered here. As one of the bioengineers I spoke to pointed out, he and his colleagues don't see the brain as some mysterious organ, but as a very complicated digital device that is sending out a series of 1s and 0s. By reading the code, they can do something with it. The brain itself--complete with its intentions and plans--has become yet another natural thing to be harnessed. In my opinion, this is both thrilling and terrifying.
I've posted the text and the pdf version of my Science essay on my web site. The table of contents for the February issue of Popular Science is online, but they haven't posted the articles yet. When I get some time, I'll put the text on my site and update this post with the link.


I can already see the grim look many Americans will have as they chew on their Christmas roast tomorrow. They'll be thinking about yesterday's report that a cow in Washington state tested positive for mad cow disease. There's some comfort in knowing that so far it's just a single cow, and that American cattle are regularly screened for bovine spongiform encephalopathy. The grimmest look this Christmas may be on the faces of McDonald's shareholders and cattle ranchers. A single Canadian cow that tested positive wreaked havoc on the entire beef industry up north. But this Christmas also brings a fascinating discovery about the bizarre agents that cause disorders such as mad cow disease: they may actually record our memories.
The work comes from the lab of Eric Kandel, the Columbia University neuroscientist who won the 2000 Nobel Prize for medicine. Kandel got the prize for figuring out some of the molecular underpinnings of memories. Each neuron has one set of branches that sends outgoing signals and another set that receives incoming ones. These signals can only jump from one neuron to the next if an outgoing branch nuzzles up to an incoming one, creating a junction called a synapse. Kandel studied how the neurons in a sea slug change as memories are laid down. (These are obviously not memories of the Proustian sort--just simple associations, such as the memory of a shock coming after the flash of a light.) He showed that new synapses are created and other ones grow stronger as memories form. Kandel also identified a number of the molecules that seem to be responsible for strengthening these connections. (His Nobel prize lecture makes for good reading.)
Kandel did not rest on his laurels, but immediately tackled some of the big questions about memory that he and other neuroscientists had yet to figure out. A neuron may have tens of thousands of synapses, but only a few of them may change as a memory forms. Yet the instructions to make proteins that cause this change come from a neuron's single bundle of DNA. If the nucleus gets a signal to form new synapse-strengthening proteins, how do the proteins go only to the right synapses? And, even more importantly, how do those synapses stay strong for decades, when proteins themselves live only a short period of time?
Kandel and his coworkers reasoned that a memory-forming synapse must get some sort of "synaptic mark" that tagged it for synapse-strengthening proteins. They then looked for molecules that might be responsible for the mark. As they report in the December 26 issue of Cell, they have discovered what may well be the synaptic mark in a compound called cytoplasmic polyadenylation element binding protein (CPEB for short). CPEB can be found in cells throughout the body, but they found a special form of it in the neurons of sea slugs, and then later found it in fruit flies and mammals. They found that CPEB is synthesized during the earliest stages of memory formation, and probably drives the production of molecules that physically lay down new synapses, telling them where to grow. Evidence suggests that the protein can do this by "waking up" dormant RNA molecules in the synapse. (RNA is the messenger molecule that carries copies of genetic information to the protein-building factories of the cell.)
To understand how CPEB could do all this, the researchers looked closely at its structure. That's when they had a shock: CPEB has much the same structure as the agent that causes mad cow disease.
Mad cow disease is infectious, but it's caused not by a virus or a bacterium. Instead, it's caused by a rogue protein called a prion. The normal version of the protein (called PrP) may do a number of jobs in the body, and seems particularly important in the brain. But sometimes a PrP gets a funny kink in it and folds into a new shape. This new prion then bumps into a normal PrP and forces the normal copy to take on its own strange shape. The prions clump together and force others to join them in Borg-like fashion. Mad cow disease can spread if cows eat feed that has been supplemented with the remains of other cows--in particular, if the feed contains prions. Humans eating those sick cows can take in the prions as well and get a fatal brain disease of their own called Creutzfeldt-Jakob disease.
Prions were the object of scorn and skepticism for years, in part because they were so different as pathogens from viruses or bacteria. Prions had no genetic material, and yet they spread like genetically-based pathogens. Eventually the evidence became too much to ignore (and also won Stanley Prusiner of the University of California at San Francisco a Nobel of his own). But prions were revolutionary in another way that most people don't know about: they enjoy a unique kind of evolution.
In the early 1990s scientists realized that yeast contain prions. These aren't mutant PrPs, however, but two completely different proteins that just so happen to have the ability to change shape and force other proteins to clump with them. Unlike mad cow prions, yeast prions don't necessarily harm their hosts--in fact, they actually make yeasts thrive better than without them. And since yeasts are single-celled, they can pass down their prions to their offspring. (A prion in your brain won't get down to your sperm or eggs, so you can't infect your kids.)
In other words, a yeast can inherit prions from its parents, despite the fact that it has inherited no prion gene. This non-DNA based inheritance is a lot more like what Lamarck was talking about than Darwin.
Kandel and his Columbia team joined forces with an expert on prions in yeast, Susan Lindquist of MIT. Together, they inserted copies of the gene for the synaptic mark CPEB into yeast so that they could experiment on them and see whether they were in fact prions. They found that indeed, CPEB can exist in two different states. In one, the protein roams the cell alone. In the other, it forces other CPEB to change shape and form clumps with it. They also found that only when it takes on its prion form can CPEB bind to RNA.
The researchers propose a simple but elegant hypothesis for how prions can build memories. They suggest that certain signals entering a synapse can trigger CPEB to become a prion. As a prion, it can wake up sleeping RNA in the synapse, creating proteins for strengthening it. It also keeps grabbing other CPEB molecules and turning them into prions as well, so that even after the original prion has fallen apart, others continue to do the job. The neverending power of prions, in other words, is what keeps our memories alive.
In a commentary in the same issue of Cell, Robert Darnell of Rockefeller University says that if this work holds up to scrutiny (if it's replicated in neurons rather than yeast, for one thing), it will prove "nothing less than extraordinary." It would be extraordinary enough if memory proved to be based on prions, but the finding--along with the earlier work on yeast--raises the possibility that prions actually do a lot of important things in our bodies, and that we cannot understand them unless we are willing to let go of our vision of life as nothing but genes creating proteins. That may not make this Christmas's roast any tastier, but it should help revive the low reputation of prions.


I will never figure out the publishing world. My new book, Soul Made Flesh, officially publishes on January 6, 2004. But Amazon and Powell's both say they've got it now and can get it to customers in 1-2 days. I guess time isn't what it used to be.
I have put some early reviews on my web site. Booklist: "Remarkable." Kirkus Reviews: "Absorbing and thought-provoking." Publishers Weekly: "Illuminating."
Reminder: seven days left till Christmas.


Here's a new development in the search I described last week for the genes that make us uniquely human. Science's Michael Balter reports on a new study about a gene that's crucial for making big brains. Mutant versions of the gene produce people with tiny brains--about the size that Lucy had 3.5 million years ago. Comparisons of the human version of the gene with those of other mammals show that it has undergone intense natural selection in our own lineage.
Size is far from everything, however. While humans have huge brains compared to other mammals, new kinds of wiring may have been more important in the transformation of the hominid brain into something that could be called truly human.


Darwin's spirit lives on in everything from the Human Genome Project to medicine to conservation biology--the three topics I covered in my post on Friday. It also lives on in brain scans.
While Darwin is best known for The Origin of Species, he also wrote a lot of books in later years, most of which explored some aspect of nature that, he showed, revealed the workings of evolution. His examples ranged from orchids to peacock tails. In his 1872 book, The Expression of the Emotions in Man and Animals, he proposed that the expressions we humans use--our smiles, our frowns, and so on--are part of a heritage that dates back millions of years, a heritage shared by other mammals. He pointed out that our faces and bodies are partly controlled by reflex responses, driven by nerves that are similar in man and beast. (Darwin here was building on the work of Thomas Willis, a personal hero of mine and the chief subject of my book Soul Made Flesh.) Darwin explored the baring of teeth, the widening of the eyes, and other expressions made by people as well as cats, dogs, and other animals. Evolution, in other words, was as plain as the face in the mirror.
Expressions convey information, and the human brain is exquisitely sensitive to them. In recent years, neuroscientists have begun to map out the circuits dedicated to this information processing. The sight of a face, for example, activates a tiny region on the brain's underside called the fusiform face area. Another region high on the side of the brain, the superior temporal sulcus, is sensitive to moving lips and eyes. And most interestingly of all, an almond-shaped clump of neurons buried deep within the brain, called the amygdala, becomes active at the sight of emotions on faces--particularly the scowls of anger and bared teeth of fear.
The amygdala is an extraordinary bit of brain, shared by humans, monkeys, rats, fish, and all other vertebrates. In the 1970s, neuroscientists first recognized it as Fear Central. A rat sees a light go on and gets a shock. It sees the light go on again and gets another shock. After a while, the rat becomes terrified at the sight of the light. But if you take out the rat's amygdala, it never makes this fearful connection. Not all fears are learned, though. Monkeys, for example, are scared at the sight of snakes (including fake ones) even when they haven't seen one before. Monkeys without amygdalas will run up to a fake snake and play with it. Later research showed that it becomes active in human brains at fearful sights, such as a pointed gun. (A good place to learn about the amygdala is Joseph LeDoux's web site at NYU.)
When incoming signals activate the amygdala, it sends out signals to other regions of the brain. Hormones start to race through the body, and the brain becomes more alert to sights and sounds. It can even control the brain and body without our awareness. A quick flash of a scary sight is enough to activate it, even if it is too quick to register in the conscious brain. Scientists now believe that the amygdala gets a brief early edition of the news coming into the brain before the same information gets processed through higher regions in the cortex.
The amygdala lets us take quick actions that make the difference between life and death. It also lets us recognize other important signs of danger, such as angry or frightened faces on other people. These faces may be reacting to a threat (a lion bursting out of the brush), or they may belong to a person with malice on his mind. In this work the amygdala joins forces with the fusiform face area, which responds more strongly when the amygdala responds strongly.
But Darwin had more than faces in mind when he first explored this evolutionary heritage. He pointed out that the body has its own expressions of fear, joy, and anger. They often come together in reflexive packages--think of the hair rising up on a cat's back as it bares its teeth. Given what we know now about facial expressions in the brain, another possibility naturally emerges: if the brain has dedicated circuits sensitive to information in the face, perhaps there are also circuits dedicated to picking up emotional information in the body.
In today's issue of the journal Current Biology, two Harvard neuroscientists put this idea to the test. They placed people in an MRI scanner and then showed them a series of pictures. Some of the pictures were of people in fearful positions, with their faces blurred out (like the one at the top of this post). In other pictures, people raised their arms in meaningless ways. The neuroscientists then compared the responses of their subjects' brains to both groups of images and found that certain regions were indeed sensitive only to the sight of fearful bodies. Remarkably, they were the amygdala and the fusiform face area, the same regions that are sensitive to fearful faces. The scientists conclude that these regions work together to get as much information as they can about emotions such as fear, whether in the geometry of widened eyes or cowering shoulders. It's an example--one of many these days--of how Darwin's musings are being mapped onto the anatomy of the thinking brain.


Two decades ago, a neuroscientist named Benjamin Libet published a classic experiment on conscious will. He had his subjects rest a finger on a button as they stared at a specially designed clock. It had only one hand, which swept through a revolution once every 2.5 seconds. Libet would ask his subjects to push the button at their own choosing. In some runs, he asked them to note the position of the clock hand when they actually pushed the button. In other runs they had to note its position when they first began to think about pushing it.
Libet measured the brain activity of his subjects with EEG, and also attached electrodes to their hands to monitor their muscle activity. His subjects turned out to be good at timing the moment when they pushed the button, with an accuracy within just a few milliseconds. But they were not so good with their own intentions.
Near the top of the brain there's a region known as the motor area where neurons fire to make the body move in particular ways. Libet found that EEG recordings from the motor area in his subjects' brains began to shift into a new pattern 1.5 seconds before the subjects pressed the button. Libet interpreted this as the mental preparation that goes into initiating an action. But his subjects consistently claimed that they began thinking about moving their hand about half a second after the EEG recordings began to change. In other words, they had already started preparing to make a voluntary movement for half a second before they felt like they were making a voluntary movement.
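If it helps to see the timing laid out explicitly, here's a minimal sketch of the arithmetic--the numbers are just the approximate figures quoted above, not data from Libet's actual recordings:

```python
# A minimal sketch of the timing relationships in a Libet-style trial.
# The values are the rough numbers quoted above, not Libet's recorded data.

BUTTON_PRESS = 0.0          # reference point: the moment the button is pressed (seconds)
EEG_SHIFT = -1.5            # motor-area EEG begins to change ~1.5 s before the press
REPORTED_INTENTION = -1.0   # subjects report the urge ~0.5 s after the EEG shift begins

def describe_trial():
    gap = REPORTED_INTENTION - EEG_SHIFT
    print(f"EEG readiness activity starts {abs(EEG_SHIFT):.1f} s before the press.")
    print(f"The reported intention comes {abs(REPORTED_INTENTION):.1f} s before the press,")
    print(f"i.e. {gap:.1f} s *after* the brain has already begun preparing the movement.")

if __name__ == "__main__":
    describe_trial()
```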
A lot of scientists have questioned different parts of Libet's experiments over the years, but the results have held up pretty well. It seems that we only become conscious of our will after we begin to do something.
This week a team of European neuroscientists published a fitting tribute to Libet for the twentieth anniversary of his experiment. They ran Libet's experiment again, but some of the people they chose as their subjects had damage to certain parts of the brain. As they report in Nature Neuroscience, some kinds of brain damage make no difference to people's performance. But something fascinating happened to people who suffered damage to the parietal cortex, located at the back of the head. Like the healthy controls, they could nail the moment they actually pressed the button, to within a few milliseconds. But they also noted that they intended to press the button just around the time they actually did press the button. In other words, they were completely unconscious of their action until the action was already taking place.
What's most fascinating about this experiment is that the subjects with a damaged parietal cortex do not act all that strangely. They are perfectly able to carry out their will--to press buttons when they want to, to say what they want to say, to walk down the street where they want to go. Their very ordinariness offers a hint about the nature of both consciousness and will.
The evidence from many different studies suggests that intentions, plans, and similar thoughts are born within the prefrontal cortex at the front of the brain. These prefrontal neurons send out branches into a number of other regions of the brain, where models of intentions can be created. These models create predictions--if I do this, I should expect this sensory feedback. If I don't get that feedback, I've screwed up.
Some of these models may be completely unconscious--they may offer quick checks between what we expect to feel when we do something and what we really feel. There's also a model in the parietal cortex, but the authors of the new study suggest it does something different. It works at a higher level, seeing whether our actions match up with their desired goals. Normally the intentions formed in the prefrontal cortex trigger a model in the parietal cortex as the brain prepares the action in the motor area. Our conscious experience usually depends on the model, and not the intention itself.
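To make the comparator idea a bit more concrete, here's a toy sketch of a forward model of the sort described above. The action names, feedback values, and tolerance are invented for illustration, not taken from the study:

```python
# Toy sketch of a forward-model comparator: an intention generates a
# prediction of the sensory feedback it should produce, and a monitor
# flags a mismatch. Names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class Intention:
    action: str
    predicted_feedback: float  # e.g. expected fingertip pressure, arbitrary units

def monitor(intention: Intention, actual_feedback: float, tolerance: float = 0.2) -> str:
    """Compare predicted and actual feedback; report whether the action went as planned."""
    error = abs(intention.predicted_feedback - actual_feedback)
    if error <= tolerance:
        return f"'{intention.action}' matched its prediction (error {error:.2f})."
    return f"'{intention.action}' missed its prediction (error {error:.2f}): adjust or abort."

press = Intention(action="press button", predicted_feedback=1.0)
print(monitor(press, actual_feedback=0.95))  # within tolerance
print(monitor(press, actual_feedback=0.40))  # mismatch flagged
```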


It's too early yet for reviews of Soul Made Flesh to start rolling in (it pubs in January 2004), so I'm still in an anxious state. But this is promising: The Daily Telegraph in London asked several leading writers to name the favorite book they read in 2003. Yesterday it printed the results. Steven Pinker chose Soul Made Flesh.
He writes: "Today the idea that every aspect of human experience consists of activity in the brain is second nature to some people, and an 'astonishing hypothesis'--or even sacrilege--to others. But few are aware of the ancestry of this idea. Soul Made Flesh tells the fascinating story of how people first became aware of one of the most radical thoughts the human mind has ever had to think."
(Pinker has not invented time-travel. He managed to read a 2004 book in 2003 because my publisher sent him the bound galleys this fall. Glad they did.)
I won't overload this space with reviews when they do finally come; instead, I'll be posting them here.


For everyone interested in how their brain works, I'd suggest checking out a book coming out soon called Picturing Personhood, by MIT anthropologist Joseph Dumit. Dumit shows how easy it is for brain scans to become cultural Rorschach tests. Scans of mental activity, such as fMRI or PET, are basically complex graphs that represent the relationships of data gathered in very narrowly defined experiments and which are then statistically massaged with special-purpose software. But for most of us non-scientists (and even some scientists) it's easy to look at these images as objective snapshots of thought. As Dumit points out, it is even easier for us to impose what we want to believe about human nature on those pictures, getting a comforting feeling of certainty from our misconceptions about how neuroimaging really works.
This neuro-Rorschach effect is only going to become more common. That's because neuroscientists are using their scanners to probe the social brain. Most people may not have a lot of preconceptions about how the cerebellum influences motor control, but when you get into the way we feel about one another, everybody's got an opinion. A case in point can be found in the new issue of Nature Neuroscience: Dartmouth scientists looked at how racism affects our ability to think clearly.
This research builds on several years of work by social psychologists. They have found evidence of subtle effects of hidden racial biases with an experiment they call the Implicit Association Test. The test is remarkably simple. You look at words and names flashing on a computer screen. In one version of the test, if you see a word, you press the left button if it's a positive word (beauty, for example), and if it's negative, you press the right one (filth). If a name appears, you press the left button if it sounds like a white name to you, and the right button if it sounds like a black name. In other words, you're pressing the same button for positive words and white names, and the same button for negative words and black names. The researchers then flip the test around, so that black names and positive words share one button, and white names and negative words share the other. The subjects have to hit the buttons as fast as they can, and the researchers then measure how long it takes for people to press a button.
Tests like these reveal some surprising patterns. Some people show consistent differences in their button speed, depending on how the test is conducted. Some white people, for example, take longer in the black/positive-white/negative version than in the white/positive-black/negative version. Often, these people will claim not to be racist, and yet they show clear (but subtle) biases in the way they take the test.
You can take an online version of the test here.
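For the curious, here's a rough sketch of how a lag like this might be quantified--just a difference of average response times between the two pairings. The reaction times below are made up, and real IAT scoring involves more careful filtering and normalization than this:

```python
# Rough sketch of quantifying an IAT-style lag: compare mean response
# times between the two pairing conditions. The reaction times below are
# invented; real IAT scoring involves more filtering and normalization.

from statistics import mean

# Milliseconds per trial, for one hypothetical subject
white_positive_black_negative = [620, 650, 605, 640, 630]
black_positive_white_negative = [720, 760, 705, 745, 730]

lag_ms = mean(black_positive_white_negative) - mean(white_positive_black_negative)
print(f"Average lag: {lag_ms:.0f} ms slower in the black/positive pairing.")
```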
Neuroscientists saw in this work an opportunity for an elegant use of their scanners. In 2000, Elizabeth Phelps of NYU and her colleagues decided to look for a signature in fMRI scans of these results on the IAT. They showed white people unfamiliar faces of whites and blacks. Whites who showed a big lag on the IAT tended to respond to unfamiliar black faces with higher activity in a region of the brain called the amygdala. This region is associated with fear and vigilance in uncertain situations. The smaller the IAT lag, the smaller the difference in the response from the amygdala. And this pattern disappeared altogether when white people saw faces of black people who were either friends or celebrities--i.e., familiar.
Now Dartmouth scientists have found another way that racism may affect the brain. The prefrontal cortex--the front third or so of the surface of the brain--manages the information processing that goes on in the brain. For one thing, it keeps us focused on mental tasks despite all the internal and external distractions we face. But many researchers have argued that this "cognitive control" isn't in infinite supply. Doing some intense cognitive control for one task may leave a person with less control over some other one. Racism, the Dartmouth team has concluded, can drain our cognitive control.
The Dartmouth team first gave a group of white Dartmouth students the IAT test to measure hidden race bias. Then they showed the subjects pictures of black people as they scanned their brains with fMRI. While Phelps had focused her scanning on the amygdala, the Dartmouth team paid close attention to the prefrontal cortex. They found that people who showed strong racial bias had a strong response to the pictures in a patch of the brain just above the temples--the dorsolateral prefrontal cortex. Many researchers have shown that this region is crucial for cognitive control. What's most impressive about this result was that the Dartmouth team was able to draw a graph of the relationship--the stronger the bias, the stronger the cortex response.
The Dartmouth team also ran another test. A black experimenter interviewed the subjects on some hot-button issues, such as racial profiling. The researchers hypothesized that the interview would demand a lot of cognitive control from people with strong racial bias. Immediately after the interview, the subjects had to play a game that also demands a lot of cognitive control. The game is known as a Stroop test. You see words that are printed in different colors, and you have to identify the color. The tricky part comes when the word is GREEN and the ink is red. Now the dorsolateral prefrontal cortex has to override the automatic urge to read the word and instead pick the color, red. As they had predicted, the interview made people with a strong racial bias do worse on the Stroop test. And once again, their results form an impressive trend from low bias to high bias.
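If you're wondering what a Stroop trial actually consists of, here's a toy sketch that generates congruent and incongruent word/ink pairs--the color list and structure are illustrative only, not the setup used in the Dartmouth study:

```python
# Toy sketch of Stroop-style trials: the word names one color, the ink is
# another, and "incongruent" trials are the ones where the two conflict.
# Colors and structure are illustrative only.

import random

COLORS = ["red", "green", "blue"]

def make_trial():
    word = random.choice(COLORS)
    ink = random.choice(COLORS)
    return {"word": word, "ink": ink, "congruent": word == ink}

for t in (make_trial() for _ in range(10)):
    kind = "congruent" if t["congruent"] else "incongruent"
    print(f"Word '{t['word'].upper()}' printed in {t['ink']} ink -> name the ink ({kind}).")
```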
So, what does this all mean? Here's one take, from the Boston Globe: "To the litany of arguments against prejudice, scientists are now adding a new one: Racism can make you stupid." In other words, biology supports a political stance. But you could use the same sort of argument to make a very different (and very loathsome) argument: it's not racism that makes you stupid, but the social pressure to keep your racism bottled up. If a racist person was comfortable with his racism, then presumably he wouldn't have to use up his cognitive control suppressing stereotypes. If you want to elevate a brain scan to social commentary, the meaning of the scan itself becomes slippery. Is the scan I've pasted above your brain on racism, or liberal guilt?
The very notion that racism can leave a biological signature can mean a lot of things, depending on your preconceptions. Is it therefore a pathology? Race bias is unconscious, after all--even people who say they're not racist can show these consistent lags on the IAT. Does that exonerate people who commit hate crimes--it's just how their brains work? Or can we just all retrain our brains to bring an end to hatred? The Southern Poverty Law Center certainly seems to think so.
Before we all start making grand statements about social policy, it's necessary to take the Dartmouth study with a few grains of salt. The Globe's use of the word "stupid" is actually way over the top. Taking an extra tenth of a second to answer a Stroop test question doesn't translate into imbecility. And some neuroscientists from the University of Michigan and Temple University wrote a commentary that will be appearing in the same issue of Nature Neuroscience, and they point out that even the IAT itself may not mean what some researchers claim for it. The black/positive lag may, for example, represent not a personal bias, but a knowledge of widespread stereotypes. And there has been precious little work done on the responses (both on the IAT and in the scanner) of black people.
Still, the results are there. The lags are real, the trends in the brain scans relatively clear-cut. Something is going on. Perhaps before too long, we'll find out what.


The case of Terri Schiavo has moved back into the Bleak House realm of endless trips to the courthouse. As I mentioned in an earlier post, Schiavo lost consciousness thirteen years ago, and her husband has been trying for the past few years to have her feeding tube removed over the objections of her parents. The Florida legislature recently passed a law that gave Governor Jeb Bush the authority to order Schiavo's tube put back in, and now her husband is going to court to challenge the constitutionality of the law.
The frenzy of editorials and TV interviews is beginning to subside, but only a little. Nat Hentoff at the Village Voice has promised a whole series of columns on the subject. In principle, this is a good thing. People with impaired consciousness are for the most part in society's blind spot. We just don't talk about them or their treatment unless some news forces their existence to our attention. Unfortunately, a lot of the stuff I've read has been pretty loud and strident. And even when writers seem to be offering a measured take on the subject, they frequently rely on bait-and-switch rhetoric.
Case in point is "A 'Painless' Death?"--an essay posted today on the Weekly Standard's web site by attorney Wesley J. Smith. Smith takes issue with the claim that removing a feeding tube leads to a painless and gentle death by dehydration and starvation. Smith claims that the pro-removal side blurs the difference between removing food and water from cognitively disabled people and conscious people refusing food and water as they die of cancer or some other disease. It's a fair point to consider, but Smith does plenty of blurring in his own essay.
For example, he quotes a neurologist opposed to removing feeding tubes who says that "A conscious [cognitively disabled] person would feel it just as you or I would." The neurologist then offers a gruesome picture of cracked lips and vomiting and seizures. Smith then quotes an advocate of removing tubes who offers even more details--mottled hands and feet, blood shunted to the core of the body.
All shocking stuff, of course, but if someone is not aware of pain, what does it really mean? Is Schiavo "a conscious [cognitively disabled] person" (whatever that creative bit of bracketing means)? Or is she in a persistent vegetative state with no hope of awareness?
To answer that question, Smith offers up the testimony of Kate Adamson, a woman whose feeding tube was removed even though she was fully conscious. Adamson tells of the agony she experienced going without food and water for days. She describes obsessively visualizing a bottle of Gatorade. Smith mentions briefly that Adamson was in what's known as a "locked-in state," but he then glides past the fact that biologically speaking, the brain of a locked-in patient is profoundly different from that of someone in a persistent vegetative state.
In a locked-in state, a person's cerebral cortex--the region where we do most of our complex thinking--has escaped major injury. Instead, the damage is restricted to nerve fibers that send commands from the brain to the body. It's like spinal cord paralysis in your brain, and it leads to a nearly total lack of movement. As I described in my recent article for the New York Times Magazine, brain scans of people in persistent vegetative states show--at most--that only isolated fragments of the cortex are active. But the overwhelming majority of the cortex uses as much energy as the cortex of a healthy person under deep anesthesia.
It's certainly true--and awful--that some locked-in patients have been misdiagnosed in the past, although now that the condition is better known, any good neurologist can quickly make the right diagnosis. But to dwell on Adamson's case in the middle of an essay that's supposedly about Terri Schiavo is a case of apples and oranges. And the emotional whiplash of Adamson's experience distracts the reader from the non sequitur of Smith's argument. To those who would question bringing up locked-in patients in a discussion of the vegetative state, Smith answers that "the PVS diagnosis is often mistaken--as indeed it was in Adamson's case." Apple, have you met Orange?
Smith offers up a few vague scraps of evidence that Schiavo can feel pain, but they don't amount to much. I have yet to hear of a real neurologist who has concluded that she is in anything other than a vegetative state. And yet Smith somehow thinks he has fused Schiavo's condition with Adamson's so conclusively that he can end his essay as follows:
"The time has come to face the gut wrenching possibility that conscious cognitively disabled people whose feeding tubes are removed--as opposed to patients who are actively dying and choose to stop eating--may die agonizing deaths. This, of course, has tremendous relevance in the Terri Schiavo case and many others like it. Indeed, the last thing anyone wants is for people to die slowly and agonizingly of thirst, desperately craving a refreshing drink of orange Gatorade they know will never come."
Instead of bringing clarity to the issue, Smith has succeeded in muddling it even more.
Here are two important issues that I think need unmuddling. One is biological, the other ethical.
First, the biological. A person in a vegetative state does not visualize bottles of Gatorade. That would call for activity in the cerebral cortex that's just not there. As for pain, much of our experience of it relies on the cerebral cortex. It makes us aware of the pain, generates fears of future pain, and so on. With a cortex as inactive as one under deep anesthesia, it's hard to argue that such people experience "agonizing deaths." Their fully conscious loved ones may suffer at the sight of a seizure or mottled skin, but they can't project that suffering into the mind of the patient. Furthermore, in the 2002 book The Vegetative State, the leading neurosurgeon Bryan Jennett writes that the medical consensus is that soon after withdrawal of a feeding tube, the body's own pain-killing opioids flood the nervous system.
Second, the ethical. Smith mentions the fact that people sometimes refuse food and water in the final stages of terminal illness, but then moves quickly away from this fact. How is it that people are allowed to refuse medical treatment and even food and water from doctors, despite being fully conscious of their own suffering? Because they have a legal right. They also have a right to make a living will to make their desires clear if they wind up in a state in which they cannot make the decision about whether to refuse care. And when there's no living will, it's up to a legal guardian to make that decision. Despite the fact that he's a lawyer, Smith avoids all these legal realities. That would distract from the specter he raises of homicidal monsters trying to yank out whatever feeding tube they can find. The fact is that if a guardian decides that a patient would want to live in a vegetative state, that's that. (People have raised questions about Terri Schiavo's husband's fitness as a guardian, which I have no way to judge. But that's a separate issue--if he's unfit, someone else can make the decision; if he's fit, then it's his call.)
PS--Brains and evolution are intertwined interests of mine, and so I raised an eyebrow when I saw Smith's bio. In addition to being a lawyer, he's a senior fellow at the Discovery Institute. This, some readers may recall, is the outfit that has been leading the fight against evolution in public schools. Sometimes the DI folks like to portray themselves as disinterested scientists searching for objective truth. But their agenda is actually ideological and broad--broad enough to include someone like Smith.
Update: Thanks to Pharyngula for an overview of Smith's other writings on biotech and the like--equally untenable scientifically, and equally in tune with the overall political agenda of the Discovery Institute.


Last week a region of the brain called the insula was in the news. As I described in my post, scientists found that physical pain and social rejection both activate the insula in much the same way. The insula returns now for a disgusting encore that gives a glimpse at how we get inside other people's heads.
European scientists had people sniff vials that gave off different odors while they were being scanned with fMRI. Disgusting smells triggered a distinct constellation of neurons in the insula. Then the researchers showed the subjects videos of people smelling vials of their own. The people in the videos either smiled with pleasure, showed no reaction, or screwed up their faces in disgust. When the subjects watched people being grossed out, the pattern of brain activation in their insula and elsewhere looked much the same as when they were being grossed out themselves. (The happy and neutral faces did not produce this distinctive overlap.)
In the new issue of Neuron, the researchers propose that we perceive disgust in other people by using much of the same circuitry that produces our own feeling of disgust. Empathy, in other words, is not a purely intellectual exercise, but an immediate visceral response--at least when it comes to disgust. This reaction is probably an ancient response. Our primate ancestors 50 million years ago would have benefited from a quick response to nasty-tasting food. It's possible that a visceral response to the sight of another primate eating some nasty-tasting food was helpful too--"Note to self: stay away from that fruit tree. It looks good, but Fred didn't like it one bit."
This week's report adds to a growing collection of studies (like this one) now showing that we perceive other people through a sort of empathetic simulation. When we feel sympathy for someone in a sad situation, for example, a network of regions becomes active that is responsible for forming mental images of actions we plan to take ourselves. Other mammals--particularly other primates--share some of this in-my-shoes circuitry, but in humans it has gotten extraordinarily elaborate. Only humans, for example, have a theory of mind, which lets us figure out not only that someone else is smelling something foul, but also that they may have foul plans for us. I wrote about this earlier this year in Science, pointing out how our unique theory of mind may have been an essential ingredient for full-blown language.
From such disgusting beginnings...


After years at a slow burn, the controversy over Terri Schiavo has hit the national news. Schiavo lost consciousness in 1990 after a cardiac arrest, and her husband recently won a lawsuit to have her feeding tube removed, over the objection of her family. Then on Tuesday, Governor Jeb Bush ordered that her tube be reattached, using powers given to him by the Florida legislature the day before.
If ever there was an argument for a living will, the Schiavo case is one. She supposedly told her husband she wouldn't want to be kept alive artificially, but never wrote anything down. If she had, the decision to give or withdraw care might have been a simple one. Instead, her husband and her family--and the country by proxy now--are in a muddled shouting match over life, death, the right to die, consciousness, and the soul.
There are several separate debates here, but people have been jumping back and forth between them as if they were all one. One argument is over the right of a surrogate to make a decision about whether someone should refuse not only medicine but even food. This is controversial no matter what state of mind a patient is in. (The Times points out today that the Florida legislature, by taking over the decision about whether Schiavo lives or dies, has probably passed an unconstitutional law.)
The state of Schiavo's mind is the source of a second argument. Her family has posted videos on their web site that they claim show she reacts to loved ones, smiles, and understands what they say. They also say that she could respond to therapy and improve.
The family claims to have testimony from 15 doctors backing up their claims, but I can't find anything on their web site, so it's hard to know what these doctors are saying. But I do know that Dr. Joseph Giacino of the JFK Medical Center in New Jersey has taken a look at the videos and hasn't found them persuasive. (See his remarks in a story last week in Time.) Giacino is one of the top experts on the rehabilitation of people with impaired consciousness. He also developed an objective way to gauge the level of consciousness in people like Schiavo. When I interviewed him for a Sept. 28 article for the New York Times Magazine, he explained how people in vegetative states not only have their eyes open, but also can assume disconcerting facial expressions. He gave me a tour of the Center for Head Injuries, where he works, and I could see how easy it is to read into a face what we want to see.
Giacino and others have defined an intermediate stage of consciousness, called the minimally conscious state. It's for people who show fleeting signs of awareness. He and his colleagues have shown that people diagnosed as minimally conscious are more likely to have a better functional outcome a year after their injury than those diagnosed as vegetative. But the longer a person like Schiavo is in a vegetative state, the less likely it is that any recovery will happen.
It can be hard to accept this. I've been surprised to discover this firsthand in the reactions to my article. In it, I wrote about how brain scans of people in minimally conscious states can show surprisingly complex responses to the sounds of voices and other stimuli. People in chronic vegetative states show no such responses. Yet I find my article keeps popping up as an exhibit in arguments that Schiavo is actually responsive and could recover. The latest example is a letter to the editor of the Tampa Tribune. In every case, people want to mix up the results from minimally conscious patients and people in a chronic vegetative state.
Living wills may help avoid future conflicts, but my talks with experts make me think that more is needed. We need a lot more research on how to make accurate diagnoses for people with serious brain injuries. And then we need to use that research to make sure that patients are carefully observed for more than just a few weeks, with both rigorous bedside exams and brain scans. All this will be expensive, but it's not as if our current state of neglect is a bargain. Hundreds of thousands of people are in vegetative or minimally conscious states, and their lifetime care can cost over a million dollars apiece. We can do better.
PS--Steve Johnson also muses on the strange paradox of the Schiavo case.


Science is so specialized these days that it's hard for scientists to look up beyond the very narrow confines of their own work. Biologists who study cartilage don't have much to say to biologists who study retinas. Astronomers who study globular clusters probably can't tell you what's new with planetary disks. But sometimes scientists from different specialties can come together and integrate their work into something truly impressive. A case in point comes from some ongoing research into the evolution of language.
No species aside from our own can use language. Chimpanzees and other primates can communicate, but they can't make the subtle sounds that humans can, nor can they turn those sounds into words organized into meaningful sentences. Something happened--or, more likely, many things happened--in the six million years or so since our ancestors split off from the other apes. Fossils offer only a few clues, because the vocal cords, muscles, and nerves that make speech possible are too delicate to turn into fossils. And there's no Pleistocene Napster we can turn to in order to download recordings of what our hominid ancestors sounded like.
Fortunately, there's another record of evolution embedded in the human genome. Unfortunately, it's incredibly hard to figure out what role individual genes have in something as complex as language. In fact, it was only in 2001 that scientists identified a gene involved in acquiring spoken language. They found it by studying a Pakistani family in which half the members suffered from a disorder that interfered with their ability to understand grammar and to speak. The scientists tracked the disorder back to a single mutation to a single gene, which is now known as FOXP2.
FOXP2 belongs to a family of genes found in animals and fungi. They all produce proteins that regulate other genes, giving them a powerful role in the development of the body. FOXP2 in particular exists in other mammals, in slightly different forms. In mice, for example, the part of the gene that actually encodes a protein is 93.5% identical to human FOXP2. And studies on mice show that it plays a crucial role in the developing mouse brain.
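A figure like that 93.5% is just a percent-identity comparison between aligned sequences. Here's a minimal sketch of how such a number is computed; the sequences below are made-up stand-ins, not the real FOXP2 coding regions:

```python
# Minimal sketch of percent identity between two aligned sequences of
# equal length. The sequences are invented stand-ins, not real FOXP2 data.

def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

human_like = "ATGGCTCAGTTACAA"
mouse_like = "ATGGCTCAATTACAG"
print(f"{percent_identity(human_like, mouse_like):.1f}% identical")
```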
Last year another group of scientists compared the human version of FOXP2 to the sequence in our close primate relatives. They found that chimpanzees have a version of the gene that's hardly different from the gene in mice. But in our own lineage, FOXP2 underwent some fierce natural selection. By comparing the minor differences in FOXP2 carried by different people, the scientists were able to estimate when that natural selection took place--roughly 100,000 years ago. That's about the time when archaeological evidence suggests that humans began using language. (For a good review of all this work, go here.)
How then did FOXP2 pave the way for language? The only way to really get at that question is to understand what the gene does. Some researchers have argued, for example, that it really isn't a "language gene" per se; instead, a defective copy screws up the motor control of the mouth, which then makes it very hard for a person to learn language. It would have as much to do with language as a blindfold has to do with reading, in other words.
Enter brain scanning. Recently, a team of London scientists got a glimpse at the gene by imaging the brains of the original FOXP2 family. As they reported this week in Nature Neuroscience, the researchers split up the family into those who had defective copies of FOXP2 and those who had working copies. They then had the subjects do different language tasks, such as thinking of verbs that go with nouns.
The scientists found that a change to FOXP2 changes the way the brain handles language. Specifically, in people with mutant copies of the gene, a language processing area of the brain called Broca's area is far less active than in people with normal FOXP2.
Broca's area is interesting for a lot of reasons, not least of which is the function of that same patch of tissue in primates. Obviously, they don't use it to talk. But this region is home to some remarkable cells known as "mirror neurons." They fire in the same pattern when a monkey performs some action--turning a lever, for example--and when the monkey sees another monkey performing the same action. These neurons may make imitation possible, and perhaps might have even laid the foundation for a primitive sign language long before our vocal tracts were ready to take over. The natural thing to do now is to measure FOXP2 expression in the Broca's area homolog in other primates. (Harvard's Marc Hauser raises some important reservations about the role of mirror neurons here.)
Of course, FOXP2 will almost certainly not turn out to be the single gene that made human language possible. But thanks to neuroimaging, gene expression profiling, and other new techniques, it can serve as the thin edge of a wedge that scientists can use to split this mystery open.


My book Soul Made Flesh will be coming out in January, but in the meantime, I've posted an excerpt on my web site. You can read it online or print out a pdf.


In the comments to my post yesterday about Nanoarchaeum equitans, an ancient parasite, the discussion took an interesting turn.
Web Webster wrote: "So in a way, N. equitans is both 'smarter' in that it uses more of its total capabilities (versus humans and the old '10% of the brain thing') and 'more efficient' in the way it works."
To which Brent M. Krupp responded: "That 'old "10% of the brain thing' is complete and utter rubbish. Not a grain of truth to it, nor was there ever. Sorry to go off on this pet peeve of mine, but it's unclear if you were serious in your reference to that myth."
I can't remember the first time I heard the claim that we only use 10% of our brain's full potential. It always sounded dicey to me, maybe because I didn't trust the people who pushed it. They'd always add that this "fact" meant that some New Age technique could liberate the other 90% of your brain power. As far as I could tell, they themselves had yet to liberate the first 10%.
Fortunately, we now live in an age when such myths can be torpedoed at DSL speed. The folks at Urban Legends offer a quick history of this bit of neurological misinformation. In a nutshell, in the 1930s neurologists figured out that only 10% of the human cortex becomes active during sensory stimulation or the motor control of the body. So the other 90% was referred to as "silent cortex." This technical term doesn't mean that that 90% is useless, only that it is silent in these particular tasks, like walking and smelling. In fact, these other regions become active in other kinds of thought--such as making decisions and recalling memories. But that didn't stop the 10% figure from taking on a life of its own.
By coincidence, the 10% story has been on my mind again recently. Over the summer I came across a fascinating paper in Current Biology by Peter Lennie of New York University. Lennie takes a look at how much energy the cortex uses to think. First, he calculates the total amount of energy used by the human cortex, based on recent neuroimaging studies. Then he calculates how much energy a single neuron in the cortex uses when it generates an electric impulse. And finally, he uses these figures to estimate how many neurons in the cortex can be active at any one time. His estimate? Around one percent.
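Out of curiosity, here's a minimal sketch (in Python) of the shape of that back-of-envelope estimate. The numbers are deliberately generic placeholders, not the figures from Lennie's paper--the point is just the arithmetic of dividing an energy budget by the cost of a spike.

```python
# A toy version of the energy-budget arithmetic. All numbers below are
# illustrative placeholders, NOT the values from Lennie's Current Biology
# paper; plug in the published figures to reproduce his estimate.

SPIKING_BUDGET = 3.5e8   # assumed ATP molecules/s a cortical neuron can spend on spiking
COST_PER_SPIKE = 7.0e8   # assumed ATP molecules consumed by a single spike
ACTIVE_RATE = 50.0       # assumed firing rate (spikes/s) of a neuron we'd call "active"

# Average firing rate the energy budget can sustain across the cortex
mean_rate = SPIKING_BUDGET / COST_PER_SPIKE

# If active neurons fire at ACTIVE_RATE, only this fraction can be active at once
fraction_active = mean_rate / ACTIVE_RATE

print(f"Sustainable mean rate: {mean_rate:.2f} spikes/s per neuron")
print(f"Fraction of neurons active at any moment: {fraction_active:.1%}")
```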
In a way, this finding is even more mind-blowing than the old 10% story. Now it seems that a full 99% of the human cortex is quiet at any time. This intense limitation can also help explain many features of the brain. It can account for the way the brain is arranged into specialized networks that can be rapidly adjusted as incoming information changes. It can account for the way neurons can jam-pack their signals with information. The cortex is a scarce resource, as Lennie's paper makes clear, and evolution has found various ways to make the most of it.
It would be a mistake for anyone to turn Lennie's results into a new urban legend about how we have yet to unlock 99% of our brain potential. For one thing, his work doesn't mean that 99% of our cortex is permanently shut down--only that relatively few neurons are active at any moment. And getting the other 99% of the cortex to be active at the same time would be no easy task. Even at rest, our brains use 20% of the oxygen we take in, and we rely on an intricate mesh of blood vessels to cool off our brains as they use up all this energy. If you reached the full potential of your brain, it seems, you would burn it up in the process.


There's been a fair amount of press about a new paper in Science that shows how the brain responds to social rejection. The kicker is that a region of the brain known as the insula becomes active. As I mentioned yesterday, that's the same area that responds to pain and physical distress. It's an interesting paper with historical dimensions that are missing from the news reports--historical in both the human and evolutionary sense. There's a lot of back-story behind the word "heartache."
A common theme in evolution is the way a structure or a system takes on new functions over time. In our "reptilian" ancestors, bones in the jaw were co-opted for conducting sound to our brains; over a couple hundred million years they've evolved into our middle ear bones. A lot of evidence now suggests that human feelings were built in a similar fashion on top of more ancient systems for sensing the state of the body--pain and distress for states that are dangerous to an animal, pleasant ones for states that are good. Disgust is an ancient response that keeps many animals away from bad-smelling food; some evidence suggests that when we are morally disgusted by bad behavior, we borrow some of the same circuitry. The new paper on rejection suggests how the basic pain response took on a social dimension in humans.
This overlapping circuitry may have helped produce the strange concept of heartache. It may be metaphorical now, but originally it was supposed to be a purely physical description. From ancient Greece to the Renaissance, a strong tradition held that the heart contained a soul of its own that could perceive the outside world and produce feelings. Great thinkers from Aristotle to Thomas Hobbes were convinced that nerves delivered their signals to the heart rather than the brain. With the birth of neurology in the 1600s, the brain came to be seen as the seat of emotions and perceptions. Meanwhile, the heart was de-souled, transformed into a mechanical pump.
But the tradition of the heart lives on, and not just in words like heartache. Think for a moment of all the images of Jesus with an open heart. You never see him pictured with an open brain.


One reason that I'm so riveted by neuroscience is the way it can blow the lid off of philosophical conundrums that have dogged Western thought for centuries. Case in point: in a recent study, scientists at Dartmouth asked subjects about something that was on their mind--an exam, a girlfriend, and so on. Then, while scanning their brains with an MRI machine, they told their subjects NOT to think about that thing.
We're all pretty comfortable with the idea that thoughts are the product of neurons, electrical impulses, and neurotransmitters. But if that's all that thought is, then what (or who) suppresses those thoughts?
This is a paradox that has bedeviled Western thought for centuries. Neuroscience has its roots in the scientific revolution in the 1600s, when natural philosophers set out to reinterpret the world as a machine. Rene Descartes saw the brain as a set of tubes, strings, and pulleys. His anatomy was still stuck in medieval misconceptions, and it would be another 30 years before it was set right by the English physician Thomas Willis, who's generally considered the founder of modern neurology. But both Descartes and Willis got tangled up in a paradox. On the one hand, they saw movement, memory, and lots of other faculties of the mind as mechanical changes in particles of the brain. But when it came to something like the will--the ability to control your own thoughts, for example--they hung up their scientific spurs. That had to be the result of an immaterial, rational soul above the laws of nature. Only a soul could be the source of something like will; nature, after all, was purely passive. (This is one of the themes of my upcoming book, Soul Made Flesh.)
Research like this study from Dartmouth (in press at Neuropsychologia) can help resolve this material/immaterial paradox. The researchers found that one key region in the brain becomes more active when you're trying to keep a particular thought out of your mind as opposed to just thinking in an ordinary way. It's known as the anterior cingulate cortex (ACC), located in the cleft of the brain. A lot of earlier research has suggested that the ACC is a conflict monitor, seeing how well the brain's outputs match up with its goals. If the goal is not to think of a girlfriend, thinking about her rouses the ACC.
Earl Miller at MIT and Jon Cohen at Princeton have proposed a model for what happens once the ACC senses a conflict. They argue that it sends signals to the front of the brain, known as the prefrontal cortex. The prefrontal cortex is basically a very complicated set of track switches. Signals coming into the brain activate neurons in pathways leading from input to output--some kind of action, in other words. The neurons in the prefrontal cortex mingle their nerve endings with every step in these pathways, and they can boost some signals while squelching others. In the process, they can produce a particular output for any given input.
If everything is working smoothly--if inputs produce the right outputs--the prefrontal cortex is pretty quiet. But when the ACC kicks in, the prefrontal cortex neurons respond by boosting and squelching signals in new ways until they get rid of the conflict. In the case of the Dartmouth study, unwanted thoughts appear to switch on the ACC, triggering a rearrangement of the prefrontal cortex until the thought goes away.
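To make that loop concrete, here's a toy sketch in Python. It is not Miller and Cohen's actual model--the pathway names, numbers, and update rule are all invented for illustration--but it captures the basic cycle described above: a conflict monitor compares outputs to goals, and a set of gates boosts or squelches pathways until the mismatch goes away.

```python
# Toy sketch of the conflict-monitor / gating loop described above. This is
# an invented illustration, not Miller and Cohen's model: pathway names,
# numbers, and the update rule are placeholders.

def settle(inputs, gates, goal, steps=20, rate=0.5):
    """Adjust the gates until the gated outputs match the goal."""
    for _ in range(steps):
        # Each pathway's output is its input scaled by its gate
        # (the "prefrontal" boost-or-squelch factor).
        outputs = {p: inputs[p] * gates[p] for p in inputs}

        # "ACC" step: conflict is the mismatch between outputs and the goal.
        conflict = sum((outputs[p] - goal[p]) ** 2 for p in inputs)
        if conflict < 1e-3:
            break

        # When conflict is high, nudge each gate to shrink its pathway's error:
        # squelch overactive pathways, boost underactive ones.
        for p in inputs:
            error = goal[p] - outputs[p]
            gates[p] = max(0.0, gates[p] + rate * error * inputs[p])
    return outputs, conflict

# Example: an intrusive thought is strongly cued, but the goal is to keep it
# suppressed while an allowed thought passes through unchanged.
inputs = {"intrusive": 1.0, "allowed": 0.6}
gates = {"intrusive": 1.0, "allowed": 1.0}   # start with no biasing
goal = {"intrusive": 0.0, "allowed": 0.6}    # suppress one pathway, keep the other

print(settle(inputs, gates, goal))
```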
The Dartmouth researchers put a very interesting twist on this study: they then told their subjects not to think of anything at all. The ACC became active again, but so did other regions, most notably a strip on the side of the brain called the insula. The insula is known to be associated with pain and distress. It's possible that suppressing thoughts so strongly is hard work, and the insula represents the struggle.
This study may help in a practical, psychiatric way, by shedding some light on what happens when unwanted thoughts intrude on our minds uncontrollably. But it also has a more philosophical side benefit. When the Dartmouth scientists put their subjects in a scanner and told them not to think about an exam or a girlfriend, the images they took didn't show some homunculus crouching in a ventricle of the brain, issuing commands. ("You there--you thought! Disappear immediately!") Instead, they saw evidence of a network continually reconfiguring itself.


It's never pretty to see journalism transformed into propaganda, especially when you're the one who wrote the journalism. I recently did an article for the New York Times Magazine about the grey zone between coma and consciousness. The National Right to Life web site then posted a long "News & Views" piece by one Dave Andrusko that pretended to recount my article. It was annoying enough to see careless mistakes--adding quotation marks to a passage from the article, so as to put it into the mouth of a doctor, for example. But it was really unpleasant to see my article distorted to serve a political purpose.
Here's some background. The National Right to Life Coalition is opposed to withholding care from people with chronic, severely impaired consciousness. Karen Ann Quinlan's parents went to court for the right to take her off a ventilator in 1976; since then, families have won the right to have feeding tubes withdrawn from patients. Currently there's a case in Florida over a woman named Terri Schindler-Schiavo, whose husband wants her care terminated.
My article was primarily about scientists who are doing basic research on the biology of consciousness, trying to figure out what's going on in the brains of people who cannot tell them anything about their own inner life. In some cases, people with seriously impaired consciousness--who cannot talk, who can barely follow a command to blink an eye--turn out to respond to language and other stimuli with brain patterns that are surprisingly like those of conscious people. It might even be possible some day to give them drugs or electrodes to help them become a little more conscious.
In retelling this story, Andrusko of the National Right to Life Coalition snipped up my article and turned it into a collage to suit the group's agenda. And in the process, he spread some serious misinformation that could confuse families and cause them unnecessary grief.
In the Times article, I describe the spectrum of states that exist below the level of full consciousness. If you get in a car crash, you may come into a hospital in a coma. In other words, your eyes are closed and you have no signs of consciousness at all. If you survive for a few weeks, you may come out of the coma but still not be conscious. Your eyes open and close in a wake-sleep cycle, you may move your eyes, you may squeeze someone's hand. But that's about it. This is known as a vegetative state (VS). You may stay in this state for the rest of your life--in a chronic vegetative state, in other words. Or you may begin to show more signs of consciousness. If you show signs of consciousness that are unreliable--touching your nose once on command and then not touching it in later examinations, for example--then you may be diagnosed as being in a minimally conscious state (MCS). From MCS, you may gradually show more reliable signs of consciousness until you recover fully, or you may stop at MCS for the rest of your life.
The doctors I portray have done two different studies. One they did on chronically vegetative patients, some of whom showed strange behaviors--like shouting a curse word every couple of days in one case. In that study they found that isolated parts of the cortex are still active in some patients, like fragments of mind. The more recent study (still ongoing, actually) is on MCS patients. In response to the sound of a familiar voice or a scratch on the arm, the brains of these patients become active in much the same way conscious brains do. And yet, between these responses, their brains use less energy than the brain of someone under full anesthesia. It suggests a real biological difference between chronic VS and MCS, and hints at some ways to bring more awareness to people with MCS (a few hundred thousand Americans are in MCS, by the way).
Here's what you get in the National Right to Life piece: MCS supposedly "identifies people who are in a condition somewhere between a pvs [persistent vegetative state] and a coma."
That's like saying 3 is somewhere between 1 and 2.
By mixing up the differences between MCS and VS, Andrusko performs a sleight of hand with the results of the scanning work, making it seem as if vegetative patients actually have these remarkable reserves of mental activity. And then comes the political twist: "With the Zimmer article," he writes, "we can hope that the rag-tag army of family, friends, and volunteers who have stood by Terri Schindler-Schiavo has been provided with some reinforcements." And he then adds, "How ironic that it should come from the New York Times!"
I see another irony in all this. It is absolutely untrue that people accurately diagnosed in chronic vegetative states--who have not shown signs of consciousness for years--have shown any evidence of consciousness in brain scans. Only those in MCS do. The distinction is crucial--particularly because in the past some people with MCS may have been misdiagnosed as being in a vegetative state.
I'm not writing this to attack the National Right to Life Coalition for their political positions. Frankly, in the narrow confines of a blog, I don't even want to get into the vastly complicated issue of withdrawing care from these patients. (Except to point out that so far, the courts have refused to come down in favor of withdrawing care from MCS patients when their wishes were not clearly stated before they lost consciousness.) But shuffling facts to suit your political needs--particularly when it concerns the most agonizing experience a family could ever go through--is pretty awful.
I'm reminded of Mark Twain's reaction to seeing a ridiculous translation of one of his stories into French. "I think it is the worst I ever saw," he wrote, "and yet the French are called a polished nation. If I had a boy that put sentences together as they do, I would polish him to some purpose."


I wrote an article for this Sunday's New York Times Magazine about the grey zone between coma and consciousness. Stories like this one are always hard, because there are so many crucial dimensions to the subject and so little room to do justice to them all. For example, I couldn't even begin to explain how the research I describe in the article--using PET and MRI scans to measure the brain activity in people with traumatic brain injuries--is a beautiful reverse twist on some of the most famous research ever done on the brain: the nineteenth century doctor Pierre Paul Broca's discovery of a region of the brain dedicated to speech.
In 1861 Broca (1824-1880) treated a man who had suffered a stroke that robbed him of his ability to say anything except the word "Tan." (He said it so much that he was nicknamed Tan.) Despite this devastating blow to his faculty of language, he could still understand the speech of other people. After Tan's death, Broca autopsied his brain to find exactly what part of the brain had been damaged. It turned out that the stroke had destroyed part of Tan's left frontal lobe. Broca looked at other patients with the same condition (known as aphasia), and found that they too suffered damage in the same area--what came to be known as "Broca's area."
It's hard to convey just how important this discovery was. As I explain in my book Soul Made Flesh, the idea that different faculties were the responsibility of different parts of the brain was first proposed in 1664 by the English physician Thomas Willis. Willis did some crude experiments to show that this was true (destroying nerves and bits of brain in dogs and waiting to see if they died), but there was a lot of resistance to the idea well into the 1800s. If you took this idea too far, you'd wind up with a brain that carried out all of the operations of the mind in different regions, like the gears in a clock. This was a threat to the traditional concept of the rational soul. To divide the soul was to deny it.
In the late 1700s Franz Gall put a new spin on Willis's ideas with a theory that came to be known as phrenology. Gall had some pretty wild ideas about how you could tell which faculties a given area of the brain was responsible for--such as studying the bumps on people's heads. But Gall was also an excellent anatomist, and Broca respected his ideas. And in 1861 Broca found the real evidence for the "modularity" of the mind that Gall had been searching for. Tan's stroke was like a natural experiment, shedding light on the precise function of Broca's area by knocking it out. A few years later, the German neurologist Karl Wernicke used the same approach to find a second region of the brain that played a different role in language. He studied a stroke victim who could talk freely, but made little sense, and couldn't understand what other people said. That area is now known as Wernicke's area.
Last year Nicholas Schiff of Weill Cornell Medical College and his colleagues, whom I profile in my NYTM piece, offered a modern twist on Broca's work. As they reported in the journal Brain, they took PET scans of people in chronic vegetative states. That means that after some devastating brain injury or disease, these people stopped showing any signs of awareness--although they might undergo some disconcerting reflex actions, such as opening their eyes and closing them in a sleep-wake cycle, or even making noises or moving their limbs.
One of the most astonishing cases Schiff's team studied was a woman who had been in a vegetative state for 25 years. Every few days she let out a curse. The scientists used PET to measure how much energy her brain was using, and where that energy was being used. It turned out that practically all her cerebral cortex--the outer layer of the brain that is required for complex human thought--was shut down cold. You see the same readings in the PET scan of a healthy person under anesthesia. But in a few small regions of her cortex, the neurons were burning energy at levels much closer to that of a conscious person. One of those areas was Broca's.
Broca discovered what happened to the mind if one part of the brain was destroyed. You get a conscious man who cannot speak. Schiff discovered what happens if that one part is just about all that's left. You get an unconscious woman who utters words.