\n"; echo $styleSheet; ?>
include("http://www.corante.com/admin/header.html"); ?>

The University of Illinois at Chicago recently announced the installation of the most powerful human brain imaging system to date. While most fMRI systems in use today are powered by 1.5-tesla or 3.0-tesla magnets, this new high-resolution fMRI system has a 9.4-tesla magnet built by GE Healthcare (a tesla is a unit of magnetic field strength).
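To get a feel for why the field strength matters: as a rough rule of thumb (an assumption on my part, since real-world gains depend on coil design and tissue relaxation), MRI signal-to-noise ratio grows approximately linearly with the magnet's field strength.

```python
# Back-of-the-envelope SNR comparison across MRI field strengths.
# Assumption: SNR scales roughly linearly with the static field B0;
# actual gains depend on hardware and tissue properties.
baseline_t = 1.5  # typical clinical scanner, in tesla

for b0 in (1.5, 3.0, 9.4):
    print(f"{b0:>4} T magnet: ~{b0 / baseline_t:.1f}x the SNR of a 1.5 T system")
```

Under that simple scaling, the 9.4-tesla system offers roughly six times the sensitivity of a standard clinical scanner.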
As I've mentioned many times, advances in neuroimaging are critically important in order to understand the workings of the human brain, detect diseases before their clinical signs appear, develop targeted drug therapies for illnesses, and provide a better understanding of learning disabilities. While I might not go as far as Dr. Keith Thulborn, director of the UIC Center for Magnetic Resonance Research, who claimed that this technological leap forward is as revolutionary to the medical community as the transition from radio to television was for society, I would suggest that this is definitely a step toward our emerging neurosociety. Also, it looks like the neuroimaging group at University College London will now have some real competition.
Correction Update 10/10: Thanks to a reader in India for pointing out that in my haste to post this piece I named the incorrect university in the first sentence. It is not the University of Chicago, but the University of Illinois at Chicago.


IBM researchers have made a radical breakthrough in imaging sensitivity. The method, called magnetic resonance force microscopy (MRFM), improves MRI sensitivity by some 10 million times compared to the medical MRI devices used to visualize organs in the human body. It is so sensitive that it can detect the faint magnetic signal from a single electron buried inside a solid sample. While applications on live human tissues, like the brain, are still speculative, this imaging breakthrough is an important step toward non-invasive single-neuron brain imaging. (Read about other brain imaging breakthroughs, like those at MIT.)


This is for you neuroimaging fanatics. I highly recommend this recent NeuroImage paper by Andreas Bartels and Semir Zeki on the chronoarchitecture of the brain: natural viewing conditions reveal a time-based anatomy of the brain. This reminds me of the talk Rodolfo Llinas gave at NBIC, Faster than Thought.


The Human Brain Project (HBP) turned 10 years old this past week and neuroscientists gathered to celebrate recent advances and speculate about what is to come. While the field of cognitive neuroscience took a while to realize the importance of data sharing and neuroinformatics, it is now working to archive and openly disseminate data from neuroimaging studies of brain function from across the globe.
One of the results of this decade-long effort has been the development of the fMRI Data Center (fMRIDC), which provides computerized analysis of neuroimages and lays the groundwork for a neuropsychiatric image database that could be used for clinical assessment.
As I've written previously, neurotechnology will be used to define mental disorders in the coming years. Indeed, the DSM-V, due for publication in 2010, will most likely contain neuroimaging and genetic analysis information to more accurately diagnose and treat mental disorders.


This is very interesting (from Strategy and Business):
There is substantial evidence that overutilization and misuse of technology leads to spending that exceeds its value for patients. In the diagnostic imaging technology category — which has grown to nearly a $100 billion business — spending increases are driven to a large extent by the growth in the number of machines installed in hospitals, as well as in doctors’ offices and at imaging centers. This has led in turn to overcapacity in many areas and has created incentives for doctors to prescribe unnecessary procedures. Duplication of procedures (i.e., a patient receives an MRI, then a PET scan, even though doing both procedures does not help doctors get closer to a diagnosis) and overuse of high-end procedures in situations where they add little value has also driven up technology spending unnecessarily.
This needs more research.


You know neurotechnology is emerging when people start putting hundreds of millions of dollars into building new centers of brain imaging excellence.
Last week, Imperial College London and GlaxoSmithKline announced plans to build a £76 million medical imaging research center in London that will staff 400 researchers. The center will focus on improving treatments for diseases like stroke and cancer while at the same time driving new developments in imaging technology.
Even for a research-intensive university like Imperial College, the new center is on a huge scale. “This is by a long way the largest such investment internationally that I know of in imaging science,” said Leszek Borysiewicz, principal of the faculty of medicine.
The site at the college's Hammersmith Hospital campus was chosen over several top US and European institutions. It will house 400 researchers and support staff from industry and academia, half of which will be new positions. The company is contributing £28 million to the construction of the center and £16 million to furbish it with imaging equipment. The rest will be provided by Imperial College and the hosting hospital.
Basic and translational research will take place in the center when it is completed in 2006. This will initially focus on neurological diseases and cancer; however, the publicly funded research is likely to branch out into a number of other disorders as well. Borysiewicz said that there are likely to be developments in the diagnostics and therapeutics of diseases, but that the college will at the same time be able to push the technology with its strong base in engineering, computing, and chemistry.
Fuelling basic science is a welcome side effect for the drug company. “If we can increase the science output of a major university, it's good for us because you can't discover drugs without understanding disease,” said John Brown, who oversees imaging research at GSK.
Mark my words, this is only the beginning. Wait until Wall Street realizes the power of neurofinance.


Alexander Pines and his colleagues at UC Berkeley have discovered a remarkable new way to improve the versatility and sensitivity of magnetic resonance imaging and the technology upon which it is based, nuclear magnetic resonance (NMR).
"NMR encoding is exceptional at recovering chemical, biological, and physical information from samples, including living organisms, without disrupting them," says Pines, noting that MRI, a closely related technology, is equally adept at nondestructively picturing the insides of things. "The problem with this versatile technique is low sensitivity."
In their soon-to-be-released paper in the Journal of Magnetic Resonance Imaging, they explain how encoding and detecting NMR/MRI signals separately makes many otherwise difficult or impossible applications possible.
"For example, xenon can be dissolved in chemical solutions or in the metabolic pathways of biological systems, then concentrated for more sensitive detection. Other signal carriers can also be used for remote detection, including hyperpolarized helium gas for medical imaging or liquid oil or water for geological analysis. Since only the carrier reaches the detector, alternate detection methods, incompatible with the sample because they may be intrusive or require transparency, can also be used -- for example, optical methods that can detect the miniscule NMR signals from living cells."
Like Randall Parker, I too believe that the most interesting developments to watch are analysis instruments. While still in basic research mode, this latest laboratory breakthrough will slowly make its way into corporate and academic labs, greatly refining our basic understanding of human biology in years to come.
[Thanks to Kevin Keck and the Bay Area Futurist Salon for bringing this to my attention.]


This week's "The Scientist" contains several relevant articles for neurotechnology.
Here are the highlights:
1. "Numbers on the Brain" breaks down the public and private funding initiatives supporting the $60B neuroscience/pharmacology market.
2. "Cutting Neurons Down to Size" details the latest research into how and why connections among neurons go through a process of self-pruning in early childhood development. Neuroscientists have known about neural pruning for decades: synaptic density peaks between ages 1 and 2, declines until age 16, and then levels off. Experts predict that sorting out how pruning works might eventually help in understanding epilepsy, neurodegenerative diseases, mental retardation, autism, and schizophrenia.
3. "fMRI: The Perfect Imperfect Instrument" covers how most investigators rely on the fMRI method that uses blood oxygenation level-dependent (BOLD) contrast. The signal arises from changes in the magnetic characteristics of blood related to differences in the relative amounts of oxygenated and deoxygenated hemoglobin. Though many researchers correlate blood flow with neural activity, the connection hasn't been solidly established.
4. "Caution: Brain Working" further deconstructs fMRI, with important implications for cognition-related experiments that depend on it. fMRI suffers from poor temporal resolution, which makes it impossible to segregate the different stages by which words and their meanings are differentiated during conversation. For this reason, language experts like Peter Hagoort use fMRI in combination with electroencephalography (EEG) and magnetoencephalography (MEG) in their efforts to identify those stages.
5. "It's Neuron's Time" describes how scientists are taking the first stabs at answering at least one part of the question of how the brain perceives time. A recent University of Washington study was the first to document how neurons in primates track time from one instant to the next. Timing is a subject of increasing interest because it's important in learning. Learning skilled movements, for instance, involves internalizing their sequences and timing.
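A minimal sketch of how the BOLD contrast in item 3 gets used in practice: standard fMRI analyses predict the measured signal by convolving the stimulus timing with a hemodynamic response function (HRF). The double-gamma HRF and its parameters below are illustrative assumptions, not details from the article.

```python
import math

def gamma_pdf(t, shape):
    """Gamma(shape, scale=1) probability density; zero for t <= 0."""
    if t <= 0:
        return 0.0
    return t ** (shape - 1) * math.exp(-t) / math.gamma(shape)

def hrf(t):
    """Double-gamma hemodynamic response: an early peak minus a late undershoot.
    Parameter values here are illustrative, not canonical."""
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

# A 5-second task block starting at t = 10 s, sampled once per second.
stimulus = [1.0 if 10 <= s < 15 else 0.0 for s in range(60)]

# Predicted BOLD time course = stimulus convolved with the HRF.
bold = [sum(stimulus[s - u] * hrf(u) for u in range(min(s + 1, 31)))
        for s in range(60)]
peak = max(range(60), key=lambda s: bold[s])
print(f"predicted BOLD peak at t = {peak} s (stimulus ran from 10-14 s)")
```

The sluggishness is visible directly: the modeled signal peaks several seconds after the stimulus, which is exactly the temporal-resolution problem items 3 and 4 describe.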
The Take Away: All these articles show that we are suffering from a brain imaging bottleneck.
Peter Hagoort's quote sums it up nicely: "Many neuroscientists dream of a 'more perfect' instrument, one that will combine the spatial sensitivity of fMRI with the millisecond temporal acuity of EEG or MEG, but it is difficult to predict what such an already rapidly changing technology will look like in 10 or 20 years."
That timing seems just about right to me.


Recognizing the importance of brain imaging technologies, the Nobel Assembly has awarded the 2003 Nobel Prize in Medicine to American Paul Lauterbur and Britain's Peter Mansfield for their discoveries concerning magnetic resonance imaging (MRI), a painless diagnostic method used by doctors to look inside the bodies of millions of patients every year.
Update: 22,000 MRI scanners are in use worldwide, and more than 60 million scans have been performed. More.


Caltech neuroscientist Christof Koch is interviewed by The Scientist this week on his decade-long discussion with Francis Crick about the nature of consciousness:
Koch states that he and Crick have revised their earlier proposition that synchronous neuronal oscillations might be at the heart of consciousness. They originally believed that this theory might be the solution to the so-called binding problem: How do differently processed aspects of an object bind together into one percept--red + round + shiny = apple, for example. "Unfortunately, the evidence is slim for a direct relationship," Koch says. "What's much more plausible now is that synchronized firing activity in the 40-Hz range may be necessary to resolve competition (among separate neural circuits competing for conscious attention)... There's quite a bit of evidence that oscillations might be involved in biasing the selection, but once I'm fully conscious of [the percept], it's unclear whether [the oscillations are really needed.]"
As visual scientists, Koch and Crick are primarily defining consciousness as differences in visual attentiveness. Although this reductionist approach may be moving the ball forward a bit, consciousness will remain an elusive concept for years to come.


Using fMRI brain scanners, Yale scientists report in the NYTimes that two types of brain problems cause dyslexia. This new information should lead to more effective treatment for both types. The two types are divided into those whose dyslexia is either:
As I've written previously, neurotechnology will continue to define mental disorders more accurately as we understand how the brain operates at increasingly refined scales. Dyslexia is just the latest example. What's next?


San Diego-based Neurome is racing to chart the brain's neural circuitry in the hope of creating breakthrough treatments for mental illnesses.
"All this information about the function of the brain has to somehow be stored in a database that is standardized and can accurately depict the molecular, cellular and circuitry patterns of brain activity so that researchers can look at it and determine what's normal and what's not, section by section, circuit by circuit. And that is the function that Neurome intends to provide to drug discovery companies, said Dr. Floyd Bloom, Neurome's chairman and one of its founders.
Neurome scientists have improved the technology so that now it takes about 35 minutes to collect the volume of data it previously took about seven hours to record, Bloom said.
"They are trying to measure and record how, over time, the disease affects the connection and communication, or electrical charges, between the neurons and cells in the brain." (more)
Backed by an all-star team and $13m in funding, Neurome is initially focused on Alzheimer's disease. Although I expect valuable results from their work, they will have to solve the animal mental health model problem at some point, as human neural circuitry doesn't correlate precisely with mouse neural circuitry.


This year's annual Human Brain Project meeting will be held in Bethesda, Maryland, May 12-13. Because understanding brain function requires integrating information from the level of the gene to the level of behavior, neuroinformatics is the primary area of focus for the National Institute of Mental Health, which sponsors most of the government grants in this area. This meeting always has a few outstanding presentations.


Developing safe and effective neurotechnology will depend on continued advances in biochips and brain imaging technologies. This week's Science reports good progress on the imaging front (article links require subscription).
Brain science still has a long way to go but these efforts show that we are making headway on many fronts.


Current brain imaging technologies constrain our ability to understand how the brain functions. To develop next-generation cogniceuticals we will need to move beyond today's three brain imaging technologies to the level of neuron and intra-neuron scanning.
fMRI (functional magnetic resonance imaging) has a resolution limit of about a cubic millimeter, a volume that can still contain tens of thousands of neurons. PET (positron emission tomography) scans are more accurate in determining where in the brain neurons are being activated but have poor temporal resolution, while EEGs (electroencephalograms) are more accurate in precisely timing events but are unable to track important biochemical attributes.
Update 5/20: Current brain imaging still provides only a crude snapshot of brain activity. Neural processes are thought to occur on a 0.1 millimeter scale in 100 milliseconds (msec), but the spatial and temporal resolution of a typical scanner is only 3 millimeters and about two seconds.
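The size of that gap is worth making explicit. Using the figures quoted above (0.1 mm / 100 ms neural processes versus 3 mm / ~2 s scanner resolution), a quick calculation shows how coarse the snapshot really is:

```python
# Gap between neural processes and a typical scanner, using the
# figures from this post: 0.1 mm / 100 ms vs 3 mm voxels / ~2 s samples.
neural_space_mm, neural_time_s = 0.1, 0.1
scanner_space_mm, scanner_time_s = 3.0, 2.0

# Voxels are volumes, so the spatial mismatch is cubed:
# a 3 mm voxel spans (3 / 0.1)^3 = 27,000 of the 0.1 mm volumes of interest.
spatial_gap = (scanner_space_mm / neural_space_mm) ** 3

# Each ~2 s sample spans 20 of the 100 ms windows of interest.
temporal_gap = scanner_time_s / neural_time_s

print(f"spatial gap: ~{spatial_gap:,.0f}x   temporal gap: ~{temporal_gap:.0f}x")
```

In other words, every voxel in a typical scan blurs together tens of thousands of the spatial units where neural processes happen, and every sample averages over twenty of the relevant time windows.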


How many ways can our brains be molded? Researchers at Oxford believe they have zeroed in on the brain region involved in foreign accent syndrome, which causes patients' accents to shift suddenly.
Listen to a recent example of an English woman reporter who has foreign accent syndrome: before and after.
The first known case was reported in 1941 and involved a Norwegian woman who was ostracized when she developed what her neighbors thought was a German accent after she recovered from shrapnel injuries.