About this site
Here we'll review recent developments in drug discovery and medicine and the IP issues and financial implications they have, along with general thoughts about research. Also likely to make an appearance: occasional digressions into useful topics like which lab reagents smell the worst.
About this author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending his post-doc in Germany on a Humboldt Fellowship. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly.
There are some other interesting approaches to treating HIV that relate to the ones I spoke about yesterday. I went into the topic of using RNA interference to go after the CCR5 receptor, which seems to be a very important cell-surface protein involved in infection by most strains of HIV. As I mentioned, the researchers had to administer heaps of the siRNA ("small interfering RNA") to accomplish this, presumably because it gets degraded pretty quickly and isn't taken up well by cells.
But there's a way around this problem. Back in January, a Caltech/UCLA team showed the results of using a far more certain way to get basically the same siRNA into cells - make the cells make the stuff themselves. These RNA sequences can be produced by an enzyme called RNA polymerase III under the right conditions, and this team took advantage of that to hijack the cell's own systems into producing the siRNA.
How do you do that? There's already a well-known delivery system for bringing in RNA and DNA sequences into cells in a way that causes them to be taken up and used very efficiently - a virus. In this case, they turned around and used a lentivirus derived from HIV itself (after all, it's a pretty damn effective virus.) The engineered virus was loaded with the DNA needed for the siRNA molecule (and another gene, for the workhorse Green Fluorescent Protein.) GFP is an invaluable marker, because it doesn't interfere with many other processes, and as the name implies, it glows bright green under the right conditions. It gives you a tremendous way to follow which cells actually took up your new DNA package and expressed it.
Infecting human lymphocytes with this virus worked - a good percentage of them took up the new DNA and expressed it - they glowed green under the proper wavelength of light, and they produced an siRNA that shut down expression of the CCR5 gene. Exposure of these cells to HIV led to a 3- to 7-fold reduction in their total virus load as compared to controls. That's not too large, in one way, but it could well be large enough to have a substantial clinical effect. And these are early days.
Meanwhile, there's a small company called Virxsys that's using HIV-derived lentivirus vectors as well. And they're going to be the first people to try them out in humans - just don't ask me how the company's name is pronounced. Their plan is to enlist HIV-infected volunteers that are failing current treatment regimens, but don't yet have opportunistic infections. They'll isolate CD4 T cells from the patients, allow their engineered virus to infect them, and send them back in. These cells will now be expressing an antisense DNA that's targeted to bollix up the expression of a key part of the HIV protein envelope - without that, the virus can't get a foothold. Any wild-type HIV that tries to infect these cells will be stopped in its tracks, and it's going to be very difficult indeed for the virus to mutate around this sort of attack.
Basically, these people will have been infected with a helpful form of HIV, giving them an artificially enhanced immune system which will attack wild-type HIV by a method (antisense) that is beyond anything used by living organisms. This is a pretty audacious strategy, but the company has done a lot of groundwork to convince the FDA that it's safe. (For the hard-core molecular biology types, here's their presentation to the FDA.) No ill effects have been seen in any of their animal models, and the antisense sequence doesn't start showing up in any other tissues. Virxsys announced recently that they're going into man within the next three months. It's going to be interesting. . .wish them luck.
|The Latest RNA News
Regular readers will know that to me, that's an exciting headline. Yep, I live in my own world, a fact that my wife will cheerfully corroborate.
But this could be worth getting excited about; at the very least, it's the beginning of something that eventually will be worth it. I've mentioned RNA interference (RNAi) before on the site (see January 21,) as well as my conviction that it's going to be the source of a shower of Nobel prizes in the reasonably near future. As well as standing ideas about gene regulation on their heads and illuminating fundamental processes that no one had a clue about, these latest RNA tricks have clear therapeutic implications. The ability to turn off a particular gene's expression at will could lead to a number of new approaches to intractable disease, if the (many) difficulties can be ironed out.
Said ironing has commenced. In the latest issue of Nature Medicine, a team from Harvard reports that they can use RNAi to protect mice against an experimentally-induced form of hepatitis. The gene in question is Fas, which codes for a protein involved in an apoptosis (cell suicide) pathway. Fas has been a target for some time, because it's expressed at relatively high levels in the liver, but not in other tissues, and its untimely activation can cause all sorts of havoc. The standard method to affect gene transcription in adult cells, antisense DNA, has already been used to attack this gene in hepatitis, so it was a natural proving ground for RNAi.
And it seems to work. Infusion of a small "silencing RNA" for Fas protected the mice against later treatments that activate the protein. Control mice succumbed to liver failure, but 82% of the treated mice survived. Another mechanistic hepatitis challenge showed similar results. It appears that at least 80% of their liver cells took up the siRNA, which is pretty impressive.
Well, there is a catch. To achieve these results, they had to infuse a raving, gibbering sledgehammer excess of siRNA - in a dose equivalent to half-again the rodents' blood volume. That's fine for a proof-of-principle study, but there's obviously a lot of work to be done to deliver these RNA molecules more effectively. (The same problem has been driving the antisense DNA people insane for many years now; let's hope that RNAi is more tractable.)
This sort of therapy immediately suggests antiviral applications, which (to paraphrase Watson and Crick) has not escaped anyone's notice. The same Harvard group has reported at a conference that siRNA infusion against the CCR5 receptor (which I wrote about exactly a year ago) seems to protect cells against HIV infection. In this case, there are already small molecules targeting this receptor, and they can be administered much more easily. But the promise is there, and this field is attracting swarms of good researchers. It's going to be interesting to see how far it can go.
|More On Vaxgen
Greetings to the Instapundit referral hordes. I have a few more things to say about this trial, actually.
For one thing, Vaxgen is also trying out this immunization in Thailand, with results expected in the next few months. This trial is targeted at transmission of HIV in intravenous drug abusers. It'll be interesting to see if the effects in their Asian subgroup translate into a more robust response in the Thai population - frankly, that's about the best hope they have of seeing it work well at all.
But (as some correspondents correctly pointed out today) there have been many clinical trials that showed an unusual effect the first time which didn't repeat. Generally trials get larger as the testing process goes on, and the greater statistical power tends to wash out the stuff that just happened by chance. (Also, later trials tend to get designed more specifically to look at the effects seen in earlier ones, and some of this stuff just doesn't stand up to a direct test.)
Vaxgen's talking about how they plan to seek approval for this therapy - but I really don't see how that's ever going to happen if these ethnic differences don't pan out. Otherwise, as far as I can see, they have nothing. And an ineffective vaccine would be worse than no vaccine at all, because it would breed a false sense of security among its recipients. No, Emerson's quote that "When you strike at a king, you must kill him" applies here. If you're going to vaccinate against HIV, it had better work.
My other thoughts come from the NPR comment I mentioned last night. The story of how Vaxgen ended up with the ethnic mix they did seems plausible. As the commentator pointed out, there's a terrible (and unfortunate) suspicion of drug testing and medical experimentation among some parts of the black population - another legacy of the infamous and indefensible Tuskegee syphilis experiment. Vaxgen was trying to get as many motivated people entered as quickly as possible, and they ended up with an ethnic mix that recapitulates neither the population as a whole, nor the HIV treatment population.
Usually, this doesn't matter much. There really aren't many medicines that show a pronounced difference between ethnic groups (some classes of high blood pressure therapy are a well-known exception, although people argue about that one as well.) That's probably one reason that Vaxgen didn't make more of an effort, and another one is that this was a pretty long trial. Three years, with multiple immunizations along the way, requires a lot of motivation and staying power. They probably felt that they were better off with volunteers that came to them, rather than ones that required more outreach and persuasion.
That's true whether or not there's an ethnic or cultural factor at work. The nightmare form of a clinical trial is one where people start steadily dropping out as it progresses. You can just watch the statistics unravel right in front of you; your chances for a meaningful answer are walking out the door with your patients.
Well, one way or another, we'll know more when Vaxgen's Thai data come in. If that shows the same effect, I would expect them to concentrate on separate ethnic groups in further clinical studies. They'd probably still need more evidence that the effect is real to convince the FDA. As it stands now, they have very little chance. I think we're going to have to pin our hopes on DNA-based vaccines and their variations, as I mentioned in last spring's TechCentral article, and other companies (like Merck) are in the lead there.
|Fun With Clinical Data
Regarding Vaxgen's data, reader Ted Arrowsmith points out:
"The analysis by racial group appears to be post-hoc; if not the researchers should have stratified by race (I don't know if they did or not) and performed separate sample-size calculations and accrual totals (which they clearly did not). We don't know how many subgroup analyses the investigators did, but experience suggests that one-product biotech firms massage their data pretty aggressively. If they looked at 20 groups, we would expect one to be statistically significant at the p<0.05 level. . .A useful exercise for anyone who reads trials is to get a dataset from a large negative trial and run bogus subset analyses -- zodiac sign, odd or even year of birth, study number, etc. -- until you find a striking p<0.001 result in a subgroup."
These are good points. As I mentioned in the post below, even if these effects that Vaxgen saw are real, it's for sure that their trial wasn't designed to study them, and it's certainly "underpowered," in the clinical trial lingo, when it comes to addressing them. If they're serious about trying to get this approved, they're going to have to show it under more stringent conditions.
And the point about finding subgroups is completely correct. A large enough data set can provide all sorts of oddities if you slice it thin enough - think of the brouhaha a few years ago about power lines and cancer. One study would find a tenuous link between power line exposure and kidney cancer. . .another would show a faint correlation with brain cancer. . .a third would suggest a possible connection with still another disease.
But none of them ever pointed at the same thing, and the effect seen in one study would disappear when you tried to focus in on it with another. Even things with good p-values disappear on you, as Arrowsmith says: the odds are against a given striking correlation being by chance, but if you look for enough of them, the odds are you can find the one that really did occur by chance.
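Arrowsmith's exercise is easy to simulate. Here's a minimal sketch (Python, standard library only, with made-up trial numbers, not Vaxgen's data): build a trial in which the "treatment" does exactly nothing, slice it into twenty arbitrary subgroups, and count how many come out "significant" at p<0.05 anyway.

```python
import math
import random

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (pooled)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    # P(|Z| > z) for a standard normal = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
N, RATE = 5000, 0.08  # trial size and event rate - identical in both arms
subjects = [(random.random() < 0.5,   # assigned to "treatment"?
             random.random() < RATE,  # had the event? (same odds either way)
             random.randrange(20))    # a meaningless subgroup label
            for _ in range(N)]

hits = 0
for g in range(20):
    grp   = [s for s in subjects if s[2] == g]
    treat = [s for s in grp if s[0]]
    ctrl  = [s for s in grp if not s[0]]
    p = two_prop_p(sum(s[1] for s in treat), len(treat),
                   sum(s[1] for s in ctrl), len(ctrl))
    if p < 0.05:
        hits += 1

print(f"spurious 'significant' subgroups out of 20: {hits}")
```

On a typical run you'll see a spurious hit or two, exactly as the 1-in-20 arithmetic predicts; keep inventing subgroup labels long enough and a p<0.001 fluke will eventually turn up too.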
The power-line data was statistical noise. Of course, it got reported in the more fear-mongering outlets as "Studies have shown a link between power lines and many cancers, including kidney, brain. . ."
|HIV Vaccines - One Down?
Today's Vaxgen news of a failed vaccine trial is discouraging, but I don't think that it's quite as bad a blow as some of the press coverage makes it seem. It's certainly bad news, don't get me wrong - bad for everyone at risk for infection, and bad, on a smaller human scale, for the Vaxgen researchers (and shareholders.) But the good news is that this isn't the most advanced sort of vaccine out there, and there was (to be frank about it) not a tremendous amount of hope that this one was going to work at all.
Back in April of last year, I did an article for TechCentral Station on HIV vaccines. In it, I referred to the Vaxgen trial as "the best shot remaining" with the whole-protein approach to vaccination, and I think that view still holds. The rap on these has long been that HIV changes its coat proteins quite easily, so an immune response might not be effective for long. Vaxgen tried to include two variations of the gp120 coat protein from different strains of the virus, but this doesn't seem to have been enough.
There's something a bit odd about the numbers as reported, actually: 5,009 patients went through at least three immunizations (out of seven.) Of those, 3,330 got the vaccine and the other 1,679 got placebo. Looking over Vaxgen's press release, it states that "reduction of infection among the entire sample of volunteers" was 3.8% (p-value = 0.76, n = 5,009.) A p of 0.76 is pretty awful, and indicates that there's no meaningful statistical difference between the treatment group and the control (placebo) group at all.
Vaxgen states that "Trial volunteers received regular counseling to avoid risks that could lead to HIV infection and were advised to assume that they may have received a placebo. . .Vaxgen's preliminary analysis of the trial data indicates that risk behavior was reduced in both the placebo and vaccine groups." Looking at that, you might assume that the entire response was due to the counseling, with no effect of the vaccine at all.
But that's probably not the case. The most intriguing thing to come out of this whole trial was the difference in efficacy between the ethnic groups among the volunteers. Of the 5,009 total, 326 of them were Hispanic, 314 were black, and 184 were Asian or "other." The reduction in infection among the total non-white/non-Hispanic group was 67% versus placebo, and the reduction in the black group was 78%. That's attention-getting, all right, even though the samples are small. The statistics are good, but not great - p-values of 0.01 and 0.02 suggest that there's very likely something real there, but you really catch people's attention with values that are tenfold smaller (although, in all fairness, it can be hard to realize that level of significance in the clinic.)
Vaxgen states that the annual infection rate was about 2.7%. You can see why there are so many people involved; with a small group you'd have no chance to see any effect even if it was pretty robust. With 5,009 starters you'd expect about 395 people to have become infected over the three-year course of the trial. (That's taking it annually, subtracting out each time because you're not going to get counted as being infected twice.) A 3.8% reduction across the entire trial would mean that there were fifteen fewer infections than expected, which isn't too meaningful.
If the infection rate among the "non-white" cohort was the same (which isn't stated, but let's assume that it is,) then of those 498 starters, about 39 would have been infected in the three-year course. A 67% reduction in that group would mean that about 13 people actually got infected, 26 fewer than expected.
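That arithmetic is easy to check. A few lines, using only the figures quoted above, reproduce the back-of-the-envelope numbers:

```python
# Back-of-envelope check of the infection numbers quoted above.
annual_rate, years = 0.027, 3

def expected_infections(n):
    """Expected infections over the trial, subtracting out the
    already-infected each year so no one is counted twice."""
    return n * (1 - (1 - annual_rate) ** years)

whole_trial = expected_infections(5009)
nonwhite    = expected_infections(498)   # black + Asian/"other" volunteers
after_67pct = nonwhite * (1 - 0.67)      # with a 67% reduction in that group

print(round(whole_trial))   # -> 395
print(round(nonwhite))      # -> 39
print(round(after_67pct))   # -> 13
```

The caveat in the text still stands: this assumes the same 2.7% annual rate in every subgroup, which the press release doesn't actually state.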
That alone would have given the whole trial a better than 3.8% reduction. There's obviously some noise in the data, and I don't have all the information I'd need to do this right (like infection rate by group.) But just from this back-of-the-envelope stuff, you can see that basically all the responders were in the black and Asian groups. Vaxgen does mention that "black and Asian volunteers appeared to produce higher levels of antibodies against HIV" compared to the white and Hispanic groups, which supports the hypothesis that these groups really did respond better to the vaccine.
|A Tough One, One of Many
Now, why different ethnic groups should respond so differently to Vaxgen's immunization is a complete mystery. I'm no immunologist, but nothing I know suggests an immediate good explanation for this effect. At the same time, it's certainly not an impossible thing to see, either.
NPR had someone on this evening who worked in Vaxgen's patient enrollment for this trial, and he said that they were trying to get as many people enrolled as quickly as possible. From what I've seen of clinical trials, I have no trouble believing that. They ended up with a lower minority-group enrollment than representative statistics would have given, partly because of suspicions among some groups about drug trials. Now this has come back to haunt them, because it may be that the only people who respond are the ones that are hardest to enroll in a trial.
The NPR speaker talked about how important it was to bridge racial boundaries, and how this was an example of what happens when they aren't bridged. Well. . .that's one lesson, but it's an interesting one to take home from data that seem to show a racial boundary about as clearly as you can show one. Now, growing up where and when I did, I can only agree about the racial bridging part, believe me. But this study is going to raise some interesting philosophical and political questions. Won't many people find something a bit disquieting about a therapy that only works on one ethnic group?
Well, I'm getting into territory that can only bring me hate mail (as Charles Murtaugh mentions in his recent "Apologia" post.) I think, just as in research on intelligence (subject of that link,) male/female differences, criminal and anti-social behavior and other areas, that we're slowly coming up on some data that will make all of us uncomfortable for one reason or another.
Today's NY Times has an article on Novartis, pointing out how they seem to be working to make Basel a one-pharma town. I remember those long-ago pre-1996 days when there were three large companies there. In fact, I had an interview in Basel with Ciba-Geigy back in 1989, and a good meal in their executive dining room, too. Last time I ever had buendnerfleisch. (If you order it from those guys, let me know how it is. And no, I don't get a cut. Now, that Amazon stuff over on the left-hand side, that's different, but last time I checked, they didn't sell food.)
I knew several people at both companies when Ciba-Geigy and Sandoz merged, and for the most part, people here in the US seemed to feel that the final mix ended up being more Sandoz. Some felt that that was a good thing, and some didn't. As for Novartis now looking at buying Roche, I think that it's going to be a slow process, unless some dramatic good or bad news happens to change the minds of the people who really matter: the Swiss families who own the real Roche stock. I'm not sure that there's ever been a hostile takeover in Swiss business history; this wouldn't be a good time to start.
What throws me is that Daniel Vasella, the Novartis CEO, still seems to be worried that they're too small to compete. I've detected a small (but real) shift away from the "must-get-bigger-no-BIGGER" mindset that's been gripping the drug industry for years now. Maybe it's the lack of impressive results from the large mergers that have taken place (including, if you ask me, the one that produced Novartis in the first place.)
But Vasella will always have a place in my heart for a quote of his from about four years ago: "If you don't want to spend the big money and take the big risks," he said, "then you shouldn't be in the pharmaceutical industry." Now, remember, this is a Swiss native saying this. That's a big piece of what you need to know about the drug industry, that it can make a Swiss businessman sound like an oil-rig wildcatter.
I've had several messages from around the pharmaceutical industry (and several notable silences from companies that I know I have readers in.) Overall, it looks like several places do indeed use (and like) the microwave reactors. No word on explosions, so I'm assuming that one-per-week is an exaggeration. They're far from universal, but they aren't unknown.
I suspect the learning curve is sometimes a bit steep on them, though. If you have some sort of standard reaction that you can optimize for the microwave conditions, then you can zip right along. But if you're using it for all sorts of different chemistry in different solvents, then it might turn into a time sink.
These objections apply to any new lab technology, of course. Back in my grad school days, I was about the only person in the group who used the Chromatotron, then a fairly new instrument. This was an early-80s invention that did chromatography, but not down the time-honored column or across the (similarly time-honored) flat plate. This was "radial chromatography," accomplished with a spinning disk. Sample and solvent were pumped in near the center after the thing got up to speed, and the bands of compound eventually went flinging off the edges, to be collected and dripped out the bottom of the apparatus for collection.
One advantage was that you could shine a UV light through the front of the thing and see how your compounds were cleaning up. This gave you a chance to change solvents or conditions on the fly, and to know when your stuff was actually coming out. We had a model with a quartz plate - nice, but expensive. The last time I saw one was about five years ago, and it had a cheaper plastic housing on it and wasn't being used.
The problem was that it took a bit of practice to get the technique right, and many people didn't want to go to the trouble. If you had something precious, you didn't use that stuff to learn the Chromatotron. And if you had something that wasn't so important, then you just didn't bother learning it. The end result was the same; you never used the thing. But every lab that owned one seemed to have one or two people who were really into it (I served in that role,) along with a bunch of people who didn't care (or were actively hostile, in some cases.)
In the end, it never really caught on. I'm sure there are scores of the things gathering dust in chemistry departments around the world, with pockets of fanatical users scattered here and there. Harrison Research, the manufacturer, is probably still in business, but you wouldn't know it from what seems to be their web site. It looks completely fossilized. You can still buy supplies for the thing, but I'll bet if you called and ordered twenty of the rotors they'd back-order you with a vengeance.
Microwave heating has already proven its worth in industrial applications, so it's not going to end up in this situation. But benchtop microwave reactors will, I think, be a specialty item for some time to come. Not that I would mind messing around with one. . .
|Microwaves - Got One?
Another item from the same issue of Nature is on the use of microwave ovens in chemistry. This has been kicking around for years; I remember the odd paper showing up in the mid-1980s. Back then, as the article correctly points out, people just used kitchen microwave units. You'd seal up your flask, toss it in, and blast away on the "reheat" or "cook" settings. Needless to say, that led to a lot of interesting reactions, and a lot of downright intriguing explosions when things superheated. (The sealed flasks were necessary because otherwise things tended to violently boil over.)
This latest bulletin points out that companies are now making microwave reactors specifically for organic synthesis (and it's true, I've seen the ads.) "Chemists working in the pharmaceutical industry were the first to benefit," it says. But I'm not so sure about that. . .I mean, it's true that we have a lot of nice gear lying around. But at least at my company, there are no microwave reactors, nor are there any at a couple of other places I'm fairly familiar with. Here's a quiz for my readership in the industry: does anyone have one of these things, or is this just wishful thinking on the part of some equipment manufacturer?
I'm still not sure that these things are ready for the big time, frankly. One chemist quoted in this article says "we still have about one explosion per week," which opinion must have really made their day over at the (various) microwave companies. I can tell you what one explosion / week would do to your career over where I work - remember the last hot day when you stepped on some old bubble gum on the sidewalk?
The advantages of microwaves do seem to be real, though, even if the technology still has a couple of whoopee cushions in it. One key feature is rapid and selective heating. Different substances absorb microwaves to different degrees (depending on their frequency, of course,) and that means that some solvents heat up a lot faster than others. Mixtures of solids can have one component heat up to its melting point while the other one just sits there; that's something you'd never accomplish with any standard heating. And never underestimate the power of heat to get your chemistry to work. One of my Laws of the Lab is that "A slow reaction at room temperature is Nature's way of telling you to reflux that sucker."
Time for a mercifully brief detour into thermodynamics, known to organic chemists like me as "thermogoddamnics." My law of the lab, restated more technically by the folks who really worked it out in the 1800s, says that the free energy change (delta-G) of a reaction is the enthalpy change (delta-H,) minus a term of temperature times the change in entropy (delta-S.) The free energy change has to be negative for a reaction to happen - you can think of it as rolling downhill. Either of those terms can make the reaction favorable overall: a negative change in H, or a positive change in S (the minus sign in front of that second term means a positive entropy change pushes delta-G down.)
Enthalpy is a measure of the heat absorbed or given off in a reaction; one key component of it is how stable the bonds you're making and breaking are. When you're breaking weak bonds and forming strong ones, delta-H is strongly negative and that means it's on your side. Notable examples of that include things like the thermite reaction, where iron oxide and aluminum powder get converted to aluminum oxide and iron. You'd expect me to say "iron powder" there, which I guess I would if it weren't flying through the air as red-hot globules. There's a lot of spare energy given off in that reaction, since aluminum-oxygen bonds are much stronger than iron-oxygen bonds. (That stability is why you don't find lumps of pure aluminum lying around like you do with gold or silver. And that's also why aluminum was so expensive for so long, because running the process in reverse to make plain aluminum metal out of ore was a high-energy pain in the rear until they figured out ways to contain it.)
(I should mention that the thermite reaction doesn't happen without a push, because there's a certain amount of energy that has to be put into the system to prime the pump and get the reaction started. That's the kinetic aspect, as opposed to the totaling of initial state and final state that thermodynamics is concerned with. But once you get the thermite over the hump, there's no stopping it - the excess heat provides plenty of oomph to make the remaining reactants take off over the energy barrier.)
So if bond energy is on your side, you're way ahead. But the other term can be enlisted, too. Entropy comes in, for example, when you're combining solid reagents and making a liquid, or liquid reagents and generating a gas. Both generate more disordered substances - higher entropy - and will give your reaction a boost all by themselves, even if everything else is equal. (Remember, there's a minus sign in front of the entropy term, so a positive change in entropy is good for giving you an overall negative delta-G.)
If enthalpy and entropy both weigh in on the side of the reaction products, then head for the exits, because it's the Fourth of July. Good examples of this are reactions like the violent decomposition of azides or diazonium salts. These energetic nitrogen compounds fall apart to produce nitrogen gas, which is full of ferociously stable bonds and has a big net gain of entropy to boot. What are cautiously called "energetic" materials generally are things that turn into gaseous products.
The only thing that'll save you in these cases is if there's a high energy hill that the reaction has to climb over before it can slide down to that happy new state of being. There are indeed reactions that should (thermodynamically) be happening all the time, but just have trouble getting there from here over that kinetic barrier. TNT comes to mind - the free energy change when it blows up is pretty impressive, but you have to bang on it a bit to make that happen. Contact explosives, like the hair-raising mercury fulminate, have a similarly big free energy change waiting to happen, but they hardly have an energy barrier to the reaction at all. They just roll out of bed, and down they go.
But even if enthalpy is against you and entropy isn't much help, you can still get things to go by enlisting the last variable, temperature. Just crank up the heat, and you can get most anything to happen. Of course, if you crank it up too much, the problem is that everything starts to happen - for one thing, you've provided the push to get a lot of slow processes over their kinetic barriers. So this doesn't always make things go the way you want: other reactions you hadn't considered might start showing up, and these are usually the ones that lead to dark stuff that you have to scrape out of the bottom of your flask.
Note that turning up the heat isn't going to help you if the entropy change is against you (that is, negative, which makes the minus-T-delta-S term positive.) Since it's temperature times entropy in that second term, all you're going to do is magnify the effect of the unfavorable entropy change if you turn up the "T" part. If both enthalpy and entropy are against you, the reaction just isn't going to happen. It would be like eggs spontaneously unscrambling themselves.
Different combinations of these three variables can lead to some odd effects. I still remember preparing a solution of potassium triiodide when I was an undergraduate. You combine iodine with a solution of potassium iodide, and the flask immediately becomes so cold that frost starts to appear on it. There's a positive enthalpy for you - heat's being pulled in from the surroundings right before your eyes. But the increase in entropy is enough to counteract it, and so the reaction goes spontaneously once all the votes are counted.
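The sign bookkeeping in the last few paragraphs can be played with in a few lines. The numbers below are invented purely for illustration (they're not measured values for thermite, azides, or triiodide), but they show each combination of enthalpy and entropy doing its job in delta-G = delta-H - T*delta-S:

```python
def delta_g(dH_kJ, dS_J, T=298.0):
    """Gibbs free energy change in kJ/mol: dG = dH - T*dS.
    dH in kJ/mol, dS in J/(mol*K), T in kelvin."""
    return dH_kJ - T * (dS_J / 1000.0)

# Illustrative, made-up numbers - not measured thermochemistry:
cases = {
    "exothermic, entropy up (head for the exits)": (-50.0, +100.0),
    "endothermic, entropy up (the cold flask)":    (+10.0, +60.0),
    "endothermic, entropy down (no-go)":           (+10.0, -60.0),
}
for name, (dH, dS) in cases.items():
    dG = delta_g(dH, dS)
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"{name}: dG = {dG:+.1f} kJ/mol -> {verdict}")
```

The middle case is the triiodide situation: delta-H is positive (the flask gets cold) but the entropy term outvotes it at room temperature. Note also that raising T in the third case only makes delta-G more positive, which is the point about heat not rescuing an unfavorable entropy change.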
I can't leave without referring back to microwaves for a minute. For some most interesting pictures of what happens when you microwave the wrong objects, let me refer you here. There are some more ill-advised experiments here, and this page probably has the biggest collection of all.
Edited the next morning to clear up some cavalier treatment of minus signs, clarify kinetic versus thermodynamic factors, and fix some typos.
|The Dose Makes the Poison, And How
There's an extraordinary article in the February 13th issue of Nature (p. 691) from two toxicologists, Edward Calabrese and Linda Baldwin, at UMass-Amherst. It'll be the start of some very interesting arguments. It's about dose-response, which I've written about several times on this site. That's because the subject is never far from a medicinal chemist's mind: we spend part of our time trying to figure out such relationships for a compound's beneficial effects, and the rest trying to understand them for the toxic ones. These authors have been working in this area for years; this Nature publication seems to be their way of trying to reach a wider scientific audience. Wait until they start reaching the nonscientific one. . .but I'm getting ahead of myself.
Experience suggests that every compound will show toxicity eventually, if you just dose it high enough and/or long enough. The big question has been at the other end of the scale: is every compound harmless at a low enough (or short enough) dose? These two statements immediately start arguments about the definitions of "toxic" and "harmless," of course, as they should. But those discussions often end up sidetracking the main question, which is how linear dose-response is, especially at low doses.
It's not an academic question. How much do you have to clean a chemical spill site before it's clean? How much occupational exposure to a given substance should be allowed? How about food additives? Radiation exposure? You can start affecting your gross national product with the way you answer questions like this.
The authors throw down the gauntlet in their third paragraph:
"We believe the predictive models that all regulatory agencies use are based on a fallacy. . .here, we clarify the basis of this fallacy and advocate a more predictive model that will revolutionize public attitudes towards risk."
They should try for something a little more forthright next time; it's not healthy to keep things like this bottled up inside you, y'know. But, reading the rest of the piece, it seems that they have a point. We may well have been thinking about such risks the wrong way, and the implications could be huge.
The only reason I can think that this hasn't gotten more press coverage is that it's not a concept that's easy to express quickly. Well, there's another reason: if these folks are right, then we have much less to worry about from low concentrations of pollutants than we thought. That doesn't make for a catchy teaser for the late local news, unfortunately. (Nor does it immediately suggest new fields for liability suits - ah, but to say such things is to impugn the motives of tort lawyers, and making them look bad is their job. Onward.)
Here's their contention: To start with, there are two main schools of thought on low-dose toxicity. One just takes the dose-response down in a linear fashion: if 10 milligrams of X is bad for you, then 1 milligram is one-tenth as bad for you, and so on down. The other model is also linear, but features a threshold, a "no-effect" level below which nothing seems to happen at all. Biologically, that can be explained by doses that are easily handled by the body's standard detoxification mechanisms (for a discussion of these, see my post on acrylamide from last April 30.)
Traditionally, acutely toxic compounds are treated by the threshold model, and carcinogens are treated by the linear model. Wrong, say Calabrese and Baldwin. They point out that in most cases there are odd effects in the low-dose range, which are often the opposite of what happens at higher doses. The phenomenon is called hormesis.
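The three pictures are easy to sketch as toy functions. These are illustrative curve shapes only - the parameters are invented, not fitted to any real toxicology data:

```python
import math

def linear(dose, k=1.0):
    """No-threshold model: risk scales straight down with dose."""
    return k * dose

def threshold(dose, k=1.0, d0=1.0):
    """Linear above a no-effect level d0, zero risk below it."""
    return max(0.0, k * (dose - d0))

def hormetic(dose, k=1.0, b=2.0, c=1.0):
    """Toy J-shaped curve: a low-dose benefit (negative risk) that
    washes out at higher doses, where ordinary toxicity takes over."""
    return k * dose - b * dose * math.exp(-dose / c)

low = 0.1
# At a low dose the three models disagree about the very sign of
# the effect: linear(low) > 0 (a small harm), threshold(low) == 0
# (no effect at all), hormetic(low) < 0 (a small benefit).
```

At high doses all three agree that more is worse; the whole argument is about which shape is right down near zero.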
This means what you think it means, all right. Low levels of nasty stuff like cadmium and dioxin actually reduce tumors in some animal models. Low doses of X-rays actually increase the life span of rodents, and small amounts of lead can stimulate the growth of plants. To pick an example that's been in the news, small amounts of alcohol intake seem to be beneficial to humans, while larger amounts are clearly not.
There's not a single mechanism underlying all these effects, of course - just a general principle that complex biological systems respond to things in complex ways. In some cases, a small amount of stress on the system causes beneficial repair effects that outweigh any damage from the original agent, for example. They refer to further literature on the molecular mechanisms for some of the effects, but I can think of a few off the top of my head.
Nuclear receptors, for example, which regulate the transcription of many genes, have notoriously complex signaling. I can easily picture a situation where small amounts of a compound could cause a nuclear receptor pathway to go off in a different direction than larger amounts would. Another example is the behavior of CNS drugs, which are famous for having "U-shaped" dose-response curves. Typically, they do nothing at low doses, have the desired effect at medium ones, and go back to doing nothing (or causing active harm) at slightly higher ones. This is probably due, at least in part, to complex crosstalk between various receptor systems, altering signaling as they respond to different amounts of stimulation.
The impact? The authors state:
"(The hormetic perspective) challenges the belief and use of low-dose linearity in estimating cancer risks, and emphasizes that there are thresholds for carcinogens. The economic implications of this conclusion are substantial. . .(it) also turns upside down the strategies and tactics used for risk communication of toxic substances for the public. For the past 30 years, regulatory and/or public health agencies in many countries have "educated" - and in the process frightened - the public to expect that there may be no safe exposure level to many toxic agents (but) changing a dominant risk-communication paradigm is not as simple as flicking on a light switch."
No, indeed. This is going to take some doing. Try convincing some of the people at, say, Greenpeace or the Natural Resources Defense Council. Over that way, even advocating the threshold model is often seen as pernicious; the usual assumption is that there is no safe amount of any man-made contaminant. And Calabrese and Baldwin are now saying that even the threshold model is too pessimistic, which is going to make some folks just completely turn purple. As they say, this idea
. . .would certainly be resisted by many regulatory and public-health agencies as an industrial-influenced, self-serving scheme. . .
"Resisted" is one way of putting it. They can expect the treatment that Bjorn Lomborg (The Skeptical Environmentalist) has received, and worse. It'll be interesting to watch - and if they have the facts on their side (which they very well may,) then I wish them the best of luck. Which they'll need.
(You can find a concise discussion of all this from Calabrese himself at this U Mass - Amherst site. A PDF file of a more detailed 2001 article that these same authors published in Trends in Pharmacological Sciences is here.)
|It's Just This Chromium Switch Here. . .
I've been wrestling with a complicated, rather expensive piece of equipment at work the last few days. I know I won't get much sympathy from physics types, who have to build all their own stuff for the most part. How can I complain, they're asking, about shiny boxes in matching color schemes, set up by a factory representative, complete with volumes of glossy documentation? But believe me, even the commercial stuff will waste vast stretches of your time.
Nope, no matter how smooth the finish on the cabinets, scientific equipment is the same the whole world over. The software glitches, the hardware crashes. Connections loosen if you glare at them, and how can you not? The pumps get air bubbles in them, if they're supposed to be pumping fluid, or they get fluid in them if they're supposed to be pumping air. Mysterious substances start dripping and spraying from what are supposed to be smooth expanses of solid metal - if it happened in a religious setting, there would be pilgrims lining up within the week. And at the other end, the detectors let everything slip through their equivalent of fingers, and when you try to fix them, they peg out and start reacting hysterically to Brownian motion and thought waves. The printer issues blank sheets of paper in response to requests to record this precious data, when it isn't sending the whole print job to a sales office in Kuala Lumpur. The vacuum lines are often the only things about the entire apparatus that don't suck.
But I'll be back at the thing tomorrow. Only a few dozen more variables to check out. Piece of cake. Anchovy cake, with cinnamon-garlic icing, by the look of it.
|Is It Time to Buy the Stuff?
The CEO of Transkaryotics announced this week that he's stepping down. I've written several times about TKTX; they're an interesting company in many ways. But they've had some rough going recently, with the hammering they've taken at the FDA over their Fabry's disease treatment, and the protracted lawsuit with Amgen. Even if they eventually win that one (a big if,) it might turn out to be a case of "a few more victories like that and we'll be completely undone."
Biotech in general has been an awful place to have your money for some months now. I should know; I have some stock positions that'd frizz your hair. But that makes me think that maybe now is the time to buy into the area - after all, the real time to buy some group of stocks is when nobody wants them. Now, if nobody wants them because the whole industry is going down the tubes - the proverbial buggy-whip companies - that's one thing. But biotechnology is far from going down any tubes that I can see. If anything, it's getting more powerful and capable every year.
And the demand for medical/pharmaceutical technology isn't going to go away, either. In fact, as the wealthier nations of the world age (which, because of their low birth rates, they're all doing,) the demand should do nothing but increase. Long term, the entire medical field looks like the place to be.
Admittedly, there are some mudholes out there. The biggest one has been summed up by one of my correspondents in the industry: "We're in danger of having two kinds of customers - those who can't pay, and those who won't." The first group are clearly populations like those in Africa suffering from HIV, and the same goes for many tropical diseases as well. (That's why I keep beating the drum for economic progress - I'd like to see people in poor countries have a better standard of living just on principle, but I'd also like to see it happen so companies (like the one I work for) can find new markets. Everybody wins.)
In the second group are countries with strict pharmaceutical price controls. Canada, much of Europe - you all know who you are. It's particularly irritating to see countries erect these pricing structures when they have to import all their drugs to start with - you can't even wave the ragged flag of protectionism to cover that behavior. It's greed, dressed up as a war against greed. And, hey, guys - if you've decided that the "proper" price of Drug X is 30 dollars instead of 70, what's to keep you from deciding - come next election - that it should be 20? 10? Think of all the votes you could get! And you're not even hurting an industry from your own country - there's no downside!
So, it's true, there are trends out there that could scupper all my medical optimism. But although I can't rule it out, I think that the promise of what's coming is going to be enough to keep us from doing that to ourselves. It'll be a bumpy ride, but we'll get there.
|Serves Me Right
No sooner do I get through talking about how I don't use quantum mechanics much than up pops an interesting chemistry application. I know, all of chemistry is an application of quantum mechanics, but this is a bit more direct. It's a paper in the latest issue of Science (Feb 7, p. 867) on the rearrangement of a substituted four-membered ring, a cyclobutane. There's a carbene right off the ring - in order not to shed my non-chemical readership immediately, I'll postpone details of what a carbene is for now. It's sufficient to say that they're very reactive species - basically a carbon that's two bonds short of its usual four - and they don't last too long.
But what if you have them at 10 degrees above absolute zero? A lot of things will last under those conditions, frozen in a lump of solid argon, and carbenes are no exception. Normally, in a system like this, the four-membered ring breaks and the molecule rearranges to a five-membered species - there's nothing odd about that in this new work. What's odd is that they find that the reaction moves right along at these insanely low temperatures, when it really shouldn't. Looking at the energetics of the reaction, it should be stopped in its tracks down there.
For the chemists, there's about a 6 kcal/mole energy barrier - doesn't sound like too much, but try climbing that at 10 kelvin! No, there really should be no reaction at all, but there is. The answer is quantum mechanical tunneling. That happens because atomic-sized particles aren't particles, of course. And they're not waves, either. They're things that act a bit like one and a bit like the other, something totally outside our macroscopic experience. (The famous two-slit experiment, which makes you think that things can switch back and forth between the two, probably misleads a lot of people. There's no switching involved, I'd say, because they were neither one to start with.)
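Just how stopped is "stopped in its tracks"? The classical Boltzmann factor, exp(−Ea/RT), makes the point. A back-of-the-envelope sketch - the 6 kcal/mol barrier is from the paper, the rest is just arithmetic:

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
Ea = 6.0       # barrier height, kcal/mol

def boltzmann(T):
    """Classical fraction of molecules with enough thermal
    energy to get over the barrier at temperature T (kelvin)."""
    return math.exp(-Ea / (R * T))

room = boltzmann(298.0)   # a few parts in 10^5: slow, but it happens
cold = boltzmann(10.0)    # around 10^-131: classically, never
```

That factor of roughly 10^-131 at 10 K is why, without tunneling, the rearrangement shouldn't happen at all - and why the computed tunneling acceleration comes out as such a preposterous number.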
And whatever we want to call them, there's a small (but real) chance that they can just end up on the other side of a barrier, just by chance. The smaller the entity, and the smaller the distance that we're talking about, the better the chances that it'll happen. Electrons do it all the time; many electronic devices depend on it (and others take great pains to guard against it.) Hydrogen atoms have a tougher time, but it's still pretty common. In this case, it's an entire carbon atom. That's pushing it a bit, but chemists (like me) need to realize that it's nowhere near impossible. And in some cases, it may be a key part of a reaction.
It sure is here. The calculation is that at those temperatures, tunneling accelerates the rate of this rearrangement by an unfathomable 10 to the 152nd power. That's a staggering number - consider that the number of electrons in the universe is thought to be on the order of 10 to the 80th. Keep in mind that you'd never be able to find a piddling little number like that in a pile of 10 to the 150-something. The mind boggles, audibly - at least mine does. Or maybe that's the oil burner downstairs again. I digress.
The commentary on this paper in the front of the issue makes a good point. Chemists generally don't think of tunneling except at very low temperatures, because that's when the usual thermal motion is damped down and we can see the quantum stuff happening. But that doesn't mean that tunneling is unimportant at room temperature - in fact, it's still going along at virtually the same rate, making the same contribution it always does. In some systems, it may even be enhanced. It's pointed out that there are probably molecules that can only get into a position to do serious tunneling when they're warmed up, and the atoms can flop around to take the proper positions.
And this, right after talking about how molecules that curl back on themselves give me the willies, too. I should be more careful about what I complain about!
|Something New Under the Sun
Well, under the fluorescent lights, anyway. There's a very interesting experiment reported in the January 29th issue of the Journal of the American Chemical Society (my biologist wife took advantage of a pause in my speech when I announced this, to ask if that was the news all by itself.)
Peter Schultz's group at Scripps has been working for some years now on engineering protein synthesis, appropriating the mechanisms of translation for their own uses. (For those who don't brood about this stuff every day, recall that DNA gets read off into strands of messenger RNA. These get fed through the ribosome, which reads off the RNA message in groups of three letters at a time (codons.) These code for specific amino acids, brought to the ribosome by transfer RNAs, which are spliced together at insane speeds to yield the desired protein, which comes spooling out the back. The ribosome is an amazing device indeed.)
Schultz's group has been investigating the use of different amino acids than the standard twenty. Of course, there are some others that crop up now and then - specific odd organisms use some rare amino acids for their own purposes. They generally use one of the lesser codons, the so-called "amber stop codon," as the code for these outliers, since that one can be appropriated without messing up anything else. In a previous paper, this group managed to do the same thing, developing a bacterium that was engineered to use an unnatural phenylalanine derivative. If you grew them on a medium containing this weird amino acid, they put it into proteins whenever the amber codon came up.
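The amber-codon trick is easy to picture as code. This is a toy decoder over a tiny slice of the genetic code - the codon assignments shown are real, but this is nowhere near the full table, and "Uaa" is just my placeholder label for the unnatural amino acid:

```python
# A tiny, illustrative subset of the standard genetic code.
# UAG is the "amber" stop codon that gets hijacked below.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP",
}

def translate(mrna, table):
    """Read an mRNA string three letters (one codon) at a time,
    stopping at the first stop codon, like a toy ribosome."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = table.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

message = "AUGUUUUAGGGC"
normal = translate(message, CODON_TABLE)        # halts at the amber codon
# Reassign amber (UAG) to the unnatural amino acid and it reads through:
amber_suppressed = dict(CODON_TABLE, UAG="Uaa")
engineered = translate(message, amber_suppressed)
```

The point of using amber is visible here: only sequences containing UAG behave any differently, so the rest of the proteome is left alone.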
The latest work takes this a step further. They picked another phenylalanine derivative, the para-amino compound, which is found in nature (but not used in proteins.) As in the previous work, they set out to find some sort of mutated transfer RNA that would specifically pick up this amino acid and deliver it to the ribosome (which takes whatever it's given, for the most part.) After screening a number of deliberate mutants, they isolated an enzyme that would load the amino acid onto the desired tRNA.
Now the new wrinkle: this time, they not only set things up where the organism would use this new amino acid in protein synthesis - they gave it the ability to make the stuff on its own. A couple of Streptomyces species produce para-aminophenylalanine for other purposes, and the enzyme pathway they use to do it has been worked out. (It's from the same intermediate that's used in the synthetic pathway for plain phenylalanine - some of it is diverted down another path.) The genes for these enzymes were inserted into E. coli along with the gene for the mutated transfer RNA, and the whole machinery was now in place.
This, then, is a bacterium that has been engineered to make and use 21 amino acids rather than the 20 that most life on Earth gets by with. This is the first time such a thing has been done, and it's quite a step. There were several things that could have gone wrong: hijacking the amber codon for this use could have been problematic (although they'd gotten away with it before.) Or the organism could have been harmed by having some of its usual stock of phenylalanine diverted to make this new amino acid (but they seem to have enough overhead to deal with it.)
So, what good is having para-aminophenylalanine? No one knows, but we may soon find out. The plan is to subject this new strain to various sorts of stress and selection pressure, in the hopes that it'll find a use for the new amino acid and the new proteins that it'll produce. That'll be very interesting indeed, but it won't be easy. For one thing, there has to be an amber codon in the DNA for the new residue to even be dealt in (they're getting around that by shotgunning it across the bacterial genome, seeding it with chances for the new amino acid to be used.) And finding the new proteins, even if they're produced, won't be trivial.
We could be closer to answering some pretty difficult questions: why twenty amino acids? Why these twenty? Are they the best that life could have used? Would an organism that uses more of them, with a wider range of properties, do better? If so, why haven't we seen more of them doing that already? It could be that as life developed, we ended up with the amino acid choices we did by a first-to-market effect. Perhaps the current regime is the protein equivalent of a Microsoft operating system: not the best, just the most.
The next installment will talk about some plans that go well beyond even these, to the preparation of some really alien life forms. Stay tuned.
|Update - The Shapes of Large Molecules
In response to my recent statements about how the structures of proteins don't exactly leap into one's memory, I received the following from Beth Skwarecki over at Loxosceles:
. . .the *exact* structure maybe not, but the simplified diagrams are pretty memorable. I can tell you without looking that TBP looks like a saddle, reverse transcriptase looks like a hand, RNA polymerase looks a little like a lobster claw, and (my favorite, from a very recent lecture) gonadotropins like luteinizing hormone look like two Gumby dolls dirty-dancing!
Y'know, she may have a point. Whether she has a point about those exact proteins, well, I can't tell you. But I will say that she sounds a lot like some molecular modelers that I've known. Note that the structures are complex enough so that they have to be related to fairly complicated objects (like the two Gumby dolls, which sounds a bit more complicated than usual.) Whereas small molecules, of the kind that pay my salary (well, they don't write the checks themselves; you know what I mean) aren't usually spoken of in those terms.
They're usually spoken of in terms of each other: "That looks like morphine," or "That bottom part is like the left-hand side of whatevermycin." Or we stitch the structures together, saying things like "It has a quinoline, then a chain off the 2-position with a piperazine in it, and that's got a 3-chlorophenyl with a. . ." Of course, it's easier just to draw the structure at that point, which is something I don't think the protein folks can do. Can anyone draw reverse transcriptase (just the ribbon diagram) on a board, freehand, so that someone walking down the hall could recognize it?
|Back in the Fold
This week, there's an encouraging report of an entirely new therapeutic class of molecules. And they deal with a disease mechanism that's almost never been successfully attacked, so this is good news on several levels.
I mentioned the other day that when large molecules (like proteins) start to fold back on themselves, organic chemists like me start to head for the exits. That's when things start to become quite tricky, and it's when our usual tools begin to stop working. The problem is that we're used to thinking in very detailed terms about our molecules. We like to know (or at least believe that we know) where everything is and what it's doing. This molecule spends most of its time looking like this shape, and this reagent will approach from over here, hit this exact part and convert it to a group like this. . .we try to reach in, turn the individual screws and adjust things piece by piece. But proteins are so huge, and their dynamic behavior is so complex, that we have a hard time getting traction. Not even the most enthusiastic synthetic chemist can pretend to know where everything is or what it's up to.

But we're getting better at it, although protein behavior can still be mysterious. X-ray crystallography has long been the standard for protein structure - but first, crystallize your protein, a process which sometimes seems to involve black robes, goats, and full moons. While there's no other technique that can give you that level of detail, the limitation has been that the protein is sometimes locked into an unnatural shape when it crystallizes out into a solid phase. Beyond that, we're really interested in protein-protein interactions, too (and whether we can get small molecules to mess with them,) and getting those sorts of mixtures to crystallize is even more difficult.
Help is coming over the horizon, though. NMR techniques can (sometimes) let us treat proteins almost the way that we treat small molecules. If you can get a meaningful NMR, you can do organic chemistry - we're lost without it. On another front, attaching fluorescent probes to proteins has long been a major tool in biology. That's getting more sophisticated all the time as well, giving us real information about what interacts with what. All these techniques work in solution, under much more physiological conditions. And they're dynamic, too, in contrast to the frozen snapshot of X-ray data: we can actually watch things moving around, coming on and off.
It's become increasingly clear that there are many diseases that are the result of misfolded proteins. These diseases were known long before the molecular basis for them was understood, especially the amyloidoses, of which there are scores. Some classes of proteins seem to be more prone to this problem than others - as they wind up into their three-dimensional forms, some shapes that they can assume represent alternate low-energy forms, holes that they can't get back out of. The infamous examples are the spongiform encephalopathies, like kuru, scrapie, BSE and Creutzfeldt-Jakob disease. (The Nobel-worthy insight in this area was Prusiner's idea that in some cases, small abnormally folded proteins could actually catalyze the misfolding of others like them, spreading and magnifying the disease.)
One group of misfolding diseases comes from trouble with a protein called transthyretin. There are scores of mutations that have been shown to lead to similar abnormal states, which tells you that the protein's normal balance is rather delicate. It's normally a tetramer, four identical subunits, and all the known mutations that lead to misfolding actually start by causing this tetramer to fall apart. On its own, the individual subunit is prone to misfolding and aggregation.
Some small molecules for these diseases have been discovered in the last few years, which work by shifting the aggregated form back toward the native one. This new work, from a team at Scripps, identified compounds that bind to the tetrameric form and make it much harder for it to dissociate. In theory, this should stop the disease before it even starts. This effect had been seen in some double-mutated forms of the protein, which have one change that usually destabilizes, and one that cancels the other one out. They had to do a lot of careful protein assay work to characterize this, and it's safe to say that by the end of it they knew the transthyretin countryside as well as medicinal chemists know their (much smaller) systems.
This is the first successful attempt to change the "energy landscape" of protein folding with a small molecule. What's even more interesting is that the Scripps group were careful to do most of their work with known drug molecules, so that anything they found had already been through safety testing and could go into human trials that much faster. In fact, the trials are set to start within the next few months, which is blazing speed by the standards of drug discovery.
Now that we have proof of concept, look for other researchers to get on the hunt for other misfolding inhibitors. There's sure to be drug company interest as well. It sure does look easier once someone's shown that it's possible. . .
|How Dare They Buy Low?
Yesterday's Wall St. Journal had an update on Merck's attempt to buy its Japanese partner, Banyu. Japan's a very big market for pharmaceuticals, and all the major companies want to have a presence - Merck has owned 51% of Banyu for a long time now. There are several deals like this between Japanese companies and outside firms - Abbott and Takeda, Roche and Chugai, etc. You can either partner with an existing firm, or run a Japanese branch of your own.
Neither strategy is clearly a winner: the former suffers from the usual partnership woes, exacerbated by distance and language difficulties. The latter runs into trouble because the best Japanese hires available are generally working for the top Japanese companies. (It's just as hierarchical as the university system, or the financial industry.) So the foreign labs end up low on the food chain, and don't get the sorts of employees that they're accustomed to back in the US or Europe. (Note: I'm not saying that these branches are incompetent; some of them do pretty good work. But they could be better than they are.)
It'll be interesting to see how that system reacts to a Japanese firm being completely bought out by a foreign company. Merck's attempt to find out is being frustrated, says the Journal, by unexpectedly activist shareholders - it's unexpected because they've usually just taken whatever the companies dish out, no matter how brutal or implausible. But Japan's long attack of economic malaria seems to have pushed a few folks over the edge, and not before time.
(On a personal note, I wrote about the Japanese economy back in February 2001, which was my first post that ever got "Instapundited." I then engaged in a canny bait-and-switch and started writing about things like patent infringements and lab explosions, which has led to the fame and fortune I now enjoy. At any rate, my points about the Japanese economy still apply: it's still awful, and they still haven't done a damn thing to fix it.)
Banyu's sturdy shareholders have watched their stock lose about 60% of its value from its 2000 levels, and they got a nasty surprise in the company's most recent financials. There was an unexpected payout of 2.8 billion yen to Merck, which item Banyu described helpfully as a "company secret." Banyu was only supposed to earn 8.8 billion yen during the period, so this really hurt. Merck says only that it was a payment for a "difference of opinion" over royalties, which must have made for some lively e-mails and plenty of frequent-flyer miles.
The Journal has a wonderful quote from a Morgan Stanley analyst in Tokyo: "A lot of people feel quite frustrated that Merck has decided to do the deal when the stock was at historical lows." Well, a lot of people shouldn't be surprised that Merck has decided to buy when the goods are marked down. Believe me, Merck shareholders would feel a bit frustrated if they tried to do the deal at Banyu's historical highs. (Come to think of it, we don't hear a lot about those, do we? Stocks with high prices are always going up even more, of course, so few people talk as if they've reached some unscalable peak. That's as opposed to lows - I mean, those are obviously aberrations, right? Worth noting, in a historical way, because they won't ever be down that far again? Investor psychology at work.)
So Merck is being jawboned to raise its offering price, but they say they're staying where they are. The tender deadline is March 6, and they have to go from 51% to 80% of Banyu's shares. At that point, the stock gets delisted from the TSE. Some Banyu shareholders are apparently playing chicken, waiting until the last minute to see if Merck will go up on their offer. Personally, I wouldn't bet on it - and I wouldn't want to be carrying a bunch of hard-to-unload stock on March 7, either. I can't imagine how badly you can get ripped off on delisted Japanese shares, considering how the Tokyo market treats people even under normal circumstances. They'll make Merck look like a bunch of philanthropists. Take the money and run.
|In the Details
Charles Murtaugh discusses Texas Tech's Michael Dini, the professor who's refusing to recommend students unless they believe in the theory of evolution. I've stayed away from this story, but my views match up very well with the ones in his post. I think it's scientifically unwise to refuse to credit evolution; there's just too much evidence. But in many medical fields, your results won't be affected by whether you believe in it or not. Medpundit is right: at the applied level, medicine isn't a science. (It's a lot of other difficult stuff, though, don't get me wrong.)
Now, as you get closer to the lab bench, things start to shift on you. In some biological fields, if you don't believe in evolution you just won't be able to explain your results well, or fit them into a broader framework. In which case, you're shooting yourself in the foot. If you've found something really interesting that calls out for the evolutionary tie-in you're refusing to give it, well, then someone else will take the ball and run with it. And when you run with the ball, you get to run away with a lot of the credit, too. A word to the wise - but if you want to tie your hands behind your back to do your research, feel free. Just don't expect me to hire you.
But evolutionary theory only impinges that directly in some cases. You can work in cell biology, for example, without asking where cells and their mechanisms come from, or even while believing that they were all created ex nihilo. These habits of mind might or might not bode well for your career as an outstanding cell biologist, but it can certainly be done.
In many cases, as Charles points out:
. . . the level on which one needs to appeal to evolution is so extremely deep that it is somewhat like a chemist appealing to quantum mechanics. I hope Derek Lowe will correct me if I'm wrong, but I suspect that in some fields, a chemist can get pretty far without having to worry about quantum mechanics.
Yep, it's true. I've got a lot more quantum in my head than someone picked randomly out of the phone book, but by no definition am I a quantum mechanics guy. I can count the times I've actually worried about a quantum-mechanical point on one hand (if we exclude when I was taking the course in grad school, that is; I did my share of worrying then!) In fact, I was quoting a line at work the other day, to the effect that the best way to make an organic chemist immediately turn the page of a journal article is to use the phrase "consider the Hamiltonian." No, thank you.
But, still, it's true that we all think in terms of a quantum world. I don't consider the darn Hamiltonians much, but delocalized electrons, molecular orbitals - these things are part of the mental furniture of any working organic chemist. (And if you're a molecular modeler, then they're your livelihood.) I don't often have to invoke these things - most of the time we can just push electron pairs around and be happy - but I'm sure my thought processes would be different if they weren't there.
The other day, for example, I was doing some work involving fluorescence, which is not a process that's easily explained or pictured without some recourse to elementary quantum principles. You could just think of it as a black box - shine this kind of light in, and this kind of light comes back out. But as I alluded to in the biological example above, black-boxing things all the time is probably not a mental habit that a good scientist should want to cultivate.
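For readers who'd like a peek inside that particular black box: the one quantum fact you really need is that a photon's energy is inversely proportional to its wavelength (E = hc/λ), and that a fluorescing molecule gives back a photon of lower energy - longer wavelength - than the one it absorbed (the Stokes shift). Here's a minimal sketch of that arithmetic; the 350 nm absorption and 450 nm emission figures are purely illustrative, not from any particular dye.

```python
# Photon energy E = h*c/lambda. Fluorescence emits at a longer
# wavelength (lower energy) than the light absorbed: the Stokes shift.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

absorbed = photon_energy_ev(350.0)  # hypothetical UV absorption band
emitted = photon_energy_ev(450.0)   # hypothetical blue emission band

# The emitted photon carries less energy; the difference has been
# shed inside the molecule (vibrational relaxation) before emission.
assert emitted < absorbed
```

Shine 350 nm light in, get 450 nm light out, and the missing electron-volt or so is the quantum bookkeeping that the black-box view skips over.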
|Why is the Shuttle Wreckage Toxic?
NASA has been warning people not to mess with debris from Columbia, citing potential toxic residue. This refers to the propellant used by various attitude systems (the Reaction Control System, or RCS in NASA-speak, and the Orbital Maneuvering Subsystem, OMS.) It's actually two substances, a bipropellant, since that can give you more power per weight than the available monopropellants.
The shuttle uses nitrogen tetroxide and monomethyl hydrazine, which is a standard recipe that's been around for many years. This brew has several advantages - both substances are liquids at room temperature, for starters, although the N2O4 is pretty low-boiling (about 70 degrees F, 21 C) and needs some special handling for that reason. And the mixture is hypergolic, which just means that the two will ignite spontaneously - and how - without the need for a separate ignition system (which from the engineer's point of view is just another system waiting to fail.)
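(For anyone checking the units on that boiling point, here's the standard Fahrenheit-to-Celsius conversion applied to the rough 70 F figure quoted above - a quick sanity check, nothing more.)

```python
def f_to_c(deg_f):
    """Convert a temperature from degrees Fahrenheit to Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# N2O4's boiling point of roughly 70 F works out to about 21 C,
# uncomfortably close to room temperature - hence the special handling.
print(round(f_to_c(70.0)))  # -> 21
```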
A well-known disadvantage of liquid-fuel systems in general is the complexity of the hardware needed to use them. I'm not really an aerospace expert - although I can pass for one in mixed company - so I'll defer to those that are for more details. It's a large subject, with many subtle points having to do with flow characteristics, pump and turbine engineering, and so on.
Of course, you can't have everything. Any substance that's energetic enough to be used in rocketry is going to be dangerous to handle. In general, if you go through the chemical safety literature and look for the mixtures marked "avoid at all cost," you're getting into the territory that rocketry buffs inhabit. There's the fire-and-explosion hazard, and the exposure hazard as well: both halves of this mix are quite toxic. (I wouldn't want to be exposed to either, but if I had to pick, I'd avoid the N2O4.) I'd think, though, that exposed surfaces would be pretty free of contamination by now, since the stuff would have evaporated. But there's no telling what could be left inside pipes and the like, so I think that the NASA advice is certainly sound overall.
|Pharma Patents, Pharma Prices
There's a very interesting essay ("Animal Pharm") by Thomas Fuller over at his site, covering the topic of pharmaceutical patents and how they're viewed.
An excerpt: I believe that one reason we find it easy to swallow the idea of forced licensing of patents is that they constitute ‘merely’ intellectual property. As I mentioned, we are not taking their drugs or their factories, just their right to exclusively make and sell the drugs, a right that is in any event ephemeral, expiring just a handful of years after the product is invented. Is this important? Yes. I consider this a mistake that could prove fatal.
Welcome to the club! It's not a popular place to be, sometimes, out here on this particular limb. But I think it's the right one.
I've been thinking about the implications of Saturday's disaster, and I have to say that I'm lining up with Rand Simberg on this. The Shuttle program is broken, and has been broken for many years now. (Jay Manifold has been on the same wavelength for a long time.) See also Gregg Easterbrook's piece in Time magazine (but see also the critique of it over at Pathetic Earthlings.) Easterbrook's been a critic of NASA's approach for a long time, starting with his prescient Washington Monthly story in the mid-1980s about what would happen during a shuttle launch malfunction.
Enough linking; on to the thinking. As I said, I think that Rand is on the right track. As has been said by others, what if the airline industry ran like the space industry? Then you'd get on a New York to LA flight, on a full-size 747 carrying 3 to 7 people. You'd fly to LA, get out, and set the plane on fire. Your ticket would cover the cost of a new plane for the trip back.
Think of the launch vehicles that look so much safer than the Shuttle at the moment, the expendable ones like Ariane (well, not Ariane 5, but you know what I mean), Titan, etc. It's instructive to look at the amount of carefully constructed hardware that's tossed into the sea or left to burn up on re-entry: most of it. Thus, "burning the plane." Now, the Shuttle is supposed to be reusable, but that requires a very literal-minded definition of "reusable." When you consider the amount of handwork that's required for every refurbishment between every flight, you can see why its launch costs are as huge as they are.
So what do we do about it? Well, I spent part of last week going on about technological optimism, so I say we invent our way out of this hole. Maybe this terrible event will be the spur for getting out of the Shuttle rut, which I fear has become a giant exercise in inertia. Of course, developing a truly reusable launch vehicle (which probably means single-stage-to-orbit) is hard. But my mind revolts at the suggestion that it's impossible.
People have tried to get such vehicles to work. But have they tried hard enough (or had the backing to do so?) I think that by continuing the space program as we have, we've just been holding back any real progress in that area. A true effort to get cheap access to space is, by definition, Operation "Ditch the Shuttle" - and, as a corollary, Operation "NASA Has It Wrong." There's a lot of resistance to those positions.
It pains me to write all this down, both because of what's just happened and because I've been a space enthusiast since I was about four years old. I made models of the solar system in kindergarten, and I'm of the generation that stayed up past their bedtime in a room with the National Geographic moon map taped up on the wall, waiting to see Neil Armstrong step out of the lunar module. I celebrate every successful new mission, manned or unmanned: just the other day, when my 4-year-old was asking me questions about Saturn, I was deeply happy to tell him that we had a spaceship headed out there right now.
I'm about as sympathetic to space exploration as it's possible to be. And I think we've been doing things wrong.