


One of the other incorrect lessons that people might take away from the press accounts of the antipsychotic trial is that drug companies have been comparing their medications to placebo too often. And why would you do that unless you were scared that you wouldn't be better than the competition? What's with these people, anyway?
Well, there are fields where placebo-controlled trials take place, and fields where they don't. It depends on the disease and options available to treat it. Cancer trials, for example, are very rarely run against placebo, unless there's just nothing left to do. (You'll see this with drugs that are meant for late-stage patients or those who have failed existing therapies.)
Antipsychotics are generally compared to an existing standard of care, because it's unethical to leave someone untreated when they've already been diagnosed as schizophrenic. The problem that the CATIE trial uncovered, though, is that many trials are run against haloperidol (known as Haldol). That's a typical older drug, and companies have been showing that their newer drugs have better efficacy and fewer side effects than it does. (It's known to have significant problems with tardive dyskinesia, among other things.)
But now we know that perphenazine is a better standard among the older drugs, mostly because of fewer side effects. I don't think that anyone is going to be able to run a haloperidol-controlled trial for a new antipsychotic. Now you're going to have to beat perphenazine, which will be a higher standard. The newer drugs have been able to get rid of the so-called extrapyramidal side effects, like tardive dyskinesia, but they haven't been able to increase their efficacy that much. That's not going to be enough any more - the ante has gone up in the field of schizophrenia therapy.
Now, if you think that your new drug is really going to cream the competition, running a trial against them is a smart move. There's no better way to persuade people to prescribe your drug than to show that it's clearly better than what's out there now. Another time you see head-to-head trials is when a company is making a run at the leader in a given category. The various attempts to out-do Lipitor are good examples, not that any of them have succeeded. But there really wasn't a clear leader in the antipsychotic area, and thus no real target to try to knock down. I'd bet that the companies involved strongly suspected that their own drugs weren't head and shoulders above everything else, either. This is the perfect situation for an outside agency like the NIH to do a comparison study, because if you're waiting for the companies involved to do it, you're going to have a pretty long wait.


You've probably seen the headlines about the recent NIH-sponsored "CATIE" study comparing five anti-psychotic medications. The result, which is what made the whole thing newsworthy to the popular press, was that it was hard to distinguish among them, with the oldest generic working as well as (or better than) the newer drugs.
But I think that people outside of the medical world are going to learn the wrong lessons from all this. Does this study mean that everyone taking anti-schizophrenia medication should switch to the old generic? Not at all, although if they need to try a different medication, they should definitely consider it. Does it mean that all these newer drugs are unnecessary? No, again. There's an awful lot of patient-to-patient variation in central nervous system drugs. Says the study's principal investigator, Dr. Jeffrey Lieberman of Columbia:
"There is considerable variation in the therapeutic and side effects of antipsychotic medications. Doctors and patients must carefully evaluate the tradeoffs between efficacy and side effects in choosing an appropriate medication. What works for one person may not work for another."
But I think that this study does make clear that the newer antipsychotics aren't as good as they should be. The field is a tough one, as I know from personal experience, having played a small role in helping a company spend I've-no-idea-how-many millions of dollars to find out that a potential schizophrenia medication didn't do squat. There's a lot of room for improvement, and we haven't been able to improve things very much.
It's important to emphasize that this was a surprising result. No one expected the side effect profiles of the four "second-generation" drugs to be so similar to the older one (perphenazine), and so similar to each other. That's one reason that a study like this is so valuable - huge clinical trials that tell you something that you already knew aren't too wonderful. I think that this is an excellent thing for the NIH to be doing. Tomorrow: what this says about head-to-head trials in general.


There's a lot of metabolic disease news this week from the FDA. We'll get to the inhaled insulin decision next week, but I thought I'd try to catch the next one before it happens. On Friday they're reviewing the first PPAR alpha-gamma ligand to make it to the regulatory approval stage, Bristol-Myers Squibb's unmelodious "Pargluva" (muraglitazar), which sounds more like a disease than a drug. This is a therapeutic class that everyone had great hopes for a few years ago, with most of the big players competing at full speed. In theory, this combination of activities should help with insulin sensitivity, cholesterol, and triglycerides all at the same time, which you'd think would be just what an overweight type II diabetic patient (and there are many) might need.
But development of these compounds has been a nightmare, with bad and unexpected toxicity cropping up deep in the late-phase work. BMS (and their late-arriving partner Merck) managed to get past those rapids and through clinical trials. But their drug shows a side effect that all PPAR-gamma drug programs have had to worry about, namely edema.
They also seem to have some (perhaps related) worries about cardiovascular events, which are broken out into completely separate categories in the FDA briefing document (big PDF). That document, whopper though it is, is worth a look if you want to see what it's like to decide whether to approve a new drug or not. I wouldn't like to have to explain it all to a lay jury, that's for sure. No doubt a few whoops and hollers, along with the occasional choked tearful expression, would help.
By my reading, the cardiovascular event profile of the drug subjects looks slightly but noticeably worse than that of the placebo group. There are plenty of possible extenuating factors, and the number of patients involved is small, but I think that this is going to be a problem for the companies during the FDA hearing. Here's the list of questions the FDA has proposed for discussion (PDF again), and you can see that edema and cardiovascular safety loom large. I can't predict which way this one is going to go, and neither can anyone else. But post-COX-2 is a bad time to be coming to the FDA with possible low-level cardiac risks in your clinical data. . .
By the way, with thousands of people involved in the clinical studies, there are bound to be some. . .unplanned adverse events. I quote without comment from the briefing document linked to above, just in case you thought (for some odd reason) that running clinical trials was easy. . .
"Subject CV168021-29-21 was a 44-year old white maile with a 3-year history of diabetes and history of overweight, hypercholesterolemia and impotence. On study day 29 the subject died as the result of a gun shot wound.
Subject CV-168006-5-3 was a 62-year old white female with a history of hypertension, smoking, and alcohol use. On study day 112 she died in a motor vehicle accident. Her car was stopped at a light when struck by a truck. The investigator considered the event not likely related to study drug."
Yes, one would, on the whole, conclude that it wasn't . . .


I'll tell you a company that's been watching what's happened to Merck and thinking hard about it: Sanofi. Well, OK, everyone in the industry has been looking at Merck's situation and shuddering, but I suspect the people at Sanofi(-Aventis) are especially jumpy. Why? Rimonabant.
Rimonabant, which will come to the market next year (most likely) under the name Acomplia, is on everyone's short list of potential multibillion-dollar drugs. It'll be the first new drug treatment for obesity in years, and it's the first one ever with its mechanism of action (antagonism of the CB(1) receptor). It has potential for many sorts of addiction therapy as well. Although there's room to argue about just how effective it is compared to existing therapies, and there's some concern about how many HMOs will pay for it, there's little doubt that it's going to sell like crazy.
And there's the worry. There is absolutely no way that large enough clinical trials could be run on a drug like this to predict everything that might happen when millions of people start taking it. Can't be done. You can get down to a margin of safety that will get you past the FDA, but that isn't enough, now is it? No, if one person out of a hundred thousand has a nasty side effect, that's enough to bring the sky down on your head. And we can't test down to the level of one-per-hundred-thousand effects.
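Just to put rough numbers on that argument (a back-of-the-envelope sketch of my own, with invented incidence figures and trial sizes, not anything from an actual trial design): the chance of even seeing a single case of a rare side effect in a trial of realistic size is depressingly small.

```python
# Illustrative only: how likely is a trial to catch a rare adverse event?
# The incidence and trial sizes below are round numbers picked for the example.

def chance_of_seeing_event(incidence, n_patients):
    """Probability that at least one patient in a trial of n_patients
    shows an event that occurs with the given per-patient incidence."""
    return 1 - (1 - incidence) ** n_patients

incidence = 1e-5  # a one-in-a-hundred-thousand side effect
for n in (3_000, 10_000, 300_000):
    p = chance_of_seeing_event(incidence, n)
    print(f"{n:>7,} patients: {p:.1%} chance of seeing even one case")

# Roughly 3% at 3,000 patients, about 10% at 10,000, and you need something
# like 300,000 patients before you're 95% likely to observe even a single
# case -- never mind attributing it to the drug.
```

That's the arithmetic behind "can't be done."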
A fine situation, isn't it? This same argument applies to every new drug, naturally, but especially to a groundbreaking compound like rimonabant. That's just what we needed, an incentive not to be first in class with a new drug. What, exactly, are we doing to ourselves?


Gruntdoc wonders why a particular combination therapy isn't available yet. Skin infections with methicillin-resistant Staphylococcus aureus (MRSA), which I hope I never come any closer to experiencing, are treated with one of several antibiotic combinations, but they're all administered as separate drugs.
The answer is what you might suspect: the FDA would want clinical trials of the single-dose combination, just to make sure that things work the way that they're supposed to. Any company developing the combo would have to recoup those costs, not to mention the costs of then beating the drum for the idea that the new combination is a better idea. But the antibiotics in question are generics, which means that there could be some real cost-containment issues over the use of a more expensive combination.
But we have a rather close example at hand: the recently approved BiDil. (Here's the package insert, in PDF format.) That's a combination of two generics, too, which (famously) showed far better effects in the black population than it did in general clinical trials. Nitromed, the developer of the therapy, had to run some pretty reasonable-sized trials, and they spent a lot of money in the process.
They started by establishing that the blood levels of the two drugs were reasonable when given in combination, and went on to a group of 186 male patients. That trial (with 273 in the placebo group) didn't show a benefit, but hinted at one in the black subjects. The company also ran an 804-patient trial against enalapril, and saw the same trend, which led to the definitive 18-month trial in 518 black patients (with a roughly equal number in the placebo arm.) Keep in mind, this is all for two drugs whose individual efficacy was well-studied.
Note added after original post: Nitromed was after something more than the individual efficacy of each drug. Their hypothesis was that the combination would make the blood-pressure-lowering effect much more pronounced, and that this would translate into clinical benefit as seen in eventual mortality. Why this only seems to be the case in the black population is a head-scratcher. The situation for combination antibiotics would be simpler. So. . .
A combination antibiotic trial wouldn't be as long, or as expensive. But it wouldn't be negligible, either, and it's likely that some companies have run the numbers and decided that the investment would be unlikely to pay off.


Speaking of cancer trials, I mentioned the other day how they tend to be smaller than those for many other diseases. But that doesn't mean that they're always easy to run, as a search for "clinical trial design oncology" will show. Note the number of people offering to help you out, via seminars, consulting visits, books, and entire journals devoted to the topic.
The problems start early. Patient recruitment is a big problem for many of the less common types of cancer, and it's getting to be a problem for the better-known ones, too. If you look at all the therapies that are being aimed at breast cancer, for example, and run the numbers, it looks like there aren't enough breast cancer patients in the US to fill out all the trials that would be needed. Cost is, of course, a big reason why a lot of clinical trial work is being done overseas these days, but access to a new pool of patients is a factor, too.
Which brings up another complication - do you want patients who've tried other drugs? That depends on where you're targeting your therapy. If you hope for it to be a first-line drug, you probably want patients who are newly diagnosed. There's a steady supply of those, but not everyone who's newly diagnosed is going to be willing to participate in a clinical trial, not when there might be more proven treatments available. The worst case is when you're looking for drug-naïve patients with advanced types of cancer. That's feasible (in theory) for some of the ones that creep up on you (like colorectal cancer), but next to impossible for some others.
But if your drug is going to be a second-line therapy, then you should go ahead and see how it performs in patients who've already been through the first-line stuff. There is, unfortunately, a steady supply of those people, too, and they're often more willing to take a chance.
Your clinical trial design will also be influenced by the kind of cancer you're hoping to treat. If you're looking at a very specific type or two, as is the case with Novartis's Gleevec, you may have to cast the net pretty widely to round up enough people. (We'll ignore the fact, for now, that Gleevec sells a billion dollars a year, which means that a lot of people are getting it when it has very little chance of doing anything for them.) If you have a new mechanism that hits all kinds of cancer cells, then you may want to dip into all sorts of different patient populations to see if one of them looks like a good place to take your stand in later Phase II and III trials. The danger in doing that is that your patients may be such a mixed bag that you can't get good statistics on anything.
Ah, statistics. You'll have noticed that I'm referring to cancer patients as if they were so many terms in an equation, which from the standpoint of drug development is exactly what they are. That comes across, to those outside the medical and scientific areas, as a pretty cold way to talk. Guilty as charged - but keep this in mind: people who work for drug companies get cancer, too, as do our friends and relatives. And we're just as upset as anyone else when that happens. But without the icy numbers, and lots of them, we're not going to be able to do anything to help.


OK, I couldn't resist. Let me reiterate that I completely admire the NIH's commitment to basic research; it's one of the real drivers of science in this country. But they're not a huge factor in clinical trials. Academia does more basic research than pharma; pharma does more clinical work than academia. Here are some statistics from a reader e-mail:
"As a person who was an NIH staffer (funding clinical trials, no less) and is now on the pharma side (mostly spending on manufacturing development; we will spend more on clinical trials as we get bigger), I have seen both sides.
Most of NIH spending is very far from clinical utility. Last time I checked (and it has been a while), more than 90% of NIH funds went to what most people would consider non-clinical research, e.g., studies of animals and cells, etc. (If the NIH was named by its major function, it would probably be called the National Institutes of Molecular Biology ;-) The reason NIH is able to claim that half of its money goes to 'clinical research' is that any study that involves a human or *human tissues* counts. So a bench study looking at receptors on human renal cells counts as 'clinical research.' The number of studies examining 'whole' humans is in the 5% range.
On the other hand, pharma, as you know, spends a lot of money on research with legal (protecting patent claims), manufacturing (cGMP issues, etc.) and marketing goals that don't necessarily help anyone's health.
Regarding the clinicaltrials.gov numbers, by my reckoning the 8000 NIH studies and the 2400 'industry' studies probably represent about the same investment in *therapeutic* clinical trials. If you break down the NIH trials, about 1800 (22%) are Phase I, 3000 (37%) are Phase II, 1100 (14%) are Phase III, and the rest (2150, 27%) are observational and other. (If you want to check, I did a search within the results for the appropriate phrases and subtracted from the total for the remainder). Figures for industry are 460 (19%) Phase I, 1060 (44%) Phase II, 770 (32%) Phase III, and 133 (5%) other.
In my experience each phase of clinical trials multiplies costs by about 10 times (e.g., Phase I = X; Phase II = 10X, Phase III = 100X), so the clinicaltrials.gov figures imply that the costs of Phase I, II, and III trials funded by industry are over 80% of those funded by NIH (costs are overwhelmingly driven by Phase III trials). And this is despite the close to 100% capture of NIH trials versus the unknown percentage capture of industry trials that you noted in your post."


OK, one more on this topic before moving on to other things for a while. The Bedside Matters medblog has a better roundup of the reactions to my post than I could have done myself. And "Encephalon" there also has one of the longer replies I've seen to my initial post, worth reading in full.
I wanted to address a few of the issues that it raises. Encephalon says:
"Dr. Lowe makes his point with the sort of persuasive skill one suspects is borne of practice - I shouldn't be surprised if he has had to make his case to the unbelieving on a very regular basis. And that case is this: that pharmaceutical companies do in fact spend enormous sums of money in developing the basic science breakthroughs first made in academic labs to the point where meaningful therapeutic products (ie, '$800 mil' pills) can be held in the palms of our doctors' hands, ready to be dispensed to the next ailing patient.
So far as that claim goes, I don't think any reasonably informed individual would dispute it. . ."
It tickles me to be called "Doctor" by someone with a medical degree. On the flip side, though, it's a nearly infallible sign of personality problems when a PhD insists on the honorific. And I appreciate the compliment, but it's only fairly recently that I've had to defend this point at all; I didn't even know it was a matter of debate. The thing is, you'd expect that a former editor of the New England Journal of Medicine would be a "reasonably informed individual", wouldn't you? I don't think we can take anything for granted here. . .
He then spends a lot of time on the next point:
"It is a myth, and I would argue a more prevalent one than the myth that Big Pharma simply leaches off government-funded research, that the NIH does little to bring scientific breakthroughs to the bedside (once they have made them at the bench). . .Using arguably one of the best (databases) we've got (the NIH's ClinicalTrials.gov**) we get the following figures: of the 15,466 trials currently in the database, 8008 are registered as sponsored by NIH, 380 by 'other federal agency', 4656 by 'University/Organization', and 2422 by Industry. While I am suspicious that the designation 'university/organization' is not wholly accurate, and may represent funding from diverse sources, and while the clinical trials in the registry are by no stretch of the imagination only pharmaceutical studies, the 8388 recent trials sponsored by Federal agencies are no negligeable matter. I think Dr. Lowe will agree.""
I agree that NIH has a real role in clinical trials, but I don't think it's as large as these figures would make you think. Clinicaltrials.gov, since it's an NIH initiative, is sure to include everything with NIH funding, but there are many industry studies that have never shown up there. (And I share the scepticism about the "University" designation.) When the Grand Clinical Trial Registry finally gets going, in whatever form it takes, we can get a better idea of what's going on. I also think that if we could somehow compare the size and expense of these various trials, the Pharma share would loom larger than the absolute number of trials would indicate.
Encephalon goes on to worry that I'm denigrating basic research: "The impression a lay person would get reading Dr. Lowe's 'How it really works' is that basic science work done by the NIH is really quite trivial. I don't think he meant this. . ."
Believe me, I certainly didn't. Without basic biological studies, there would be nothing for us to get our teeth into in the drug industry. If we had to do them all ourselves, the cost of the drugs we make would be vastly greater than it is now. It's like the joking arguments that chemists and pharmacologists have in industry: "Hey, you guys wouldn't have anything to work on if it weren't for us chemists!" "Well, you'd never know if anything worked if it weren't for us, y'know!" Academia and industry are like that: we need each other.


So is this the attitude we're up against? Here's a thread on Slashdot on the clinical trial disclosure issue - titled, I note in light of yesterday's post, "Medical Journals Fight Burying of Inconvenient Research". My favorite verb again! The comments range from the insightful to the insipid (for another good reaction to the clinical trial controversy, go here.)
A comment to the original Slashdot item disparages the idea that NIH is the immediate source of all drugs, and recommends reading my site, both of which actions I appreciate. But the first response to that was:
"No, (NIH-funded labs) just do the basic research that results in the drug leads. The companies then do the expensive but scientifically easy trials and rake in all the money (and now it seems, the credit as well)."
Wrong as can be, and in several directions at once. In a comment below, blogger Sebastian Holsclaw urges that we take this kind of talk seriously because it's more widespread than we think. I'm afraid that he might be right. The problem is that many people don't seem to understand what it is that people like me do for a living. I think that there must be plenty who don't even grasp how science works in general. Allow me to go on for a while to explain the process - I'd appreciate any help readers can provide in herding the sceptics over to read it.
Try this: If Lab C discovers that the DooDah kinase (a name whose actual use I expect any day now) is important in the cell cycle, and Lab D then profiles its over-expression in various cancer cell lines, you can expect that drug companies will take a look at it as a target. Now, the first thing we'll do is try to replicate some of the data to see if we believe it. I hope that I'm not going to shock anyone by noting that not all of these literature reports pan out.
But let's assume that they do this time, making DooDah a possible cancer target. What then? If we decide that the heavy lifting has been done by the NIH-funded labs C and D, then what do we have so far? We have a couple of papers in the Journal of Biological Chemistry (or, if the authors are really lucky, Cell) that, put together, say that DooDah kinase is a possible cancer target. How many terminally ill patients will be helped by this, would you say? Perhaps they can read about these interesting in vitro results on their deathbeds?
What will happen from this point? Labs C or D may go on to try to see what else the kinase interacts with and how it might be regulated. What they will not do is try to provide a drug lead, by which I mean a lead compound, a chemical starting point for something that might one day be a drug. That's not the business these labs are in. They're not equipped to do it and they don't know how.
(Note added after original post): This is where the drug industry comes in. We will try to find such a lead and see if we can turn it into a drug. If you believe that all of what follows still belongs to the NIH because they funded the original work on the kinase, then ask yourself this: who funded the work that led to the tools that Labs C and D used? What about Lab B, who refined the way to look at the tumor cell lines for kinase activity and expression? Or Lab A, the folks that discovered DooDah kinase in the first place twenty-five years ago, but didn't know what it could possibly be doing? These things end up scattered across countries and companies. And all of these built on still earlier work, as all the work that comes after what I describe will build on it in turn. That's science, and it's all connected.
Here in a drug company, we will express the kinase protein - and likely as not we'll have to figure out on our own how to produce active enzyme in a reasonably pure form - and we'll screen it against millions of our own compounds in our files. We'll develop the assay for doing that, and as you can imagine, it's usually quite different than what you'd do by hand on the benchtop. Then we'll evaluate the chemical structures that seemed to inhibit the kinase and see what we can make of them.
Sometimes nothing hits. Sometimes a host of unrelated garbage hits. For kinases, these days, neither is usually the case - owing to medicinal chemistry breakthroughs achieved by various drug companies, let me add. So if we get some usable chemical matter, then I and my fellow med-chemists take over, modifying the initial lead to make it more potent, to increase its blood levels and plasma half-life when dosed in animal models, to optimize its clearance (metabolism by the liver, etc.), and to make it selective for only the target (or targets) we want it to hit. Often there are toxic effects for reasons we don't understand, so we have to feel our way out of those with new structures, while preserving all the other good qualities. It would help a great deal if the compounds exist in a form that's suitable for making into a tablet, and if they're stable to heat, air, and light. They need to be something that can be produced by the ton, if need be. And at the same time, these all have to be structures that no one else has ever described in the history of organic chemistry. To put it very delicately, not all of these goals are necessarily compatible.
I would love to be told how any of this comes from the NIH.
Now the real work begins. If we manage to produce a compound that does everything we want, which is something we can only be sure of after trying it in every model of the disease that we trust, then we put it into two-week toxicity testing in animals. Then we test in more (and larger) animals. Then we dose them for about three months. Large whopping batches of the compound have to be prepared for all this, and every one of them has to be exactly the same, which is no small feat. If we still haven't found toxicity problems, which is a decision based on gross observations, blood chemistry, and careful microscopic examination of every tissue we can think of, then the compound gets considered for human trials. We're a year or two past the time we've picked the compound by now, depending on how difficult the synthesis was and how tricky the animal work turned out to be. No sign of the NIH.
The regulatory filing for an Investigational New Drug needs to be seen to be appreciated. It's nothing compared to the final filing (NDA) for approval to market (we're still years and years away from that at this point), but it's substantial. The clinical trials start, cautiously, in normal volunteers at low doses, just to see if the blood levels of the compound are what we think, and to make sure that there's no crazy effect that only shows up in humans. Then we move up in dose, bit by bit, hoping that nothing really bad happens. If we make it through that, then it's time to spend some real time and money in Phase II.
Sick patients now take the drug, in small groups at first, then larger ones. Designing a study like this is not easy, because you want to be damn sure that you're going to be able to answer the question you set out to. (And you'd better be asking the right question, too!) Rounding up the patients isn't trivial, either - at the moment, for example, there are not enough breast cancer patients in the entire country to fill out all the clinical trials for the cancer drugs in development to treat it. Phase II goes on for years.
If we make it through that, then we go on to Phase III: much, much larger trials under much more real-world conditions (different kinds of patients who may be undergoing other therapy, etc.) The amount of money spent here outclasses everything that came before. You can lose a few years here and never feel them go by - the money that you're spending, though, you can feel. And then, finally, there's regulatory approval and its truckload of paperwork and months/years of further wrangling and waiting. The NIH does not assist us here, either.
None of this is the province of academic labs. None of it is easy, none of it is obvious, none of it is trivial, and not one bit of it comes cheap. We're spending our own money on the whole thing, betting that we can make it through. And if the idea doesn't work? If the drug dies in Phase II, or, God help us all, in Phase III? What do we do? We eat the expense, is what we do. That's our cost of doing business. We do not bill the NIH for our time.
And then we go do it again.


I haven't been covering all the twists of the clinical-trial-disclosure story, because there have been so many of them. The drug industry is proposing its own plan, various companies are jumping out with theirs, the big medical journals have another one, and it won't be long before Congress sticks its oar in, too. Clearly there's still some wrangling to come - but equally clearly, we're going to get some sort of meaningful clinical trial data repository.
And as I've blogged here, I don't necessarily have a problem with that, although some of the details concern me. My problem, speaking as someone who pays his mortgage with ill-gotten loot from the rapacious drug industry, is with how we've handled the whole thing: poorly.
The verb that almost every story has used is "bury." The drug makers will no longer be able to bury their failed trials, the buried data will now have to be made public, and so on and so on. That's right, we take the data and stick it in a hollow tree stump. You would never know that every clinical trial in the US has to be registered with the FDA (or the equivalent authority in the case of offshore studies.) And you'd never guess that if we want the FDA to act, we have to submit all our clinical data, bad and good.
(Now, a situation where we could indeed use more transparency is when a trial is run, but the company decides that the results weren't good enough to support some new FDA action (a labeling extension, most of the time.) Then the results don't see the light of day, although I think that they should. But even then, the FDA knows that a trial was run.)
Where has my industry been while we've been pummeled in the press? Issuing press releases that nobody believes or even reads? Our industry organization's home page is a sinkhole of grinning publicity head shots and soft-focus stock pictures of cute babies. Find someone who can stand to look at it for two minutes, and I'll show you someone with a stronger stomach than I have. Why isn't our side of the story getting out?


As came up in the comments to the previous post, there's not as much price competition inside a given drug category as you'd think. That's not because we're Evil Price Gougers, at least not necessarily. As I pointed out yesterday, "me-too" type drugs aren't as equivalent as some people think. The main reason we go ahead with a drug in a category where there's already competition is that we think we have some advantage that we can use to gain market share.
This is a constant worry in every drug development effort where there's already a compound out there. I've personally, many times, been in drug project meetings where we've looked at the best competing compound (one that's either already marketed or well into clinical trials) and said "We haven't beaten them yet. We're not going to make it without some kind of unique selling point." The best of those, naturally, would be superior efficacy or a superior safety profile. Then you have easier dosing, fewer interactions with other drugs, and so on. I need to emphasize this: I have seen drug projects killed because no case for an advantage could be made.
Now, there's room to argue about how much better a drug's efficacy needs to be for it to count as a real advance in the field, or at least a bigger seller. You can argue about any of those possible advantages I listed, and it's true that drug companies push some compounds that aren't exactly huge leaps over the previous state of the art. (You see more of that when there's a case of shriveled pipeline in progress.) But there has to be something, and the bigger the difference, the better it is for us. We're motivated, by market forces, to come up with the biggest advances we can. The sales force would much, much rather be out there with data to show that the new drug beats the competition in a clean fight, as opposed to saying that it beats the old one on points, in a subset of patients, if you massage the data enough and squint hard, and besides it tastes better, too. . .
And as I've pointed out before, we often find out things about compounds long after they've reached the market. Lipitor, as discussed yesterday, is a case in point. I have not been a Lipitor fan in the past. The statin field seemed already pretty well served to me (as it did to a number of people inside Warner-Lambert during the drug's development, frankly.) The drug made its way forward based on efficacy in the clinic: it seemed to do a better job lowering cholesterol and improving the LDL/HDL ratio. How much advantage that is in the long term is another question, but those are the best markers we have.
The whole anti-inflammatory C-reactive protein story about the drug only came up after it was already on the market. The marked differences between it and the other statins, which I have to assume at this point are real, are a pleasant surprise to everyone involved. Warner-Lambert (and then Pfizer) thought it was a better compound, but not to this degree or for these reasons, I'll bet. I'd say that this is another argument for having multiple drugs in the same category. We don't, and can't, know everything that they'll do.


Allow me to get a little defensive. If I understand some of the critics of my industry, we spend most of our time making "me-too" ripoff drugs rather than doing something that provides any clinical benefit to patients. And, if I have this right, here's how we determine efficacy: we run clinical studies until we get the answer we want, and then we bury all the other ones. (Mind you, we bury the data by giving it to the FDA, but stay with me here.)
OK, now let's try to explain this. Merck has just released a study on its statin drug, Zocor. Following in the footsteps of two other studies with Pfizer's statin, the market-leading Lipitor, Merck dosed patients who had just suffered heart attacks. Lipitor treatment seemed to show a real benefit in these situations, lowering the rate of later cardiovascular trouble, and Merck was hoping for (and no doubt expecting) the same thing.
But they were rudely surprised. At the lower doses of Zocor, they failed to show any benefit at all. And at the highest dose, while they managed to show a lower rate of second heart attacks, they still didn't reach significance versus the placebo group. Worst of all, several of the high-dose patients showed the muscle-weakening condition rhabdomyolysis. That's the bane of statin drugs, and the reason why Bayer pulled their compound (Baycol) from the market. (Just to complicate things, one of Merck's placebo patients showed rhabdomyolysis, too, which is food for thought and should give you an idea of how much fun it is to interpret clinical trial data.)
So what's going on here? Zocor and Lipitor both work by inhibiting HMG-CoA reductase. They hit the same mechanism. Were the patients different? The study's authors say it's possible. The patients in the Lipitor studies seem to have been receiving more aggressive therapy in addition to the drug. Are the drugs different? That's possible, too. Lipitor, as it turns out, seems to lower the inflammation marker C-reactive protein much more than Zocor, and that could potentially make a difference.
But if the drugs are really different, what happens to the idea that Lipitor is just a me-too, yet another statin piling on the profits? If we in the industry hadn't kept banging away at these drugs, we wouldn't have ever known that better ones could be found. Would we? As I've pointed out in the past, if you're going to market a drug in a category where the competition is ahead of you, you'd better have some improvement to point at or set about finding one. Lipitor came into the market under the banner of "lower dose / higher efficacy", and it may be picking up more advantages as time goes on.
Now, if we believe that the drugs aren't different, which will be an interesting thing to try to prove at this point, then we have to figure out how much weight to put on this study. How does it go into the Great Clinical Trial Repository? With an asterisk? Then shouldn't the earlier two studies with Lipitor have one, too? This is the same situation I spoke of before.
And what about this clinical trial data in general? Isn't this the sort of bad news that we're supposed to be sweeping under the rug over here? A full article in JAMA complete with vigorous editorial commentary. . .some rug. Oh, and one other thing: those two earlier Lipitor studies that showed a benefit. One of them was from Pfizer(/Pharmacia), as you'd expect. But the other one was from their competition. Bristol-Myers Squibb has been trying to prove that their statin, Pravachol, is better than Lipitor, and failing. Where's that damn rug when you need it?


The New York Times has a good article this week on a trend in clinical trials that's been developing for several years - small intensive trials in humans, run before giving the go-ahead for the real thing.
It makes a lot of sense, but only when you can use it to ask (and answer) the right questions. That's where technologies like functional NMR imaging or PET scans come in, because they allow you access to in vivo data that's otherwise unobtainable. Take, for example, the studies mentioned in the Times article, where they look at glucose uptake in a solid tumor. That's a reasonable proxy for its metabolic activity, as you'd guess, and it'll give you a quick read on whether your targeted cytotoxic compound is having the effect you want.
What you'd do, normally, is dose the compound for days or weeks, then use NMR or another imaging technique to see if the tumor has changed size. That's clearly a more convincing answer, but it takes a more convincing amount of time and money to get it. And if your compound isn't having an effect on a fast marker like the tumor's metabolic rate, it's probably not going to have any effect after you dose it for two months, either. You're better off trying something else.
But if your new cancer therapy is, say, a compound that interferes with cell division, then you're not going to have that clear an answer through that glucose uptake technique. Same problem if the cancer you're treating is a more diffuse one like leukemia, because there's not such a clear tissue to image. (There are other approaches to each of those problems, naturally, but I just wanted to emphasize that each clinical trial is its own set of new problems, even inside the same general therapeutic area.)
And even when you get to the traditional large-scale trials, there's a huge need for surrogate markers that can show progress against slow-moving diseases. Glycosylated hemoglobin as a measure of efficacy in diabetes is a good validated example. It still takes quite a while to establish (weeks or months of dosing), but that's like lightning compared to the progress of diabetes complications themselves. You can do a quick assay in this field - the oral glucose tolerance test - but the improvement in that assay isn't so quick to come on.
The CNS diseases are a real clinical challenge, which is why their trials are so brutally expensive. There are hardly any markers at all for most of them. Everyone would love to have a short-term noninvasive readout for Alzheimer's, but despite years of effort, no one has quite made it. (And that's even though the definition of "short-term" in Alzheimer's is rather permissive.) Similarly, it would be good to be able to get a faster readout on depression, whose therapies are notorious slow starters.
There's a bigger problem, though, looming over some of the generally accepted markers - what effect do they really have on long-term mortality and morbidity? Glycosylated hemoglobin has been pretty well correlated in diabetes over the long term, so that one's pretty safe. But the question is worth asking, for example, about HDL and LDL levels. Yes, things do line up well, up to a point. But does long-term administration of statin drugs, say, help as much as we'd like to hope it does over, say, twenty years? The jury's still out on that one.


The placebo effect is a real problem in some clinical trials. It varies, but in things like antidepressants it's a major factor (while with, say, pancreatic cancer it doesn't change the results too much.) In a given sample of depressed patients, there are a fair number of people (20 or 30 percent) who will respond if you give them 50 milligrams of confectioner's sugar that they truly believe to be an efficacious drug.
Of course, the majority will respond as if you'd given them, well, confectioner's sugar, but that group of placebo responders will blow your statistical workup to pieces. This is one of the reasons you see multiple trials for antidepressants: the trials themselves often just produce noisy data. Of course, one way to interpret this is that the antidepressants themselves are fairly worthless. That's a tempting conclusion, and for some people, they clearly don't do much good. But you can find others who truly appear to have been helped. Depressed patients, even ones who may look and act similarly, are clearly a heterogeneous population.
What if those strong placebo-responders could be weeded out of the patient population before you even started the clinical trial? This question is a good test of a person's attitude toward the drug industry. Many folks will hear that idea and cry "Fraud! Stacking the deck!" But think about it. If you could find the people who will improve when given a sugar pill, then you could pull them aside and just go ahead and give 'em the sugar pill. Hey, it's effective therapy, and that's what counts, right? And they'll miss out on the side effects of the antidepressant drugs themselves, and every drug has side effects at some level - every single one.
Meanwhile, once those folks have been sorted out, you're left with a cohort of patients who need all the help they can get, and now you're in a statistical position to see if you can really provide any. As far as I can see, everyone comes out ahead.
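For the statistically inclined, here's a toy Monte Carlo sketch of that point. The response rates and trial sizes are invented for illustration, not taken from any real antidepressant study, but they show how a block of placebo responders drags down your power to detect a genuinely useful drug:

```python
# Toy simulation: binary "responded / didn't respond" outcomes, with a slice of
# the population that improves no matter what they're given. All numbers invented.
import math
import random

def simulate_trial(n_per_arm, placebo_responder_rate, true_drug_response):
    """One simulated two-arm trial; returns the response rate in each arm."""
    def outcome(on_drug):
        if random.random() < placebo_responder_rate:
            return 1  # improves on sugar pill and drug alike
        if on_drug and random.random() < true_drug_response:
            return 1  # genuine drug response
        return 0
    drug = sum(outcome(True) for _ in range(n_per_arm)) / n_per_arm
    placebo = sum(outcome(False) for _ in range(n_per_arm)) / n_per_arm
    return drug, placebo

def significant(p1, p2, n, z_crit=1.96):
    """Crude two-proportion z-test at roughly the 5% level."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    return se > 0 and abs(p1 - p2) / se > z_crit

def power(placebo_responder_rate, n_per_arm=50, n_sims=2000):
    """Fraction of simulated trials that reach significance."""
    hits = sum(
        significant(*simulate_trial(n_per_arm, placebo_responder_rate, 0.20), n_per_arm)
        for _ in range(n_sims)
    )
    return hits / n_sims

random.seed(1)
print("30% placebo responders in the pool:", power(0.30))
print("placebo responders screened out:   ", power(0.00))

# With these made-up numbers, the screened cohort detects the drug almost every
# time, while the mixed cohort does so only a minority of the time -- same drug,
# same true effect, much noisier readout.
```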
It turns out that there may be ways to see who's a strong placebo-effect candidate and who isn't. There have been several studies in the last few years that show some real correlations in brain activity during placebo situations, and this has led to the idea of a test for it.
If this goes on, though, there could be some interesting developments. What if everyone becomes aware of the test to see if you're going to get a placebo? Will the responders still respond if they think there's a reasonable chance that they didn't get a "real" drug? I think that what we'll need to do is present the test as a standard procedure, to help figure out which therapy would work the best - not a method to see if you get a drug or not, but a method to see which drug you should get. That should keep things working.


So it turns out that the major medical journals have their own plan for bringing on a clinical trial database: they're going to require companies to register trials before they'll allow publication of their results. I was taken aback at not having heard anything about this idea, until I saw that no one else in the drug industry seems to have, either.
I don't really have a problem with this at all. For one thing, it's better than having the state sue you into doing something - this is a good old free-market fight. Most of the major medical journals need revenue from pharmaceutical advertising, and the companies need the prestige of publishing in them. Come, then, let us reason together.
And the first step here, merely registering the fact of a trial, will sidestep some of the issues I brought up the other day with how to report the final data. I know that there will be pressure to include that data as well, and if we can find a way to deal with those reporting issues, we should. But even a registry of trials would show that something had been tried, naturally leading to questions about how things came out. (That's important for the medical editors' side of this dispute, because the studies that companies don't want to talk about aren't going to be submitted for publication, anyway - the journals have no other leverage at that point.)
Now, one way around this would be for companies to forsake publication in the journals involved (a tough thing to do, mind you) and just present the data with a big splash at a prestigious meeting or two. If you see more professional societies joining this trial-registry movement, especially ones that don't publish their own journals but still sponsor large meetings, then I think the outcome will have become clear.
I think, though, that people have some odd ideas about how clinical trials work and how many of them there are. Consider columnist Michelle Malkin, who wrote about this story today:
From Statistics 101 we know that if a product is as effective as a placebo, 1 in 20 trials will produce a statistically significant finding due to random chance. Since companies run dozens of trials on each major compound, it is not too hard to produce at least one positive, statistically significant finding suitable for publication. The rest are buried in the "circular file." This is great marketing but it is not science.
Um, we don't actually run "dozens" of trials on every major compound. We don't have enough money to do that, as hard as that may be to believe, and in many cases there just aren't enough patients to go around. So we just don't get to play with the statistics in this way. It would be irresponsible, she's right about that, but we don't do it.
And that argument would only hold if all 20 trials were run the exact same way (Statistics 101, you know.) Twenty different trials, each run a different way on different patient groups, can produce results all over the map. Trying to do metastatistics over the whole group is not a job you want; it's often not even possible. And besides, even if they were all the same, the level of statistical significance that Malkin's talking about (1 in 20 by random chance) isn't very high at all. A clinical trial has to be a lot more significant than that to convince anyone at either the FDA or the company itself.
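Just to spell out the arithmetic (textbook false-positive math, not a claim about anyone's actual trial portfolio): the 1-in-20 figure only compounds the way Malkin implies if the trials are independent and identically designed, and even then a single bare p < 0.05 result is weak evidence.

```python
# If a useless drug were run through N independent, identically designed trials,
# each read out at the p < 0.05 level, the chance of at least one fluke "positive":
for n_trials in (1, 5, 12, 24):
    p_fluke = 1 - 0.95 ** n_trials
    print(f"{n_trials:>2} identical trials: {p_fluke:.0%} chance of a false positive somewhere")

# About 5%, 23%, 46%, and 71% respectively -- which is exactly why regulators
# (and the companies themselves) want replication and much stronger evidence
# than a single p = 0.049, and why trials with different designs and patient
# populations can't just be pooled into this kind of calculation.
```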


There have been plans, over the years, for some sort of data repository for clinical trials. Nothing's ever worked out. The only place that all of this is collected is at the FDA, and they only have the ones that companies have submitted because they were requesting a new approval or a new indication. If companies run studies but then give up on the regulatory filing, the data need never see the outside world at all.
That's the heart of the New York - GSK suit, as I was discussing yesterday (although, as I pointed out, in this case the data were made public, although nowhere near to the extent that the more positive study was). Presumably, the ideal that Eliot Spitzer seeks would be a central database of all clinical studies conducted on marketed drugs - along with, it seems, a requirement to go into the results of all of them in marketing presentations. (Actually, I think the ideal that Eliot Spitzer seeks is a world in which he is a senator from or the governor of New York, but that's another story. . .)
This sounds like a reasonably clear mandate, but in practice it's quite tricky. It's worth thinking about what a clinical data repository would look like. You'd have to include the statistical workup from the end of the trial, that's for sure. The raw data makes for quite a heap, and extracting the useful conclusions from it is not the work of a moment. You have to be well informed about how and why the trial was designed to even know where to start, and you have to be well informed about statistics to know when to stop.
Even with all the conclusions attached, an open raw-data repository would be a real invitation to cranks of all kinds to go in and massage the data. I've spoken about this issue before, because companies themselves can be guilty of trying to extract more conclusions than the data will support. Imagine the ax-grinding subgroup analysis and selective data mining that would go on - for one thing, the trial lawyers would be adding statisticians to their staffs to do nothing but comb through the numbers all day, looking for tort-worthy tangles.
Even if you just have the worked-up data in the repository, you still face the problem of data overload. Heavily studied drugs can have a long list of differently designed trials attached to them, all of which are either asking different questions or asking the same one in different ways. Digging through them is not something you can do on your lunch break.
An even tougher problem is what to do about poorly designed or poorly executed studies. That seems to be the case with the Paxil 377 data I spoke about yesterday, which is why one of the study's co-authors wanted to publicize it in the first place. Who gets to decide if a particular study is valid? Whose comments and conclusions will be attached to the results? Who gets to weight them against the other results collected on the same drug?
These are the sorts of issues that are wrangled about in the regulatory approval process, and the disagreements can be heated, even in a roomful of people who all know what they're doing. How many physicians would be willing to consult a Central Clinical Trial Database and do the wrestling themselves? How many would even have the time? For the most part, practitioners default to trusting the FDA, since the agency has already analyzed the data.
As for what companies can say to doctors, limits in this area have banged right into free-speech considerations in the courts. Attorney General Spitzer's on-message response to this is that you can't use a First Amendment argument to justify fraud, and I'll let that one go by without swinging at it. But what would he have disclosure look like? Should it be verbal (and in that case, how would it be enforced?) Should it be a written handout on the total clinical data generated for a new drug? That makes more sense, but then we get back to the question of how summarized the results should be, and who gets to write the summaries.
The thing is, I think that a clinical data repository would be useful. I know that I'd like to go data-mining through previous studies, looking for things that are relevant to my current projects. And I'd like to see what happened in failed trials so we can be sure not to run ours in the same fashion (which was Dr. Milin's point about the 377 Paxil study). It could be worth trying, but I worry that it might require the world to be a little better than it really is to work. We'll see.


New York Attorney General Eliot Spitzer has found what must look like another target-rich environment: the pharmaceutical industry. As many readers will have seen, he's initiated a lawsuit against GlaxoSmithKline for their handling of clinical trial data for the antidepressant Paxil (paroxetine). As far as anyone can tell, this suit is the first of its kind.
There's a specific side to this story, and there's a general one about the handling of all clinical trial data. I think I'm going to end up splitting the difference, but first things first: in this case, SmithKline (as it was at the time) ran different studies on the effectiveness of Paxil in adolescent patients. One study (#329) had positive results, and another (#377, slightly later) showed no benefit versus placebo. Spitzer points out that the successful first study was widely publicized, presented at several scientific meetings, and eventually published. SmithKline (and later GSK) made it part of their sales pitch to physicians.
Meanwhile, the 377 study was presented once, at the annual meeting of the American Academy of Child and Adolescent Psychiatry, and never showed up as a full paper in the literature. The presentation wasn't SmithKline's idea; they weren't going to publish or present at all. It was suggested by two of their academic collaborators (Robert Milin and Jovan Simeon). And as you can imagine, it has not been a feature of GSK's promotional literature.
All this, in the eyes of Attorney General Spitzer, adds up to an indictment for fraud - and yes, that's exactly the word he uses. Here we have all the elements of a great case: buried information that would have been harmful to a large corporation, and a whistleblower who brought it to light. It sounds more like a screenplay - as you read about it, you can start mentally casting the movie.
But there are complications. For one thing, SmithKline made no objection when Dr. Milin told them of his plans to present the 377 study. I don't know what the terms of the research agreement were in this case, but often enough the company can exercise a veto in such cases, since they paid for the study. And second, Milin himself is, according to Barry Meier's story in the New York Times last week, a strong believer in the use of Paxil in adolescents. He considers the 377 study to have failed because of a flawed design, not because the drug isn't useful. And as for publishing the results in a journal, that would have actually been quite difficult. Inconclusive or negative results are very hard to publish in general, and in this case even the positive study wasn't the easiest thing to get into the literature. According to the Times article, the paper probably bounced around a couple of times before finding a home. It ended up in the Journal of the American Academy of Child and Adolescent Psychiatry, an appropriate venue but hardly the highest-impact journal in the world. And finally, GSK provided details of both studies to the FDA, as it is required to do.
So the hiding of information, which is the basis of the fraud allegation, comes down to the way that GSK detailed physicians. I wouldn't expect them to go out of their way to present data showing that the drug didn't work, but if one of the study's own authors felt that it was flawed, then I really wouldn't expect them to talk about it much. I can see what Spitzer's trying to do, all right, and I can see what he thinks he has. But I don't think that's what's really there.
All this, presumably, is supposed to further the cause of releasing clinical trial data. Under the current system, the company can show it only to the FDA (or other regulatory agencies) if it chooses, and if they give up on the compound, no one has to see it at all. There have been calls over the years to establish a clinical trial database, but nothing's ever come together.
And you know, I actually think that a general trial database could be a good idea. (It could also be a disaster, and the industry has chosen to avoid the latter rather than seek the former - we'll go into some of the complications tomorrow.) But I think that Eliot Spitzer may have picked the wrong grandstand to make a speech from, and should have thought twice before striking up the band. Then again, that's not the sort of behavior that got him to where he is now. . .


In my March 11 piece below, I mentioned the possibility of Pravachol competing on price with Lipitor. But over at Forbes, Matthew Herper has pointed out that it's currently more expensive. What BMS is going to do with this drug, I can't imagine.
There's also a good story in the Newark Star-Ledger about the whole comparative-trial situation. (That paper does a pretty good job with the drug industry, since so many of the big players are right in its back yard.)


Just a brief note today about the "PROVE-IT" study that Bristol-Myers Squibb ran and has now reported on. This was their big shot at Pfizer's Lipitor, their chance to show that their own statin, Pravachol, was just as good or better. The study was big, it was long, and man, was it expensive. It's just the sort of thing that I was talking about when I wrote recently about comparative drug trials.
And it shows why more of them aren't done. Because, as is well known, when you strike at a king, you have to kill him. BMS found, no doubt to their dismay, that Lipitor is actually a better drug. It's not a gigantic difference, and you can still argue about the dosages, but BMS's drug definitely failed to realize the hopes they had for it. Here are two competing views on the issue, one from DB's Medical Rants (keep scrolling up) and one from Medpundit.
Now what? How do they promote it? The question that BMS is going to get is "Why should anyone take your drug instead of Lipitor?" The only thing I can think of is for them to compete on price. "Take Pravachol - it's proven to be sort of, you know, inferior, but it's sure cheaper!" Doesn't quite have that compelling zing, does it?
If comparative drug trials are going to be done, they're either going to have to be required by law - in which case, as I pointed out, we in the industry will pass along those costs to the consumer, thanks - or they'll have to be done by a third party. (In which case it'll be paid for by everyone who pays taxes, not just the eventual users of the drugs involved.) If you're waiting for more companies to do them on their own, you're going to have a long wait. Especially after something like this happens.
I'll leave everyone with a homework question: Can anyone think of another case - I can't - where a company sponsored a study of their product against a competitor, found that theirs fell short, and publicized it? UPDATE: I mean, outside the drug industry. It's happened several times to us (Zyprexa!) I'm talking about Ford / Honda or Dell / Gateway examples, and I can't think of one. Admittedly, as I've said before, health care is different, but still. . .


Some interesting mail has come in after last week's post on comparative clinical trials. Reader C.B. brings up a point that I spoke about here some time ago, but should have raised again:
"It seems to me that something else is being left out: not all patients respond the same way to any particular drug. . . Suppose that drugs X and Y are equally efficacious when given to the appropriate patient, but the population more responsive to X is smaller than that benefiting from Y. A simple comparative trial would suggest that Y was more effective because it assumes a single type of patient. On the basis of the results, people who should get X would only be allowed Y. . ."
It's true, there are a number of cases like this, and this is one of the traditional arguments for multiple drugs in a given class. I've made it myself. Given the state of the art, it's nearly impossible to untangle these things. In almost all cases, we have no idea why some people respond better to a particular therapy; it's trial and error. Clinically, these things are bottomless pits, so I think that comparative trials are going to be most useful in areas where a large number of patients respond to both drugs under study.
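To put a toy example behind the reader's point, here's a minimal simulation sketch (every number in it is hypothetical, invented purely for illustration): two drugs that work identically well in their own responsive subpopulations, where a straightforward pooled comparison still crowns the one with the bigger subpopulation the "winner."

```python
# Minimal sketch (hypothetical numbers): a pooled comparative trial can hide the
# fact that drug X is just as good as drug Y for the patients who respond to X.
import random

random.seed(0)

N = 10_000                     # patients per arm (made up)
FRACTION_RESPONSIVE_X = 0.25   # only a quarter of patients respond to X
FRACTION_RESPONSIVE_Y = 0.60   # more than half respond to Y
RESPONSE_IF_RESPONSIVE = 0.80  # identical efficacy within the responsive group
RESPONSE_IF_NOT = 0.10         # background-level response otherwise

def simulate_arm(fraction_responsive: float) -> float:
    """Return the observed response rate for one trial arm."""
    responses = 0
    for _ in range(N):
        responsive = random.random() < fraction_responsive
        p = RESPONSE_IF_RESPONSIVE if responsive else RESPONSE_IF_NOT
        responses += random.random() < p
    return responses / N

print(f"Observed response, drug X: {simulate_arm(FRACTION_RESPONSIVE_X):.1%}")  # ~27%
print(f"Observed response, drug Y: {simulate_arm(FRACTION_RESPONSIVE_Y):.1%}")  # ~52%
# The pooled trial says "Y wins," even though X serves its own responders just as well.
```

Run it and drug Y looks clearly superior across the board, which is exactly the result that would lock X's responders out of the drug that suits them.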
But we're in the process of inventing ourselves out of this situation. That's why all that money is being poured into pharmacogenomics - and quite rightly, although the end result is that many drugs are going to have their potential market size whacked into a rather more compact shape. The great thing about pharmacogenomics is that we're finally going to know who should take our latest drug, and we'll be able to find them and sell it to them. The terrifying thing, from the marketing standpoint, is that we're simultaneously going to find another group of patients, a potentially larger group with the same disease, who will never take that drug at all. It's going to be a better world, but one in which some business models (cancer therapy!) are going to have to change.
And in a similar vein, reader R. D. writes:
"I have yet to see someone make a rational case for why me-toos are bad. At most, the argument seems to be that if pharma would just stop spending all its time coming up with me-toos, we could get around to curing cancer and parkinsons and stuff. I think that's bunk. You and I both know that any pharma that could come up with cures for things like cancer or parkinsons could start their own mint. The reason they haven't is because it's HARD, not because they prefer to make less money by painting their old pills purple and trying to convince everyone that they're new and improved."
Purple? What on earth can you be talking about? No, the argument he's talking about is one that (in this form) I don't have too much time for, either. The me-too drugs are there to keep the coffers full to pay for the research that doesn't work out, and to tide companies over the dry spells. I can see the objections to the areas where there are six and eight therapies all piled up on top of each other (for example, does the world really need Crestor?) But if Crestor makes money, some of that's going to pay for something new.
And the reason for that touches on another favorite whipping boy: marketing and promotion costs. Keep in mind the inverse relationships between advertising costs, novelty, and the chances of success. A new drug that does something no one's ever seen for a major disease previously thought untreatable - isn't that what makes everyone happy? How much, comparatively, would have to be spent to market such a therapy? There's no competition - it would sell itself! But what are the chances that any of us are going to find and develop such a wonder?
(OK, some of you are saying "Viagra! First on the market, first in the category, promotion out the wazoo!" But keep in mind: no one was sure that men would actually go to their doctor and admit their symptoms - thus the advertising blitz. And Pfizer knew, with all the other companies working on PDE subtypes, that competition would be coming soon. They needed all the brand recognition that they could buy.)
Meanwhile, contrast a first-ever wonder drug with, say, the umpteenth statin. It's a crowded field, and you have to spend like crazy to make headway. The thing was a bit lower-risk to develop, since you knew that the rationale was there. But your cost-of-sales figures are going to be uglier, and nothing's ever going to help them.
My point is that a company needs both of these kinds of drugs. You can't hope to live only on the first kind, because they happen so seldom and so unpredictably. And no one's trying to live only on the second kind, either, because you've traded higher costs there for relative security. Everybody developing one of the first class wishes they had some of the second to tide them over. And everyone with drugs in the second class is looking for one from the first.


I've already had some reader mail (see here) about this article in today's New York Times. It starts out looking like a real pharma-bashing exercise. Up to a point, it is - and up to a point, it's deserved, too. But in the end it's a more subtle piece, not that you'd guess that from the opening paragraphs. (I have my own solution to the problem the article raises, and it will bring joy to no one. Read on.)
The issue is comparability of drugs, especially drugs with the same broad mechanism of action. Look at all the statins or anti-inflammatories on the market: is there one that's better than the others? Of course, if you listen to the companies that make them and promote them, the answer is clear. Their product is best! But, as in any other industry, that's not the most reliable guide.
The article uses the example of two marketed forms of the protein erythropoietin, one from Amgen and one from Johnson and Johnson. J&J's product is about one-third the cost of Amgen's. Is there any reason to pay for the more expensive option? Medicare has asked the National Cancer Institute to run a study to answer that question, but (as the Times points out early and often) there is a provision in the latest Medicare legislation that keeps the program from even using such evidence of functional equivalence in its payment decisions. As you'd imagine, Amgen is arguing that this provision makes the planned Medicare/NCI comparison study a moot point. Why compare?
This would seem like an easy call: the drug companies are slamming the door on something that might cut into profits. Hey, I work here, and I'm sure that that was the motivation, too. But I should add the standard comparisons to other industries at this point, and note that car makers are not required to prove that their latest models actually work better than the older ones, or better than the competition's. Nikon doesn't have to run head-to-head trials with Canon, nor Gateway with Dell.
I like those examples, but I realize that there are some other considerations. For one thing, we're talking about public funds here, right? Partly, yes, although the managed-care corporations have a big interest in this, too. I'd add that the government spends a lot of money on goods and services that are not required to be comparison tested (but are selected on the basis of lowest bid.) We'll get back to that topic in a couple of paragraphs. The other big factor is that my car and computer examples involve discretionary purchases. Health care is treated differently. It's an emotional issue, a life-and-death issue, and it's always going to be held to a different standard than other businesses.
So, let's test! But as the article makes clear, it's not as easy to test these things as you'd think:
. . .Rarely are such studies able to answer all the most important questions. The National Cancer Institute has been mulling the appropriate design for the Aranesp-Procrit trial for nearly two years and will probably need another year before starting the test. . . In the end, more than one trial may be needed, Dr. Feigal (of NCI) said.
Dr. Feigal declined to estimate the cost or size of the eventual trial or trials, but similar tests have cost millions of dollars. Indeed, for comparative trials to be the size needed to measure true differences between drugs, they generally need to be large, lengthy and expensive.
Indeed they do. The article goes on to talk about the hypertension drug comparison study that got such play in the media a few months ago - not least from the New York Times itself. It hasn't settled the question, though. There are still real doubts about which therapy is most effective (for one thing, because patients in the study didn't take more than one type of drug, although in the real world this is a common mode of treatment.) This was a huge study already, and adding arms to assess combination therapies would have bulked it up considerably.
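To give a rough sense of why these trials balloon, here's a back-of-the-envelope sample-size sketch (the event rates are invented for illustration, using the standard two-proportion approximation): detecting a drug-versus-placebo difference is relatively cheap, but detecting the small gap you'd expect between two active drugs takes an order of magnitude more patients per arm.

```python
# Rough sketch (hypothetical rates): why head-to-head trials have to be huge.
from math import ceil

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # 80% power

def n_per_arm(p1: float, p2: float) -> int:
    """Standard two-proportion sample-size approximation."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

# Drug vs. placebo: a big difference (say 30% vs. 15% event rate) is cheap to detect.
print(n_per_arm(0.30, 0.15))   # roughly 120 patients per arm

# Drug vs. similar drug: a 15% vs. 13% difference takes far more.
print(n_per_arm(0.15, 0.13))   # roughly 4,700 patients per arm
```

And that's before you add arms for combination therapies or start slicing the results by patient type.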
Still, I'm in favor of doing some head-to-head tests, because I think that there are several therapies out there that don't offer much for their price. (I'm looking at you, Nexium!) Here's my proposal - and yes, I'm going to go ahead and treat the drug industry unlike any other. If a company wants to bring out a me-too therapy, it will be required to show evidence of whatever factor differentiates it from the existing agents. The company gets to choose the battlefield: more efficacy? Quicker onset? Fewer follow-up visits to the doctor? Whatever. Pick a reason you're going to promote the drug, and come up with data to back it up. I think we'd end up with fewer me-toos on the market, but we'd lose fewer of them than many critics might think. Many times, drugs that look the same can indeed act differently. Admittedly, it would take some careful clinical work to bring some of the differences out, though.
This change would require a major shift at the FDA. For existing therapeutic modes, you'd need to switch at some point from placebo-controlled trials to competition-controlled trials. Perhaps you could run an initial test-the-water placebo control (after all, these are drugs that have a high chance of working), and from then on you run versus the competition. There are complications - which competitor, for example. But it's possible to do, and it's an idea that has been talked about for a long time.
And who's going to pay for all this? Well, you are (if you're a patient, that is.) Believe me, we're going to pass those costs on, and pronto. Raise the regulatory barrier, pay more money: it's a law of nature. And the lost revenue from the me-too drugs, which have higher chances of success (but still aren't sure things!) will be passed on, too. I think that there are still savings to be realized here - but they're not going to be as big as they seem.


I wanted to take a moment to mention some interesting posts around Blogdom that readers may not have seen. In a response to the news on secretin for autism (see my post below), Dwight Meredith writes on what it was like at its peak of interest:
Human secretin, swine secretin, herbal secretin (which as far as I can tell is an oxymoron) and synthetic secretin were all hawked relentlessly to the parents of autistic children. The price of secretin skyrocketed. People were paying $2,000 for an amount of secretin that before the buzz had cost about $30. It is not an exaggeration to say that parents were mortgaging their homes to purchase secretin for their kids. We now know that a sugar pill would have been equally effective.
Please note that all of that buzz was generated by the fact that a few autistic children had improved after being given secretin for digestive problems. The autism community could not wait for double blind and placebo tested trials. We wanted our miracle and we wanted it now.
This is a man who writes from personal experience, I should note. And I can understand the desperation (well, as much as anyone in my position can - I have two small children, neither of whom has - thus far - shown any neurological abnormalities.) What I have trouble imagining, though, is what goes through the mind of someone who peddles "herbal secretin" to parents who are begging for something to help their autistic child.
Herbal secretin? They didn't even bother making it sound like anything but a heartless scam. Figured the customer base would be too desperate to care, I suppose. I'm ashamed to be in the same phylum with creatures who would do something like this.
There's a larger point about the wait for double-blinded trials, too, of course, which I should save for a longer post. The short form is that I can see the point that some people make, that it would be better to require safety (Phase I) trials, then stand back and let efficacy be sorted out in the marketplace. (SMU's Steve Postrel and I had a long e-mail exchange on that subject a year or so ago.) But then I hear about this sort of thing, and start to think that this is one of those sensible ideas that would only work on some other species than humans.
The other post I wanted to mention is over at Colby Cosh's site. Talking about medical progress, he hits on the idea of looking at the causes of death in the records of ballplayers from the old days, who were in their physical prime. It's an alarming list, and most of the things on it are, fortunately, in the process of disappearing from the world. And good riddance. As Cosh says: "I don't know how anybody kept from just going insane before antibiotics existed, with death lurking around every corner."
One final note - I've forgotten to mention that Charles Murtaugh is back blogging again. There's lots of good new stuff; just start at the top and work your way down.


As I mentioned yesterday, I think the kind of study that compared diuretics with other hypertension medicines was a very good thing. So why don't we see more of these?
There are several reasons. It's worth thinking about the different levels of testing, and what questions they're designed to answer. At the first level, you have questions about specific drugs - is Drug A safe to take, compared with taking nothing? Does Drug A work, compared with taking a placebo? These are the usual subjects of Phase I and II clinical trials.
There's a third question, namely, how good is drug A versus other drugs that work the same way? That one doesn't get answered as often as it should, because the FDA generally only requires testing against placebo. A debate has been going on about when it's appropriate to run head-to-head trials rather than placebo-controlled, and it happens more often than it used to. Drug companies aren't always eager to try this, because they sometimes fear that the advantages of their new compound may turn out to be more subtle than they'd like. But if they think they've got a clear edge, then a trial like this is just the thing. I think we're going to be seeing more and more FDA requests for these sorts of trials, which will definitely make life harder for drug development, but in a good cause.
Beyond specific drug questions, you get to mechanism issues: Does therapy A work better than therapy B? That's what the diuretic study was designed to answer, and it's the rarest kind of all. It's a situation, though, like the old proverb that says when you strike at a king, you have to kill him. If you run one of these trials and your advantage isn't there, you're probably sunk - and if a safety liability shows up versus the existing therapy, you're completely sunk. This is what happened to Bristol-Myers Squibb when they ran Vanlev (omapatrilat) against Vasotec (enalapril) for hypertension. Vanlev's never going to see the light of day, and neither is any other ACE/neutral endopeptidase inhibitor combination.
As one of the interviewees in the Wall Street Journal noted:
Duke's Dr. Califf says it isn't reasonable to expect the pharmaceutical industry to conduct the head-to-head studies needed to answer questions of both science and money. "It's sort of an all or nothing game," he says. There is a potential gain for the winner, but a huge risk for a loser. Some results could essentially kill the market for a drug. "The industry can't afford to take that kind of risk."
Well, whether it's reasonable or not, he's right that companies aren't going to line up to do this sort of study. The business is risky enough already, thanks. No one company is going to try it unless they're forced to (like BMS.) That goes double when you're comparing existing therapies, things that are already on the market. But that doesn't mean that I don't think this kind of study should be done - on the contrary. I think that the NIH's model for the ALLHAT hypertension study could be the way to go - let people run the study who won't be cutting their own throats by running it. It'll be interesting to see if they get a general mandate (and funding) to do just that.


I recently mentioned the non-cholesterol effects of HMG-CoA reductase inhibitors (statins), so I thought I'd follow up on that with a discussion of the recent news (Nature, Nov. 7) that they could be beneficial for multiple sclerosis.
The mechanism of MS is clear, up to a point. (I know, everything is clear, up to a point, but bear with me.) It's an autoimmune disease, a T-cell response to the body's own myelin sheaths around the nerves. This inflammation damages the myelin (a full immune assault damages just about anything), and thus affects nerve impulse transmission. Over time, the neurons themselves are irreversibly damaged (or so it seems; reversal of neurological damage is a hot topic these days, and no one's sure what might be possible eventually.) The course of the disease varies a great deal from person to person, since immune systems vary, too. Current therapy can slow the progression down a bit, but nothing stops it.
The idea that statins might help in something like MS isn't actually new. The drugs have long been known to have some immunological effects: as far back as 1995 (yep, way back then), a study showed that heart transplant patients had a better outcome when pretreated with pravastatin. Since then, a number of miscellaneous signaling pathways involved in inflammation have been shown to be affected by one statin or another. (So many, in fact, that it was getting hard to sort out what was going on.)
The latest work is a very nice study using a mouse model of MS called EAE (experimental autoimmune encephalomyelitis). It's a pretty decent surrogate for the disease, brought on by deliberate (and heavy) immunization with peptides that are close enough to myelin's surface composition to set off the autoimmune response. There are several recipes for doing that, some of which only work in specific strains of mice, and they produce different types of impairment (more or less severe, chronic versus relapsing, and so on).
The statin used was atorvastatin (known to the world, and to nearby planets if Pfizer's marketing department has anything to do with it, as Lipitor.) I note without comment that one of the paper's authors was the recipient of an "Atorvastatin Research Award" from Pfizer, but their choice of this particular compound was justified. Two years ago, it was found to be more potent on immune targets in vitro.
Giving the drug before symptoms set in was effective at lessening them. In fact, the statin even helped after waiting until the peak of the illness, which is a pretty severe test. All this was confirmed on the tissue and molecular levels; the results look very solid indeed.
So how does it work? Probably not through cholesterol lowering per se. But the HMG-CoA reductase enzyme that the statins inhibit produces mevalonate, which is a molecule that does seem to have some effects on immune function. Outside of that whole pathway, statins seem to affect production (although it's not clear how) of a regulatory protein called CIITA. That one's involved in presenting antigens to helper T cells, a process very close to presenting a pack of bloodhounds with someone's dirty sock. So it could be that the T-cell attack on myelin is thrown off at the very beginning.
There are other mechanisms, not mutually exclusive. Statins have also been shown to affect a protein called LFA-1, which is known to be important for T-cell migration. Perhaps even if the T cells are on the scent, they get diverted at the last minute by this pathway. (One way to check would be to use pravastatin, which doesn't seem to affect LFA-1, interestingly.)
Unraveling all this is going to keep a lot of people up late in the lab for some time to come. For now, atorvastatin is going into human trials on MS patients. You can bet that as the mechanism comes more into focus, drug companies will be ready to screen their compound banks again, though. Statins are a very good start in this area, but they don't have to be the last word.


I've been meaning to comment on some recent reports in the Wall Street Journal about the lengths that stock analysts have gone to in order to get information on clinical trials. The main example was one David Risk of Sterling Financial (primarily a short-selling outfit, and quite skeptical of official company information.) Back in February, he signed on as a patient in a trial of a sleep-disorder drug from Neurocrine Biosciences, saying that he fit the profile that they were looking for. After his acceptance, he spent his time quizzing everyone he could buttonhole, then bailed and issued a "sell" on the stock. This was based on one verbal report of a bad reaction in one patient.
Other examples in the article had analysts calling the physician in charge of a trial, pretending to be fellow MDs, and asking for details on enrolling patients (while really trolling for inside data.) One Boston outfit, Leerink Swann & Co., pays physicians involved in clinical trials to have "discussions" with analysts (who pay Leerink Swann, of course.) These discussions supposedly don't violate confidentiality agreements, but I'd like to know what useful information could change hands in a conversation that didn't.
This sort of thing strikes me as being over the line. And the thing is, I like selling stocks short. I'm a bear by temperament; my facial expression in the stock market is a permanently raised eyebrow. Investors should view company press releases with suspicion, because most of the time it's fully deserved. Biotech drips with hype and falsely raised expectations. But that doesn't justify this behavior, which is indefensible on several grounds. Legally, the Sterling analyst entered the trial under false pretenses, and he had to violate his non-disclosure agreements to write the report he did. If someone wants to make a case out of that, they probably could. I could add that he wasted the time of the administrators of the trial, and that these things are hard enough to run without jokers joining in.
On the scientific side, it's really idiotic to grab onto individual data points the way he did. As it turned out, the patient with the bad reaction to the Neurocrine test drug also tested positive for opiates, and was kicked out of the trial for violating its protocol. His case probably had no bearing on whether the drug was working or not, or how safe it was. It's a recurring pattern, though: the same analyst put out a strongly negative report on a Regeneron clinical candidate for obesity because one patient came down with Guillain-Barre syndrome during the trials. Did this have anything to do with the drug? Causality's a tough question, but the patient had had a recent flu vaccination and an upper-respiratory infection (both of which are risk factors for G-B.) No other patients have had the syndrome. There seems to be no reason to assume a connection between the two.
I can't stress this enough: finding out if a drug is safe is very difficult. Finding out if a drug is effective is very difficult. And that's if you're the one running the clinical trials. The only data that mean anything are those from rigorously controlled studies, done on as many patients as possible. And once the numbers come in, you have to sit down for an extended session of head-banging statistics to be sure that you know what they mean. Sure, you can go around picking out tiny bits of positive news (as some companies do) or tiny bits of negative data (as these examples have done.) But both of these are dangerous, stupid, and irresponsible. The people in the WSJ's article go on about how they're just trying to "uncover the truth." The truth is, they're just as bad as any deceptive PR department.
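To show how little a single event proves, here's a minimal simulation sketch (the trial size and adverse-event rate are made up for illustration): assume the drug has no effect whatsoever on a given adverse event, and see how often a small trial still hands an eager analyst a "one case on drug, zero on placebo" headline purely by chance.

```python
# Minimal sketch (made-up numbers): one adverse event in a small trial arm tells
# you almost nothing. Here the drug is completely innocent; the true event rate
# is identical in both arms.
import random

random.seed(1)

N_PER_ARM = 50           # hypothetical small trial
TRUE_EVENT_RATE = 0.02   # same in both arms: the drug does nothing bad
TRIALS = 20_000          # number of imaginary trials to simulate

def events(n: int, rate: float) -> int:
    """Count adverse events in one arm."""
    return sum(random.random() < rate for _ in range(n))

scary_splits = sum(
    1
    for _ in range(TRIALS)
    if events(N_PER_ARM, TRUE_EVENT_RATE) >= 1
    and events(N_PER_ARM, TRUE_EVENT_RATE) == 0
)

print(f"'Event on drug, none on placebo' shows up {scary_splits / TRIALS:.0%} of the time")
# Roughly a quarter of these imaginary trials produce the scary-looking split,
# from a drug that has no effect on the adverse event at all.
```

About one trial in four comes out looking exactly like the pattern that triggered those "sell" reports, with a drug that did nothing wrong.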