There have been plans, over the years, for some sort of data repository for clinical trials. Nothing's ever worked out. The only place all of this is collected is at the FDA, and they only have the trials that companies submitted because they were requesting a new approval or a new indication. If companies run studies but then give up on a regulatory filing, the data may never see the outside world at all.
That's the heart of the New York - GSK suit, as I was discussing yesterday (though, as I pointed out, in this case the data were made public, just nowhere near to the extent that the more positive study was). Presumably, the ideal that Eliot Spitzer seeks would be a central database of all clinical studies conducted on marketed drugs - along with, it seems, a requirement to go into the results of all of them in marketing presentations. (Actually, I think the ideal that Eliot Spitzer seeks is a world in which he is a senator from or the governor of New York, but that's another story. . .)
This sounds like a reasonably clear mandate, but in practice it's quite tricky. It's worth thinking about what a clinical data repository would look like. You'd have to include the statistical workup from the end of the trial, that's for sure. The raw data makes for quite a heap, and extracting the useful conclusions from it is not the work of a moment. You have to be well informed about how and why the trial was designed to even know where to start, and you have to be well informed about statistics to know when to stop.
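To make that concrete, here's a rough sketch (in Python, and purely my own illustration - these field names don't come from any real registry's schema) of what a single entry in such a repository would have to carry. The point is that the raw numbers are only the start; the protocol, the pre-specified endpoints, and the statistical workup all have to travel with them, or the entry is close to useless.

```python
# A hypothetical sketch of one record in a central trial repository.
# All field names are illustrative assumptions, not an actual standard.
from dataclasses import dataclass, field

@dataclass
class TrialRecord:
    trial_id: str                       # e.g. a registry number
    drug: str
    indication: str
    design: str                         # "randomized, double-blind, placebo-controlled", etc.
    primary_endpoints: list[str]        # pre-specified, so post-hoc mining is detectable
    secondary_endpoints: list[str] = field(default_factory=list)
    raw_data_uri: str = ""              # pointer to the full dataset "heap"
    statistical_workup: str = ""        # analysis plan plus the worked-up conclusions
    regulatory_status: str = "unfiled"  # filed / approved / abandoned
```

Even a toy schema like this makes the curation problem obvious: somebody has to decide what counts as the official workup, and somebody has to keep the design metadata honest.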
Even with all the conclusions attached, an open raw-data repository would be a real invitation to cranks of all kinds to go in and massage the data. I've spoken about this issue before, because companies themselves can be guilty of trying to extract more conclusions than the data will support. Imagine the ax-grinding subgroup analysis and selective data mining that would go on - for one thing, the trial lawyers would be adding statisticians to their staffs to do nothing but comb through the numbers all day, looking for tort-worthy tangles.
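If you want to see why that worries me, here's a quick back-of-the-envelope simulation (my own toy example, not data from any actual trial): give a "drug" that does absolutely nothing to a couple of thousand simulated patients, then slice them into forty arbitrary subgroups and test each one. At the usual p < 0.05 threshold, a couple of those slices will come up "significant" purely by chance, which is exactly the raw material a motivated analyst needs.

```python
# Toy illustration of the multiple-comparisons problem in subgroup trawling.
# The "drug" has no effect at all; the outcome is pure noise in both arms.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients = 2000
n_subgroups = 40   # age bands, sex, region, baseline-score bins, ...

treated = rng.integers(0, 2, n_patients).astype(bool)
outcome = rng.normal(loc=0.0, scale=1.0, size=n_patients)

# Arbitrary, overlapping "subgroups" assigned at random.
subgroups = rng.integers(0, 2, size=(n_subgroups, n_patients)).astype(bool)

false_positives = 0
for i, members in enumerate(subgroups):
    drug_arm = outcome[members & treated]
    placebo_arm = outcome[members & ~treated]
    _, p = ttest_ind(drug_arm, placebo_arm)
    if p < 0.05:
        false_positives += 1
        print(f"subgroup {i:2d}: 'significant' difference, p = {p:.3f}")

print(f"{false_positives} of {n_subgroups} subgroups look positive by chance alone")
```

Run it a few times with different seeds and you'll almost always turn up a few "hits." That's the whole trick: comb through enough slices of the data and something tort-worthy will appear, whether it's real or not.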
Even if you just have the worked-up data in the repository, you still face the problem of data overload. Heavily studied drugs can have a long list of differently designed trials attached to them, all of which are either asking different questions or asking the same one in different ways. Digging through them is not something you can do on your lunch break.
An even tougher problem is what to do about poorly designed or poorly executed studies. That seems to be the case with the Paxil 377 data I spoke about yesterday, which is why one of the study's co-authors wanted to publicize it in the first place. Who gets to decide if a particular study is valid? Whose comments and conclusions will be attached to the results? Who gets to weight them against the other results collected on the same drug?
These are the sorts of issues that are wrangled about in the regulatory approval process, and the disagreements can be heated, even in a roomful of people who all know what they're doing. How many physicians would be willing to consult a Central Clinical Trial Database and do the wrestling themselves? How many would even have the time? For the most part, practitioners have as their default setting to trust the FDA, since the agency has analyzed the data already.
As for what companies can say to doctors, limits in this area have banged right into free-speech considerations in the courts. Attorney General Spitzer's on-message response to this is that you can't use a First Amendment argument to justify fraud, and I'll let that one go by without swinging at it. But what would he have disclosure look like? Should it be verbal (and if so, how would it be enforced)? Should it be a written handout on the total clinical data generated for a new drug? That makes more sense, but then we get back to the question of how summarized the results should be, and who gets to write the summaries.
The thing is, I think that a clinical data repository would be useful. I know that I'd like to go data-mining through previous studies, looking for things that are relevant to my current projects. And I'd like to see what happened in failed trials so we can be sure not to run ours in the same fashion (which was Dr. Miner's point about the Paxil 377 study). It could be worth trying, but I worry that making it work might require the world to be a little better than it really is. We'll see.
1. The Un-Candidate on June 9, 2004 9:58 PM writes...
Hey, uh, just to get calibrated, how much paper is in the final approval process? I'm guessing that the final application is measured in reams, right?
2. Derek Lowe on June 10, 2004 9:21 AM writes...
Oh, yeah. There's more electronic submission going on now, but there's still plenty of paper. Most of the time, the NDA (New Drug Application) goes off to the FDA in its own truck, sometimes with a celebratory send-off.
You can get the idea from this page at the FDA, which has helpful hints like not packing more than 50 pounds of stuff into each box (heavier ones are hard for the staff to handle). And remember:
"There is no loading dock available for document delivery at the 1401 Rockville Pike address. The DCC (Document Control Center) is located on the second floor of the building. Pallets are not permitted on the elevator; therefore boxes must be unloaded on ground level of the rear entrance and placed on a hand-truck for delivery to the second floor. Hand trucks are not permitted through the front lobby"
3. Linkmeister on June 10, 2004 4:22 PM writes...
Oh my. That admonition is priceless.