So it turns out that the major medical journals have their own plan for bringing about a clinical trials database: they're going to require companies to register trials before they'll allow publication of their results. I was taken aback at not having heard anything about this idea, until I saw that no one else in the drug industry seems to have, either.
I don't really have a problem with this at all. For one thing, it's better than having the state sue you into doing something - this is a good old free-market fight. Most of the major medical journals need revenue from pharmaceutical advertising, and the companies need the prestige of publishing in them. Come, then, let us reason together.
And the first step here, merely registering the fact of a trial, will sidestep some of the issues I brought up the other day with how to report the final data. I know that there will be pressure to include that data as well, and if we can find a way to deal with those reporting issues, we should. But even a registry of trials would show that something had been tried, naturally leading to questions about how things came out. (That's important for the medical editors' side of this dispute, because the studies that companies don't want to talk about aren't going to be submitted for publication, anyway - the journals have no other leverage at that point.)
Now, one way around this would be for companies to forsake publication in the journals involved (a tough thing to do, mind you) and just present the data with a big splash at a prestigious meeting or two. If you see more professional societies joining this trial-registry movement, especially ones that don't publish their own journals but still sponsor large meetings, then I think the outcome will have become clear.
I think, though, that people have some odd ideas about how clinical trials work and how many of them there are. Consider columnist Michelle Malkin, who wrote about this story today:
From Statistics 101 we know that if a product is as effective as a placebo, 1 in 20 trials will produce a statistically significant finding due to random chance. Since companies run dozens of trials on each major compound, it is not too hard to produce at least one positive, statistically significant finding suitable for publication. The rest are buried in the "circular file." This is great marketing but it is not science.
Um, we don't actually run "dozens" of trials on every major compound. We don't have enough money to do that, as hard as that may be to believe, and in many cases there just aren't enough patients to go around. So we just don't get to play with the statistics in this way. It would be irresponsible, she's right about that, but we don't do it.
And that argument would only hold if all twenty trials were run the exact same way (Statistics 101, you know). Twenty different trials, each run a different way on different patient groups, can produce results all over the map. Trying to do a meta-analysis over the whole group is not a job you want; it's often not even possible. And besides, even if they were all the same, the level of statistical significance that Malkin's talking about (1 in 20 by random chance, that is, p < 0.05) isn't very high at all. A clinical trial has to clear a much stricter bar than that to convince anyone at either the FDA or the company itself.
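For what it's worth, the Statistics 101 arithmetic only works under exactly the assumption I just mentioned: twenty identical, independent trials of a drug that's no better than placebo, each tested at the p < 0.05 level. Here's a back-of-the-envelope sketch (hypothetical numbers, no real trial data) of how the false-positive chance stacks up under that assumption:

```python
# Hypothetical illustration: if an ineffective drug is run through n
# identical, independent trials, each judged "significant" at p < 0.05,
# what's the chance at least one comes up falsely positive?

p_false_positive = 0.05  # the p < 0.05 threshold from the quote

for n_trials in (1, 5, 10, 20):
    # P(at least one false positive) = 1 - P(all n trials come up negative)
    p_at_least_one = 1 - (1 - p_false_positive) ** n_trials
    print(f"{n_trials:2d} trials: P(>=1 false positive) = {p_at_least_one:.2f}")

# With 20 identical trials, the chance is about 0.64 - but note that this
# whole calculation collapses as soon as the trials differ in design,
# endpoints, or patient populations, which is the point above.
```

Running it shows the chance climbing from 0.05 for a single trial to roughly 0.64 for twenty. But that 0.64 figure is precisely the artifact of assuming twenty interchangeable trials, which, as I said, isn't how the real world works.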