About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
November 25, 2014
Here's a really nice example of high-throughput reaction discovery/condition scouting from a team at Merck. They certainly state the problem correctly:
Modern organic synthesis and especially transition metal catalysis is redefining the rules with which new bonds can be forged, however, it is important to recognize that many “solved” synthetic transformations are far from universal, performing well on simple model substrates yet often failing when applied to complex substrates in real-world synthesis. A recent analysis of 2149 metal catalyzed C-N couplings run in the final steps of the synthesis of highly functionalized drug leads reveals that 55% of reactions failed to deliver any product at all. The missed opportunity represented by these unsuccessful syntheses haunts contemporary drug discovery, and there is a growing recognition that the tendency of polar, highly functionalized compounds to fail in catalysis may actually enrich compound sets in greasy molecules that are less likely to become successful drug candidates. . .
That "recent analysis" they mention turns out to be an internal study of Merck's own electronic lab notebooks, and it sounds very believable. That's a problem of organic chemistry: we can do a lot, but rarely can we do it in a general fashion. The paper details an effort to look for Pd-catalyzed coupling reactions in DMSO or NMP, which are (as the authors point out) not the usual solvents that people choose. But these have many advantages for high-throughput experimentation, not least the solubility of more complicated substrates. They started off by screening bases and catalysts in glass microvials in a 96-well array, but then tried those conditions (and more) in a 1536-well plastic plate.
On that level of miniaturization, you can really start clearing some brush. And they uncovered a range of reaction conditions that had not been reported before, using a very real-world set of coupling partners (shown). Applying one of the more general-looking protocols to the whole set, though, still showed about a 50% failure rate, so they turned around and took 32 of the failures and ran new arrays of 48 reaction conditions on each of them. (That's what I mean by clearing things out quickly!) Those 48 reactions consume less than 1 mg of substrate in total. By careful mass-based encoding of the array, they could analyze the 1536-well plate in under three hours by LC/MS.
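The key to that mass-based encoding is arranging the plate so that each well's expected product has a mass the LC/MS can tell apart from its neighbors'. The paper's actual scheme isn't reproduced here, but as a rough sketch of the idea (with made-up product names, made-up monoisotopic masses, and an assumed mass tolerance, all purely illustrative):

```python
from itertools import combinations

# Hypothetical expected product masses (Da) for a handful of wells;
# the real Merck set is far larger, and these numbers are invented.
products = {
    "A1": 312.102, "A2": 298.087, "A3": 340.133,
    "B1": 326.118, "B2": 355.144, "B3": 369.160,
}

MASS_TOL = 0.02  # assumed instrument mass tolerance in Da

def mass_clashes(products, tol=MASS_TOL):
    """Return pairs of wells whose expected products are too close in mass
    to deconvolute by LC/MS alone."""
    clashes = []
    for (well1, m1), (well2, m2) in combinations(products.items(), 2):
        if abs(m1 - m2) < tol:
            clashes.append((well1, well2))
    return clashes

print(mass_clashes(products))  # [] means every well is uniquely identifiable
```

An empty clash list is what lets a single pooled LC/MS run report out the whole plate: each observed mass maps back to exactly one well.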
That led to optimized conditions for 21 of the 32, but they took 6 of the remaining recalcitrant combinations and tried another array on them, this time varying catalyst loading, amount of nucleophile, and amount of base. 5 of the 6 yielded to that optimization, which confirms the usual belief that just about any metal-catalyzed coupling will work, if you're just willing to devote enough of your life to optimizing it. And this automated system significantly changes the value of "enough of your life".
This is different from Design-of-Experiments setups, in that those are designed to minimize the number of experimental runs by identifying (or trying to identify) the key variables. But with very small, highly automated experiments, that's not really as big a concern. You can just let it rip: try a bunch of stuff and look for granularity in the reaction condition space that you'd miss by trying to get more efficient. The Merck team winds up by saying "In biomedical research, chemical synthesis should not limit access to any molecule that is designed to answer a biological question", and that really is the ideal we should be working towards.
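The "let it rip" approach is just a full factorial: one well per combination of conditions, with no variable-reduction step at all. A minimal sketch (the dimension names and counts here are invented for illustration, though they happen to fill a 1536-well plate exactly; the solvents are the DMSO/NMP pair from the paper):

```python
from itertools import product

# Illustrative condition dimensions; the real screen's catalysts and
# bases are listed in the Merck paper, not reproduced here.
catalysts  = [f"cat{i}" for i in range(8)]
bases      = [f"base{i}" for i in range(8)]
solvents   = ["DMSO", "NMP"]
substrates = [f"sub{i}" for i in range(12)]

# Full-factorial layout: every combination gets a well, nothing is pruned.
wells = list(product(catalysts, bases, solvents, substrates))
print(len(wells))  # 8 * 8 * 2 * 12 = 1536, one plate's worth
```

A DoE design would instead sample a fraction of these combinations and model the rest; at microgram scale, running all 1536 is often cheaper than being clever.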
Category: Chemical News
November 24, 2014
I'm doing research today and tomorrow, but after that I'll be taking the rest of the week off, so blogging will be intermittent. Wednesday I'll be home making the traditional chocolate pecan pie (recipe here, with many helpful suggestions in the comment section), along with some of the other Thanksgiving food that can be prepared ahead of time. The menu this year will be the same as last year, by popular demand.
After consulting my wife and her mother, I'll see if I can post a recipe for the Iranian rice dish (javaher polow) that we serve along with everything. Problem is, it seems that Iranian food is one of those cuisines that varies so much from region to region (and household to household) that it's hard to put up a recipe without causing a fight of some sort. The closest situation to that in America is with barbecue - what one part of the country considers the pinnacle of the art would be rated as the next thing to cannibalism somewhere else. And so it is with Iranians. Common phrases include "Oh, well, so-and-so doesn't know how to cook (Dish X) the right way", or "They don't know how to make any decent (insert whole swath of cuisine) in (insert Iranian city or region), anyway". Add in some "Well, you used to be able to get good (type of food), but you can't any more", and I can see how my Southern upbringing blends with my wife's Iranian one pretty smoothly. But about the food we make at home, ourselves, I have no dispute at all!
Category: Blog Housekeeping
I've written several times here about reaction discovery techniques, going back to 2011. And I've been meaning to link to this recent review in Nature Chemistry, because it's an excellent summary of the field and the relevant literature. If you're at all interested in the topic of combinatorial reaction-searching and new methods development, you really should have a look - as far as I can see, it's comprehensive.
Category: Chemical News
Here, then, is the bottom of the drug-manufacturing barrel: the recent case in India where women at a sterilization clinic were poisoned by defective ciprofloxacin tablets. They were supposed to be getting 500mg of the antibiotic, but after several deaths, analysis has shown that there was perhaps 300mg of the actual drug present, and some zinc phosphide rat poison as well.
This is horrifying and inexplicable. Zinc phosphide is a smelly grey-to-black powder, and ciprofloxacin is white and odorless. It goes without saying that no facility processing antibiotic tablets should be preparing rodenticide as well, and there is no way that the two could be mixed short of absolutely criminal incompetence. The companies involved are two Indian generic manufacturers, Mahawar Pharmaceutical and Kavita Pharma. There are reports in the Indian press that when authorities raided the companies for samples of the drugs, a significant amount of drug material appeared to have been recently burnt.
India has major problems with corruption in its state-run health care, and there are suspicions in this case as well. (The press is also reporting that at least one of the companies has been fined for substandard or fake drugs in the recent past, which brings up the question of why the government was dealing with them now). And overall, the top end of Indian technology and medicine is something that the country can be proud of - but the bottom is a disgrace, as Indian citizens themselves are well aware.
Category: The Dark Side | Toxicology
The Tufts Center for the Study of Drug Development has a new estimate for the cost of developing a new drug. Past estimates have been greeted with a range of reactions, not all of them favorable. In general, people who actually do drug R&D tend to find the Tufts numbers reasonably credible, and people who are upset with the industry's pricing structures tend to find them egregiously inflated.
Bruce Booth has an excellent breakdown of the latest iteration. Rather than do something similar, I'm going to refer people to his post. As he points out, there are three components to the Tufts numbers: the direct cost of developing a drug, the attrition rate (and paying for past failures), and the time/opportunity cost of the investment. Many of the disagreements about these estimates come from people who only want to talk about the first part. Unfortunately, if you actually want to use real money to develop a real drug, the other two come into play.
Bruce's analysis indicates that the Tufts assumptions for the second and third parts are probably pretty accurate, but he spends some time going into the direct cost section, since in recent years the proportion of orphan/small indication drugs has risen. But overall, his conclusion is that "If the Tufts estimate is off the mark for the entire industry, it doesn’t appear off by a huge amount, and certainly not the order of magnitude implied by the critics." Considering that some of those critics have advanced numbers that are more than two orders of magnitude off, I think he's right.
Category: Business and Markets | Drug Development
November 21, 2014
Covalent drugs have been a big item in R&D over the last few years, and I wrote here about covalent fragments. The whole topic of reactive groups in small molecules and their interaction with living systems and biomolecules is a complicated one, with many interesting twists.
Now the Shoichet group at UCSF has what could be a useful computational approach to the field. They're reporting "DOCKovalent", a virtual screening platform for covalent inhibitors, and illustrate it with examples of cyanoacrylamides and boronic acids across several different enzymes. The calculations are based on the angles needed for an electrophile to react with a residue like cysteine - these reactions, as organic chemists know, can be rather constrained in what approaches the reacting partners have to take. In solution you can see stereoelectronic effects that arise from the structures of the small molecules, but most reactions can find a way. When this process is taking place in the clefts of a protein, though, the number of feasible approaches can get cut down considerably. I'm generally pretty hard to convince when it comes to virtual screening, but the number of constraints needed here gives me more hope than usual for meaningful results.
Running their program retrospectively, with known covalent inhibitors of various enzymes versus decoy molecules, showed that the virtual screening did (in most cases) give hit sets that were well enriched in the real binders. Like any such technique, it's going to fail sometimes, both on individual compounds and on whole runs, but the Shoichet group has made the tool available on the web for anyone to try for free. Here's the link, and I applaud them for putting it out there. The appeal of computational approaches has always been the low barrier to entry, and this lowers it even more. That's the hazard of computational approaches, too, of course, but the results from a virtual screen like this should at least be easy to subject to a real-world test.
Category: In Silico
There's plenty of excitement about PCSK9, the latest LDL-lowering pathway to make it deep into the clinic. You can tell that companies (and investors) have high hopes for it, since it's heading right into a market that's dominated by generic statins. The optimism may well be justified - for example, Sanofi and Regeneron recently presented some impressive data comparing their antibody to Zetia in patients who can't take higher statin doses, and Amgen has shown similar numbers. There are at least five antibodies and one RNAi in development, in a very tight race to the FDA and to the market. To give you an idea, the latest development was Sanofi and Regeneron paying cash to Biomarin for an FDA priority voucher that reduces their review time to six months instead of ten. (No, I didn't know that you could do that, either).
Amgen, though, appears to be trying to own the whole racetrack. They're pushing a patent claim to any antibody to PCSK9, not just their own agent. Update: See below for more on this. Here's their press release; they're quite up front about this strategy. But I have doubts about how well that's going to work. Over the years, the judicial trend seems to have been to not go for such broad claims in the biomedical field. There are so many potential antibodies to any given protein, and so many ways of modifying them and dosing them, that I find it hard to imagine a straight patent claim on all that space. It's a lot like claiming "All inhibitors of enzyme X" in small molecules, and we already know that such claims don't stand up. It wouldn't surprise me, in fact, if someone had already tried a broad antibody claim like this and had it shot down (does anyone have an example?)
So I don't think that the PCSK9 struggle is going to be decided by the patent lawyers. It looks to be fought out in the clinic, at the FDA, and especially out there in the real market. And it will be quite something to see.
Update: see the comments section. Amgen doesn't seem to be claiming every single antibody, just those against a particular epitope (although a very useful epitope, obviously). And patent litigation in such situations is complex, with precedents (none of them necessarily exact) going both ways. So this will be quite something to watch. Perhaps Amgen is hoping to be paid a cut to go away?
Category: Cardiovascular Disease | Patents and IP
November 20, 2014
Now, if you want to get a paper published in a prestigious journal like The International Journal of Advanced Computer Technology, you'd better make sure that you're up to it. You'd better make sure that you have good stuff to report, and that the paper is worth a spot in a venue like that. Most of all, you'd better make sure to send $150 to some guy named Tej Pal Singh.
That's how this paper made it through. It's from a ticked-off Australian engineer, and he gave it the arresting title of "Get Me Off Your F*#&ing Mailing List". That, by coincidence, is also the text of the abstract. And of the entire paper, over and over and over, just without those internal punctuation marks.
Accepted, with "minor revisions". Rated as "Excellent" by the "reviewer". Ready to publish. All that stands in the way are those hundred and fifty bucks, and somehow I think that Tej is going to have a bit of a wait.
Category: The Scientific Literature
We saw AstraZeneca talking about its upcoming onslaught of drug approvals the other day, and not to be outdone, Sanofi is doing the same. Actually, they're seeing AstraZeneca's puny bet and raising them: Sanofi says that they could have "up to six" approvals next year alone, and as many as 18 by the end of 2020.
An average of three new drugs a year for six years in a row? I hope they can do it, but it's never been done. Not even close. I sort of wish that companies, when they started talking like this, would just go for it, and say something like "We plan an absolutely unprecedented, record-shattering burst of productivity. Grovel before us, puny humans!"
The biggest surge that I know of is GlaxoSmithKline's recent run: five approvals (one of them a vaccine) in 2013. They've had one new molecule approved earlier this year as well. So with that impressive performance, they're averaging out to what Sanofi is proposing to do for a six-year stretch. That will be very good news if they manage it, both for patients and (obviously) for Sanofi. But when companies start talking like this, they seem to attract lightning bolts.
One last thing to note is that if Sanofi really is headed for a surge in drug productivity, how much of that, do you think, happened on ex-CEO Chris Viehbacher's watch? The board just ousted him, and here they are a couple of weeks later talking about how great things are going. . .
Category: Drug Development | Drug Industry History
There's a lot of effort (and a lot of money) going into targeted nanoparticle drug delivery. And that's completely understandable, because the way we dose things now, with any luck, will eventually come to seem primitive. So you used to just have people eat the compound, did you, or just poke it into their bloodstream with a sharp stick, and let it float around wherever it would and hope that it made it to the target without doing too much else? Quaint.
The nanoparticle idea, on the other hand, is to encapsulate the drug somehow in the layers of these tiny particles which will release it only under the right conditions. The outermost layers, meanwhile, are meant to be coated in ways (recognition peptides, usually) that send the payload to only the right cell types. Imagine a drug for lung cancer where all of the dose goes to the lungs, and all of it hits only the cancerous cells. You could put in the roughest, toughest chemotherapy agents available, because you wouldn't be stuck with poisoning the rest of the patient's body at a slightly slower rate than the cancer, which is how it works too much of the time now.
But that level of control is yet to come. We just got another read on this in the clinical results from Bind Therapeutics, one of the leading companies in this field. Bind is another Bob Langer-derived company - when other parts of the US (or other countries) talk about wanting to have humming biotech hubs of their own, they'd be happy just to have Bob Langer. Bind, under CEO Scott Minick, has deals with an impressive list of big pharma companies to try to apply their nanoparticle delivery systems to existing drugs, although Amgen pulled out of an arrangement with them over the summer.
That didn't help the stock, and neither did the latest news. This was a Phase II study in non-small-cell lung cancer patients with docetaxel, a widely used chemotherapy drug that could certainly use some targeted delivery. The results were mixed. Investors were clearly hoping for something better, but it could have been much worse. As that FierceBiotech link above details, the company saw some responders when the new formulation was dosed every three weeks, but not when it was dosed every week, an interesting result that's going to take some thinking about. Inside the every-three-weeks group, the patients with two particular tumor varieties (KRAS or squamous cell carcinoma) seemed to show relatively good responses. But the sample sizes there are small.
The company is planning another round of Phase II, concentrating on those subtypes and dropping the once-a-week dose. That's exactly what you do in Phase II: the drug has hit the real world with real patients in it, and you do whatever seems to work. It would have been great if they'd seen a bigger across-the-board response, but these are the early days of targeted nanoparticles. There's a vast amount we don't know about these things; the odds are huge that no one is going to be hitting any balls over any fences for a while yet. Bind's next trial should tell them, though, if their current docetaxel particle idea is worthwhile for NSCLC.
That could go either way. The current trial may turn out to have lit up just the sorts of patients who will go on to show impressive benefits, or those effects could just flatten out and slide back into the statistical swamp. Here it is, the absolute essence of drug discovery: there is no way to know in advance. The only way to find out is to round up some more patients, round up some more drug, and round up some more money and try it. Good luck to them!
Category: Cancer | Clinical Trials | Pharmacokinetics | Toxicology
November 19, 2014
Spam mail is evolving: this afternoon I had one purportedly from the American Medical Association, although it was definitely not sent from their domain. In slightly ungrammatical English (but a cut above many other spammers), it informs me that they're sharing a document with me on Google Docs, and invites me to click a link. That is to say, it's "an important document for you perusal". I can see, though, that the link would direct not to a shared PDF, but to some JavaScript-based thingie (jQuery and so on) in Brazil. No thanks. The message makes liberal use of the actual AMA logo and so on, and includes a slightly garbled copyright notice at the end, which is a touch I always appreciate from these sorts of people.
I note this mainly because it's the first medical/scientific phishing attack I've had. I mean, I get plenty of spam, and more invitations to speak at huge-sounding Chinese conferences than I can count (Track 17?). But at least those are after me in a more aboveboard fashion.
The lower end of the spam business, I've read, deliberately makes the come-ons so ridiculous to eliminate false positive responses - anyone dumb enough to respond to the exiled widow or corrupt construction minister is probably dumb enough to go all the way, which keeps the senders from wasting valuable scamming time. Phishing attacks, on the other hand, at least the better class of them, go out of their way to seem plausible and clickable. How long will it be before someone starts faking ACS emails? And what will their success rate be?
Category: The Dark Side
So AstraZeneca says that they're expecting "8 to 10" approvals in 2015-2016. Has anyone ever done that? Even close? I take it that this whole press release is there to pump up investors and keep Pfizer from coming back and making another bid for them, but although the company does have a lot of stuff going on, this just seems wildly optimistic. What's the modern record for most drug approvals in one year? Two years?
Category: Business and Markets | Regulatory Affairs
Hmm. Via Twitter, we find this interesting example of moving the goalposts. NeoStem, a small stem-cell company, announced results the other day for a trial of their cardiac stem cell therapy. One bearish trader who'd been following them was surprised that the stock didn't drop more on the results, given that the trial didn't seem to have reached its primary endpoints at all.
But that's when he discovered that the primary endpoints had been changed. Here's the record at ClinicalTrials.gov, and you can see that a lot has been taken out, and a lot added. Measurements of myocardial perfusion have been de-emphasized, and if you guessed that the company saw no differences there in the trial data, your psychic powers are functional today. Measurements of major adverse cardiac events, though, have been added, and if you guessed that the company did see encouraging numbers there, you're two for two. And if by "encouraging", you meant "but still not statistically significant", then you should head for the race track this afternoon, because you're on fire.
So how much of this sort of thing is allowable? From what I know, this is over some sort of line, but I don't know the ins and outs of Clinicaltrials.gov, never having seen a case quite like this before. Any thoughts? As for NeoStem themselves, their press release was all about "positive data", but the stock, which had moved up on Friday and Monday in anticipation, did end up with a substantial loss. Would it have been even more substantial if more people had read more closely?
Category: Clinical Trials
There's been a lot of safety coverage on the blog this week. One recent accident I haven't talked about is an azide explosion at the University of Minnesota - C&E News, though, has plenty of coverage. Back in June, a grad student was injured when a batch of trimethylsilyl azide exploded - a 200 gram batch. As more details came out, it turned out that the student had reached in to adjust a thermometer on the reaction setup, and was wearing no protective gear:
More important than the reaction, Tolman emphasizes, is the deeper root cause of the incident: insufficient recognition of the reaction’s hazards. Warnings included with literature protocols were “pretty lame,” he says. He also thinks that the lab group became complacent after doing the reaction several times without incident. “While they were aware of the hazards, concern about them became less up front,” he says.
Indeed. This just emphasizes the sort of thinking I was talking about the other day, the "What's the worst that could happen?" exercise. In the case of two hundred grams of azide, the worst that could happen should be apparent to anyone who's being let loose in a graduate chemistry lab: that thing could blow up. I notice that the student involved is quoted as saying that he's learned that the hazards involved in running a reaction (even an Org. Syn. prep, which this was based on) are not necessarily made clear in the literature. While that's true, the hazards of 200g of sodium azide going to the TMS azide should have been clear from the start. This, to me, was not a failure-to-warn; this was a failure-to-realize, as the department chair says in that quote above.
There was a lot of good discussion at Chemjobber's blog when this happened (here as well), with one commenter noting that 200g of trimethylsilyl azide costs about $600. That, to me, illustrates another problem: it's well worth six hundred bucks to keep someone from having to do an azide reaction of that size in your lab. I know that funding is tight and that academic labs can't just trot out and buy all the reagents they need, but still.
More recently, a letter to C&E News mentions that no one at the university has been talking about blast shields. The response is that these things are sort of the last line of defense, and that a higher-level review ("Should we be doing this at all?") would be a more general solution. That's true, but people are going to set up reactions of all sorts, at all hours, especially in an academic lab. You can't keep an eye on everyone, all the time, not even close. But even if the plan of a large-scale azide prep gets carried out, a general recognition that you shouldn't be near the thing without barriers and PPE would be pretty useful. Now, it's true that on that scale many blast shields are only going to be able to do so much, but they can at least soak up some shrapnel, and I'm just baffled by anyone setting up a reaction even remotely like this without having one in place.
It all gets back to thinking about what you're doing, and there's no form to fill and no box to check to make a person do that. Always think about what the most likely problem might be with a reaction, and what the worst problem might be. If you're heating up two hundred grams of sodium azide and the answer to both questions isn't "This could blow up and kill me", then there are even bigger problems that need to be addressed.
Category: Safety Warnings
November 18, 2014
Several folks on my Twitter feed have mentioned a new book coming out from Springer, 100 Chemical Myths. I haven't seen a copy, but the publisher says it "deals with popular yet largely untrue misconceptions and misunderstandings related to chemistry." That gives plenty of room to work in, for sure.
Looking over the table of contents on that Amazon link, my guess is that I would agree with the authors pretty much across the board. But my other guess is that the book won't do as much good as anyone would like. I fear that it may be 400 pages of preaching to the choir - the people who need to read it won't ever hear about it, and they probably wouldn't crack the cover on it even if they did. They already have their opinions, firmly held ones, and they already know that this book is full of attempts by some chemists to change their minds, so it'll probably be dismissed out of hand. It could provide useful material for one-on-one encounters, though - I'll report back if I get more details.
Category: Snake Oil
Here's an NMR imaging blog with details of a recent problem in an Indian facility. Two people ended up stuck to the machine, pinned by an oxygen cylinder (!) that one of them brought into the room. Both sustained injuries.
There are two questions here: one is how anyone is allowed to wheel a ferromagnetic metal cylinder anywhere near an NMR magnet, and the other is how it took so long to quench the magnet once the accident had occurred. That latter point is addressed by the blog link above - the hospital is saying that the emergency quench circuit malfunctioned, and that it took four hours for a GE technician to arrive and get things shut down. I'm no NMR hardware expert, but I wonder about that one myself. As that blog post concludes:
Whether or not GE was really at fault in Mumbai we shall learn eventually, I hope. (I have heard rumors that some sites like to bypass their quench circuit in order to avoid having the cost of recharging the magnet should the quench button get activated. Insert your own exclamations of disbelief here because I'm incredulous.) In the mean time, this sorry saga is an opportunity for all of us to review our own procedures and take the extra moments to ensure that we've done everything humanely possible to eliminate risks. There really is no excuse.
Category: How Not to Do It | Safety Warnings
The Allergan / Valeant saga has come to an end, with Allergan fighting them off by doing a deal to be taken over by Actavis. No word yet on whether they're going to let Allergan keep the invisible golf course.
Valeant is, of course, famously tight-fisted (which is why Allergan had no desire to be taken over by them), and the Actavis price was about six billion dollars higher than what Valeant said that they were willing to pay. One wonders if all six billion dollars were necessary to get them to go away, but Actavis must have run their own numbers. If the deal turns out to be a success, it might cause people to look with suspicion on any future Valeant bids for other companies, though.
That Reuters story says that the combined companies expect to have an R&D budget of $1.7 billion. Allergan's current spending is around 1 billion (and falling), and Actavis' was 0.62 billion before its most recent acquisition, so we may have another M&A case of one plus one equaling about 1.8. Still, if it had been Valeant, one plus one would have equaled about 1.04, so there is that. Actavis does spend less on R&D as a percentage than Allergan does, though.
Category: Business and Markets
November 17, 2014
The data from the IMPROVE-IT trial on cardiovascular outcomes for Vytorin have been released. And the combination met the primary endpoint: fewer heart attacks and strokes compared to those already on statin therapy alone.
Matthew Herper has an excellent roundup of the results and their context. The effect is real, but it's not gigantic, either, so the cost/benefit argument will go on (or at least it will for another couple of years, when the drug goes generic). It is a boost, though, for the use of LDL as a biomarker for cardiovascular risk: a different mechanism to lower LDL (cholesterol absorption inhibition), when added to a statin, decreased the risk. Some are wondering about other effects of ezetimibe (the cholesterol absorption inhibitor itself) influencing the result, but I'm not so sure about that. From what I know about the compound, it's hardly absorbed at all, and most of its effects are indeed in the gut wall. Update: nope, I'm hallucinating again.
Now it's time to speculate how things might have been if the earlier ENHANCE trial hadn't been handled so clumsily by Merck and Schering-Plough. The suspicion that episode cast on the drug hurt its sales and, according to today's results, hurt patients as well, who didn't get treated with it. If you think being up front about bad results will hurt you, take a look at the costs of looking deceitful.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials
As many readers will have heard, there was a fatal accident at a DuPont plant in the Houston area over the weekend. Four workers were killed by methyl mercaptan (methanethiol), and here's more on what happened. As is often the case, there were fatalities at the site of the leak, and more from those first trying to help.
Methanethiol itself is certainly toxic, although its powerful smell (which apparently reached 40 miles downwind of this accident) usually keeps anyone away from harmful concentrations of it. In the concentrations experienced next to a major leak, though, differences in toxicity for such gases are almost beside the point: just about anything can be deadly.
It'll be some time before anyone finds out just what went wrong at the La Porte plant. A large chemical operation has plenty of inherent hazards; the question is whether these deaths could have been prevented, and how. One's first assumption is that they could have been, and that this did not have to happen this way, but we'll see what the investigation reveals. One detail that the Houston Chronicle story notes is the lack of protective equipment (two of the employees killed appear to have come into the area without masks after responding to the first employee's call for help). I would have to think that the only thing that would keep a person alive in such circumstances would be a full-face mask with its own oxygen supply. I don't know if that's what the last responder, who did have a mask, was carrying. That sort of equipment takes more time to put on than any sort of filter mask, but no filtration system alone would save someone from a room filling with methyl mercaptan, either.
So in memory of these four, here's something that all of us who work in the lab can do today. Take a look around you. Remind yourself of where the fire extinguishers are (and there should be more than one kind). Think of how you'd get to the safety shower if you had to use it. And pick the door you'll use if a situation gets beyond that. It's far easier to go over such details when things are quiet, and if you do that every so often you'll have a much better chance of remembering where to go when you really need to.
And whenever you're setting up an experiment that involves any noticeable hazard (pyrophoric reagent, toxic liquid or gas, potential exotherm), think for a moment about what might be most likely to go wrong, and also what the worst thing that could happen might be, and what you'd do about them. Is it dropping that bottle of phosgene solution on the floor? A fire started by your hydrogenation catalyst or your sodium hydride? An exotherm that sends your reaction pouring out over the hot plate or heating mantle? Picturing these things beforehand is never wasted time, because (as everyone with experience in the lab knows) such things do happen, and not on anyone's schedule. Those four DuPont workers were getting ready to go home for the day when suddenly everything went wrong: in their memory, keep an eye out for what might go wrong in your own fume hood.
+ TrackBacks (0) | Category: Safety Warnings
November 14, 2014
The New York Times Magazine has a piece on the current problems with drug discovery. I was interviewed by the author, and I get quoted at one point.
Nothing in the article will amaze any regular reader here, but it's probably the first time many of the magazine's readers are hearing about many of these issues, so I hope it'll be useful in that regard. Those readers might end up tying the concepts of genomics and target-based drug discovery together a bit too tightly, but there's a lot of catching up to do in a small space.
+ TrackBacks (0) | Category: Press Coverage
So how's the new chemistry building at Princeton working out? I last asked for comments four years ago, but I'm prompted to do so again by this post by Luysii, who has been hearing that the building isn't necessarily working out as planned. Can anyone comment? To a first approximation, a lot of buildings don't work out quite as planned, but I'm always interested in hearing about how the glassy high-interaction high-innovation designs are performing in the real world.
+ TrackBacks (0) | Category: Chemical News
I've been enjoying this recent paper in JACS, but then I always like to see intersections of molecular biology with organic synthesis. The authors, from CUNY and the University of Strathclyde, are using a phage display library to see if they can come up with displayed peptide combinations that are catalytic. Specifically, they're trying to form an amide bond from a primary amine and a methyl ester, which at RT in water is going to be a pretty slow process. (But as numerous enzymes show, it's certainly capable of being accelerated).
The phages display five copies of a 12-mer peptide sequence, and they have about 3 billion different sequences in the library. That alone is why I'm always interested in phage display, DNA-encoded libraries, and the like: the sheer number of combinations that can be worked with. There aren't three billion reported organic compounds in the history of the human race - maybe one-twentieth that number. But there are three billion in that Eppendorf vial over there. The problem is to tell the few that you want from the billions that you don't, and that's where the molecular biology really kicks in.
In this case, though, the authors used an ingenious trick to find the phages that would catalyze their reaction. The reacting partners were Fmoc-threonine and leucine methyl ester, and the product peptide is so insoluble that it tends to aggregate around the business end of the phage that's producing it. Careful centrifugation of the mixture, then, will give you a heavier band of phage(s) with more apparent weight, and treatment of those with subtilisin cleaves all those methyl esters and allows the stuck peptides to be washed off. Those phages are then taken back into E. coli and amplified for another round, and after a couple of cycles, they sequenced samples of the phages that had been selected.
To control for possible things happening to that methyl ester along the way, they did the same sort of experiment with leucineamide, this time using acetonitrile/water to wash off the formed peptides from the phages, rather than hydrolyzing the methyl esters. Comparing the sequences from all these selections shows. . .well, a lot of variation. There are no obvious similarities, either among the phages identified within each experiment or between the ester and amide selections. But the selected phages do tend to catalyze amide formation; that much seems clear. One difficulty is that the amide formation appears self-limiting, since the aggregating product gums up the region doing the catalysis (the differences in aggregation in that area between active phages and inactive ones can be seen in electron microscopy images).
So you can apparently find catalytically active proteins, even just in repeated dodecapeptide displays. And there are apparently a number of different ways for them to be active (thus the sequence variability). This work reminds me of a phage-display paper I blogged about last year, where the authors were searching for molecular recognition sequences. (Note, though, that as pointed out in the comments, the binding data in that paper are not as compelling as they should be).
So we have some evidence that phage-display peptides could have applications in organic chemistry, but there's nothing conventionally useful yet. The techniques being used are still probably not exerting enough selection pressure. In the current paper, the selected phages definitely stand out from random background, but they're not exactly artificial ribosomes, either. Maybe it's the fairly simple nature of the phage display, or the product inhibition. I can't help but think that there are useful and interesting things out there in those crazy billions of phage-derived peptides, although it's worth remembering that the three billion peptides in this experiment are still only about one-millionth of the possible number of 12-mer sequences. We still need better ways to produce, select, and evaluate them.
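The "one-millionth" figure above is easy to sanity-check: a 12-mer built from the 20 standard amino acids has 20^12 possible sequences, and the library holds about three billion of them. A quick back-of-the-envelope sketch (assuming exactly 20 residues per position, which ignores any chemical modifications):

```python
# Rough check of the sequence-space numbers quoted in the post.
possible_12mers = 20 ** 12           # ~4.1e15 possible dodecapeptides
library_size = 3_000_000_000         # ~3 billion sequences in the phage library

fraction = library_size / possible_12mers
print(f"{possible_12mers:.2e} possible 12-mers")
print(f"library covers about {fraction:.1e} of sequence space")
```

That fraction comes out around 7 x 10^-7, so "about one-millionth" is right, and it underlines how sparsely even a billions-strong library samples peptide space.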
Update: Paul D. in the comments put me onto a company I hadn't been aware of, Siluria. They've been using phage display as a screening platform to produce new inorganic catalysts, a sort of combinatorial biomineralization process that seems to be yielding results. Here's an article on them, and here's a note on the progress in their methane-to-ethylene process.
+ TrackBacks (0) | Category: Chemical Biology
November 13, 2014
You know, this is something that I hadn't thought of. Those papers with the egregious mistakes in them, the ones that we all enjoy making fun of? The traffic that comes in to them skews the journal metrics. I wonder how many journals have some of their "Most viewed/shared articles" lists dominated by the ones that everyone comes to laugh at or roll their eyes about? (That link has a good one that I hadn't come across - see the first one on the list).
+ TrackBacks (0) | Category: The Scientific Literature
If you follow cardiovascular therapies, you'll no doubt have seen that a bit of information has come out on Merck's long, long, long-running IMPROVE-IT trial. That is the massive outcomes study for Vytorin, the combination of ezetimibe (the Schering-Plough cholesterol absorption inhibitor) and simvastatin (Merck's early statin drug). This combo ran into serious controversy back in 2008, when some clinical studies suggested that it had far less benefit than it should in some patients. The way that clinical data got released was not very glorious, either, and brought suspicions that the companies had tried to bury it. The only way to be sure was to run a big long-term study on cardiovascular outcomes. Such a trial was already underway even back in 2008, and here we are in 2014 without the numbers.
Over the years there have been various doubts and fears about what was going on. Early last year, there were reports that Merck was having to work over the data again, and that doesn't inspire confidence. But the Wall Street Journal noticed the other day that a Merck SEC filing mentioned that it had determined that the trial's results would not result in any accounting changes to the value the company had assigned to Vytorin.
Ed Silverman wonders at Pharmalot if it really matters at this point. The results could be very scientifically interesting, because there's a lot that we're not grasping about lipoproteins and their relation to disease. But both the ezetimibe combo (Vytorin) and ezetimibe itself (Zetia) are facing generic competition in perhaps 18 months (if the generic companies stay interested) - this is all coming too late in the life cycle to affect anything for Merck. It looks like we'll see the actual numbers later this month.
Update: Matthew Herper says that this could have a big effect on the use of surrogate markers in clinical trials. But that will happen only if, as he says, the most likely outcome doesn't happen. That outcome? "We Get Mud" - messy, hard-to-interpret data, with no strong signals to act on.
Second update: the data are out. The combination appears to be effective.
+ TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials
There's an old story of a guy who lost three cars by betting on inside straights in poker games. He lost the first two when he drew and didn't fill the hand - and he lost the third one when he did. In other words, you need to be sure that, even if things work the way you want them to, they'll be enough, and that brings us to a company called Oxigene.
They've been working on fosbretabulin for ovarian cancer. That's a phosphate prodrug of the known chemotherapy agent combretastatin A-4, and there's nothing wrong with that - prodrugs can dramatically change compound distribution and efficacy. The company has just reported Phase II results in a tough group of patients to treat, recurrent ovarian cancer, and they have positive data. Progression-free survival was increased in the combo of fosbretabulin and Avastin compared to Avastin alone, and a post-hoc look at a smaller group whose cancer was resistant to platinum-containing agents seemed to show an even bigger effect. So far, so good.
But is it good enough? That's what Adam Feuerstein asked at TheStreet.com. One big problem is that, interestingly, Avastin is not approved for this indication. So Oxigene's trial was not conducted versus the official standard of care. The company chose this, though, because Roche is running trials of its own to try to get approval in this area. And their numbers on Avastin plus standard chemotherapy look somewhat better than Oxigene's combination data, and in a larger trial at that. Put another way, if both the Roche and Oxigene data are solid, then Oxigene may have just proven its combo's inferiority.
As Feuerstein goes on to say, and he's absolutely right, Oxigene now faces some tough decisions in Phase III. Later this month, Roche may well get approval for that Avastin+chemo treatment in these patients, so that now becomes the obvious standard-of-care comparison for Oxigene. And if the Avastin combo doesn't get approved, then Oxigene is really up the creek, because all their clinical data will then have been generated with a drug that the FDA hasn't approved for the indication at all. Feuerstein has some advice for them (see his article), but it'll be a bold step. Update: the approval has come through.
Bold steps may be the only ones Oxigene has left, though. If the Roche trials had worked enough for approval, and Oxigene's own trial data had been superior to that in turn, they'd be in great shape. That didn't happen, and the quicker the company comes to terms with that situation, the better off they'll be.
+ TrackBacks (0) | Category: Cancer | Clinical Trials
November 12, 2014
I'm on my way back from my visit to GSK and Duke. I'd never visited the GSK site in the Research Triangle (although I remember the old Burroughs Wellcome one from the mid-1980s), and it was good to see the place. There are a lot of really good people there, some of whom I've known for a long time, and others that I just met. The forum that Bernard Munos and I spoke at seemed to go over well, and I enjoyed doing it. But I couldn't help but think that one of the things energizing it was a pervasive worry at the place about further rounds of re-organization and cutbacks. As so many of us have had the chance to find out, that's not the way to enhance research productivity. Or much of anything else.
And the visit to Duke was quite surreal. The old Gross Chem building has been totally refurbished, and as I told people there, I don't think a more thorough exorcism could have been performed on the place. It was like walking around in some sort of science-fiction film: parts of the building exterior look exactly as they did thirty years ago, absolutely unchanged, but then you open a door or turn a corner, and there's 2014 and no doubt about it. They ripped out all the interior walls (and everything else), knocked a big skylight through the middle of the roof, and now the place, which no longer has any real chemistry-department role at all, is all full of sleek lighting, nice wood and carpeting, flat screen displays and coffee bars. I did not always have such a great time in the old building, and seeing it so thoroughly obliterated was, well, sort of inspiring.
My time in the present-day chemistry department was fun. I got a chance to talk with several of the younger faculty, gossip with some of the longtime ones, meet a number of grad students, and talk to a good crowd. I hope that one of the students I met took the advice I gave her: she was thinking about downloading the university-required template so that she could start on her dissertation, and I told her to do it that very evening. My general grad-school advice remains the same: the point of being there is to get out of there. Finish your degree! Do it honorably, and without making enemies, but finish your degree and move on to the rest of your life, which is most certainly not grad school.
+ TrackBacks (0) | Category: Graduate School
Here's a good article on animal models in drug discovery, and their many limitations.
“We have moved away from studying human disease in humans,” (Elias) Zerhouni lamented to the NIH’s Scientific Review Management Board meeting. “We all drank the Kool-Aid on that one, me included.”
“The problem is that it hasn’t worked, and it’s time we stopped dancing around the problem,” he continued, suggesting researchers have become too reliant on questionable animal data. “We need to refocus and adapt new methodologies for use in humans to understand disease biology in humans.”
The article notes the controversy over mouse inflammation models, among others. I'd add pain as an area where something is clearly off kilter with the traditional animal models, and no one has ever been happy with xenograft models in cancer. (An entire post on "worst animal models" is here).
Well, just use human cell cultures, you say - or at least you say that if you've never tried it. The problem, of course, is that primary human cell cultures are often hard to keep going, and the things you have to do to keep them going often skew them away from being the sorts of cells you were hoping to study. There's nothing like a real in vivo system, unfortunately. Stem cells hold out a lot of promise for generating human cells of many types, but (as this article explains), the problem is that those cells are fresh, newly minted ones. And those don't always recapitulate the sorts of changes that you see in (for example) aging neurons.
It looks like we're moving towards re-creating real human organ-like tissue in vitro, as much as we can. That's not the least bit easy - there are so many factors that influence cellular physiology, from the obvious ones (constantly changing signals from blood chemistry) to the nonobvious (mechanical forces from nearby muscle contractions). The other way to do it (not discussed in the linked article) is to humanize the animal models as much as possible. One could imagine a rather unnerving "mouse" consisting of mostly (or completely) human tissues out in the periphery, for example.
I've defended animal models many times on this site, but my strongest arguments for them are (and remain) that there have to be some intermediate steps on the way to human trials, and that we don't have anything better than mice et al. If something better comes along - and we do need something better - then out they go. The lack of really predictive models before human trials is the reason that 90% of all trials fail, and having 90% of all our trials fail is gradually squeezing drug research into a very tight corner indeed. Breaking out of it would be a real accomplishment.
+ TrackBacks (0) | Category: Animal Testing | Biological News | Clinical Trials
November 11, 2014
The BBC has an article posted with the title "Pharmaceutical Industry Gets High on Fat Profits", so at least you know where that one's going. And it reads just as you'd expect - if you haven't had enough pharma-bashing recently, that'll provide you with all you need.
It contains a chart of marketing expenses, per company, provided by a firm called Global Data. First, the numbers. In this post, I noted that Pfizer spent $7.9 billion on R&D in 2012, and estimated (based on a more solid number of $0.6 billion on direct-to-consumer ad spending) that they'd spent around $5 billion on marketing. For 2013, Pfizer's R&D expenses were down to $6.5 billion (ugh), but Global Data has their marketing as $11 billion instead. I note from their earnings statement that Pfizer's entire "Selling, Informational, and Administrative" expense number was $14.2 billion, so Global Data has 77% of that as the marketing budget. That statement doesn't jibe with, say, Pharmaceutical Executive's annual "Industry Audit", which claims that marketing expenses are a "relatively low" portion of those sorts of overhead figures. No doubt the Global Data people have their own ways of calculating things, and I'd be glad to get more details.
But both R&D expenses and SG&A expenses are high in the entire health care sector. If you look at the ratio of total overhead (those two together) to sales, Pfizer last year comes out to overhead as about 40% of sales. And that's right at the median for the entire health care sector - that last link shows vividly that the ratios for the health care, financial, and IT sectors are way higher than other industries. (That's also addressed in this post).
But let me go from arguing the numbers to another point, one that I've made many, many times, and that is not addressed in the BBC piece at all. Let's just stick with Pfizer: they had $51.6 billion dollars of revenue last year. Even if they did spend $11 billion on marketing, the reason that they did so was that this marketing was supposed to bring in more than $11 billion dollars of revenue. Marketing is supposed to make money. If Pfizer had done no marketing whatsoever, they would, presumably, have brought in substantially less revenue.
And what would that have done to the R&D budget? The R&D/sales ratio is called "R&D intensity". Pfizer's ratio last year was 12.6% of sales. Apple's R&D/sales ratio, on the other hand, is a bit below 3% (which is, to be fair, surprisingly small). Google's is 13.3%, Microsoft's is 13.1%, IBM's is 6.2%, and 3M's is 5.6%, to give you some other big-tech comparisons.
So if Pfizer's marketing department pulled its weight, which it had better have done, then they brought in more than 11 billion dollars of that 51.6 figure. Let's assume that they only brought in 12 billion, and be really conservative about it. Without that, Pfizer's revenues are 39.6 billion. Are they still going to have an R&D budget of 6.5 billion with those sales, for an R&D/sales ratio of 16.4%? They are not. Hardly anyone can sustain that kind of R&D spending. Even if they stuck with the same ratio they have now, that would take Pfizer's R&D spend down to about $5 billion, a cut of roughly 23%, which would be pretty unkind.
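The arithmetic in that paragraph is simple enough to lay out explicitly (figures in billions of dollars; the $12 billion marketing-driven revenue is the post's conservative assumption, not a reported number):

```python
# Sketch of the R&D-intensity arithmetic from the post (all figures $B).
revenue = 51.6                 # Pfizer revenue last year
rd_spend = 6.5                 # Pfizer R&D spend
marketing_revenue = 12.0       # assumed revenue attributable to marketing

current_intensity = rd_spend / revenue                     # ~12.6%
revenue_without = revenue - marketing_revenue              # 39.6
hypothetical_intensity = rd_spend / revenue_without        # ~16.4%
rd_at_same_ratio = current_intensity * revenue_without     # ~$5.0B

print(f"current R&D intensity:      {current_intensity:.1%}")
print(f"intensity without marketing: {hypothetical_intensity:.1%}")
print(f"R&D spend at current ratio: ${rd_at_same_ratio:.1f}B")
```

Holding the 12.6% ratio constant on the smaller revenue base takes R&D from $6.5 billion down to about $5.0 billion.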
My point is that you can't just point at marketing expenses and start crying foul unless you understand what marketing expenses are, and what they're for. This point, though, seems largely ungraspable to the wider journalistic world.
Update: The folks at Global Data have written me, noting that (1) I had their name wrong (corrected), and that (2) they feel that the BBC piece does not represent their numbers correctly. Their numbers are "sales and marketing spend", not "marketing spend", and they've asked the BBC to correct the article. I'll be revisiting this topic shortly.
+ TrackBacks (0) | Category: Business and Markets
You want to be careful when you add comments to manuscripts, you know. Sometimes your patent application will publish with all the legal back-and-forthing still in it. (There was an even more egregious example of this in an electronics patent application about ten years ago, but I'm having trouble putting my hands on it - someone at the law department had comments in the claims like "Not sure if we can claim this!")
Journal manuscripts can publish with this sort of thing, too. Don't assume that someone will catch, say, something like this.
The "show comments" and "track changes" functions haven't ruined quite as many careers as the "reply all" button, but they have their moments. One example of the latter I remember from a former company was one of those farewell e-mails - you know, it's been great working with all of you over the past few years, etc. Someone sent one of those out to a gigantic list of people at the company, including many of the higher-ups, and a poor soul did a Reply To All saying something like "Take me with you! I wish I were getting out of this place, too!" Whoops. I often wondered if they got their chance sooner than they expected, after that. . .
+ TrackBacks (0) | Category: The Scientific Literature
November 10, 2014
I'm speaking at GSK today, in a forum along with Bernard Munos, which should be fun (I hope they think the same way when the day is finished!) And tomorrow I'll be giving a talk at the Duke Chemistry Department, which should be a very weird feeling indeed, since I haven't set foot there for many years now. They've moved out of the not-so-beloved "Gross Chem" building, but I do hope to find out what the remodeling did about the chromium stain I left on the ceiling there. (I know from eyewitnesses that it persisted for some years).
+ TrackBacks (0) | Category: Blog Housekeeping
I enjoyed this article in Science about looking for hidden types of life. There are, of course, plenty of microorganisms that can't be cultured (this has been known for a long time). And there are plenty of odd DNA sequences that are pulled out of environmental samples, corresponding to undescribed bacteria and archaea.
But what about things that we're not even seeing through those techniques?
Undiscovered life, if it exists, is either absent at the locations of existing environmental surveys or is missed by current approaches. There are reasons to believe that current approaches may indeed miss taxa, particularly if they are very different from those that have so far been characterized. The “universal” primers used to detect 16S rRNA genes from bacteria and archaea in environmental samples can miss major lineages because of primer mismatches (5). Similarly, the selection of specific single cells from environmental samples for genome sequencing has been based on rRNA gene identity, thus also relying on these universal primers. Organisms whose 16S rRNA genes are not recognized by the primers would not be detected using this approach. Past explorations of available metagenomic data sets have focused on the discovery of matches to the known genes and genomes—an analysis that is naturally biased against uncovering completely novel life. Finally, although we may soon have petabases of metagenomic sequence data, samples have been collected from only a minute fraction of Earth's countless different environments.
Recognizing these limitations, it is reasonable to speculate that undiscovered and highly divergent branches of life may exist, possibly represented by domains whose marker genes differ extensively from those of the bacterial and archaeal branches on the tree of life.
The authors speculate that even the hypothetical "RNA world" organisms from the beginning of living systems might still be found in remote, protected environments (deep rocks, etc.) It's just that our current methods of detection are likely to miss them, even if they're present. I look forward to seeing where this goes, and what implications this work may have for chemical biology and synthetic biology. I also like the way that these studies may prove useful in identifying and confirming extraterrestrial organisms. The day when we have some of those to argue about is, I think, closer than many might expect.
+ TrackBacks (0) | Category: Life As We (Don't) Know It
You know, it's really hard to explain just how ridiculous the bottom end of the scientific publishing world is. I've mentioned formerly reputable journals that now want you to wire money to a bank account in the Turks and Caicos Islands and long lists of people who will "review" and "publish" outright gibberish as long as the checks clear. Note that the money is the only real thing in that transaction, but note also that some reputable publishers have fallen for random nonsense under the traditional publishing model as well. And there are people who will add your name to a paper for a fee, or even whip up some reasonable-looking data and write the whole thing up, for a somewhat larger fee. Don't have a journal to send it to? They'll fake one up for you. It's just an endless garbage heap.
Some recent posts over at ScholarlyOA make this amusingly clear. Here's a letter (PDF) from one of these so-called publishers inviting submissions to the "American Based Researche Journal" (very much sic), and you know you're in for a good time when it starts off "Dear Dear Author". It's signed by "Dr. Merry Jeans, New York, USA", but the content and grammar of the letter make it appear that Dr. Merry has been the victim of a recent severe concussion. Or several.
And how about the "Integrated Journal of British"? Integrated Journal of British what, you say, but that's because you're narrow-minded. This, folks, is the journal of British, full stop. As ScholarlyOA discovered, their spiffy logo appears to have been lifted from a home-improvement contractor in Wisconsin, and I am not making this up. This fine publication makes a big deal out of their impact factor, 3.3275 (note the significant figures on that one). How, you wonder, does a journal like this have an impact factor like that from Thomson-Reuters-ISI? Narrow-mindedness again, friend: they have something even better, an impressive-looking certificate from the helpful people at "Universal Impact Factor".
Who they? Good question. They appear to be a fake-impact-factor shop, there to slap numbers on laughable fake journals, doubtless for a fee. (My wife is fond of quoting an Iranian proverb that translates as "A thief robs a thief, and God smiles"). I may be wronging them in that assumption, though. Their page for submitting a new journal to the database makes no mention of any fees per se, and after all, it does say that, and I quote, "Journals those who submitted fake or faulty data, will not consider for Evaluation".
If you put any journal you've ever heard of into the UIFactor database, you will find nothing. They're not interested in rating journals you've heard of; it's not their market. But if you look through their coverage list, you will find treasure after treasure. There's the "World Journal of Pharmaceutical Research", whose home country is listed as "Bulagria", which might as well be correct. "Corea" makes an appearance, and there are three entries for the "European Journal of Experimental Biology", all with different impact factors, but all listed as coming from Iraq. And so on - there's all sorts of exotica on the list, but what they all have in common is that if you click on any of the journal names, the detailed information page for each of them is infested with HTML spam for "online abortion pills" where the journal URL should be. Every single time. Someone is clearly paying close attention here.
So that leaves us with a journal-rating website, itself apparently a scam, which rates piles of obscure journals, many of them scams of their own. And it in turn has been infected by still more scamsters. It's a long way down, that's for sure, and the bottom is nowhere to be seen.
+ TrackBacks (0) | Category: The Dark Side | The Scientific Literature
November 7, 2014
See Arr Oh brings news that the Nicolaou group has published another piece of maitotoxin. This has been going on for some years now, and looks to go on for a few more.
To remind everyone, that's the structure of maitotoxin. Readers can decide for themselves which parts of the molecule they find most useful, and which ones they most look forward to reading about the synthesis of. My own opinion was set some time ago, quite a bit of it after I finished up my own total-synthesis-based PhD work.
+ TrackBacks (0) | Category: Chemical News
Readers will remember the "nanorod paper" controversy from last year - two papers were published from the Pease group at Utah on the fabrication of a type of nanostructure, but the images therein were pretty clearly fabricated themselves, especially the paper in Nano Letters. That one looked like it had been done by a fourth-grader using Microsoft Paint. Here's a post that brought these to wider attention. One of the papers was withdrawn fairly quickly, but not before the editors of ACS Nano had told everyone to run along and not to be so nasty. My take on that is the same it's always been: if you don't want your work commented on and criticized by all comers, on whatever grounds and under whatever names they like, then don't publish it. Once you do, though, those invitations have been sent out, and it was ever so for authors of all kinds. Sad? Maybe. True? Oh, yeah.
It's taken a while, but the ACS Nano paper has now been withdrawn as well. Retraction Watch now brings word of the sequel to all this, the investigation at Utah.
Ultimately, the school pinned the blame on graduate student Rajasekhar Anumolu and exonerated principal investigator Leonard F. Pease. Botkin also told us the investigation found that no federal money had been used in the experiments, despite notes on the two papers indicating otherwise. . .
[From Jeffrey Botkin, research integrity officer at the University of Utah]:
"The investigation determined that all of the images in the Nano Letters paper were fabricated and, therefore, none of the data were valid.
The supplemental figure S2c in the ACS Nano paper was the only manipulated image identified in that paper. The manipulation consisted of a cut and paste "patch" over two relatively small areas of the image. These manipulations represent data falsification. For the ACS Nano publication, the Investigation Committee could not determine a rationale for the image manipulation as the "patches" did not appear to cover significant data elements in the image.
Mr. Anumolu, a graduate student in Chemical Engineering, was, by all accounts, primarily responsible for data acquisition and manuscript preparation for the Nano Letters publication and for Figure S2c in the ACS Nano publication. The Investigation Committee determined that Mr. Anumolu was responsible for the image manipulations and was guilty of research misconduct. The other authors were found not guilty of research misconduct. Mr. Anumolu was not awarded his degree and is no longer affiliated with the University of Utah. Both papers have been retracted."
I can well believe that the grad student was at fault here, but it's worth remembering that Mr. Anumolu worked for Professor Pease, who is supposed to be looking over the manuscripts that go out with his name on them. (Mind you, the referees at the two journals are supposed to be looking at them, too, so there's that). But Prof. Pease also at the time apparently threatened legal action against Chemistry Blog for calling attention to the story. (The legal threat is no longer mentioned in that post, but was noted by Chembark at the time). He also asked that nothing be written until the University of Utah had finished their investigation, and we now see how long it took the wheels to turn in that case.
No, while there's room for abuse in the post-publication-commentary world, there's been a lot of abuse of the scientific publication process already - from the authors. And sometimes from their institutions, and from some of the publishers as well. Too many of the complaints from those parties about these situations seem to amount to "Please refrain from making unfavorable comments on my paper where people can see them", or perhaps "Please refrain from making unfavorable comments on this paper that we are charging the scientific community to read". As far as I know, every paper that's created a big post-publication fuss about its validity has been withdrawn, with a few cases of substantial revision and survival.
November 6, 2014
I'm glad that this blog post exists. It's a lengthy, detailed rebuttal to a sheet of advice that the "Food Babe" recommends for her followers. For example, you are apparently supposed to start off the day with some warm lemon juice in water with some cayenne pepper in it. Why would I do that to myself, you ask - to ensure that nothing worse happens to me the rest of the day? No, you fools, you do it to "eliminate environmental and lifestyle toxins" from your system by waking up your liver. And so on, and very much so on.
Life is just too short to swat every bizarre misconception caroming around inside Vani Hari's skull. It's pandemonium in there, because the clerks at the front desk are clearly not very selective. But when someone does take the time, I'll gladly point it out. There's plenty for everyone.
What happens to drug companies after they get their first drug approved? A new paper coming out in Drug Discovery Today has an answer to that question: most of them never do it again.
Author Michael Kinch (Washington U.) has gone back through the records since the 1930s, and some interesting trends emerge. Up until the 1970s, the likelihood was that a company would go on to get a second approval, but then the odds began to go down. The reason is that in recent decades, nearly two-thirds of the companies that get their first approval go on to be acquired by someone else, and thus disappear from the list. It takes, on average, six to eight years for a second approval to come along after a company's first, and most companies aren't around long enough for that to happen any more.
Where will new drugs come from? Start-up companies are often dismantled following their acquisition, particularly if they are purchased by the subset of companies that market products but does not directly perform new drug discovery. This generally occurs within a year or two following acquisitions. From an optimistic standpoint, such turbidity recycles experienced personnel and allows them to join other organizations and begin afresh. Realistically, the dismantling of successful teams does not seem an efficient use of time or resources when viewed from a business or global public health perspective.
True, but I'm of two minds about that. I don't think that what happens to a lot of small companies (being shucked like ears of corn) is such a great thing, either. But it's worth remembering that for many of them, this was their business plan - to be acquired. The folks who put up the venture capital that went into these companies were hoping for just such an outcome, and the people working there were not, for the most part, taken by surprise.
Another thing to keep in mind is that a lot of economic activity looks wasteful from a calm, utilitarian perspective. That's Schumpeter's "creative destruction", which (never forget) really does involve destruction. The money obtained from these takeovers goes on to fund the next generation of small companies. It's a way of getting a return on capital inside a realistic amount of time. Imagine the thought experiment where we make it illegal for a small drug company to be acquired after its first drug approval: what happens then? My guess is that many fewer small companies get started, because a big part of the possible returns to the investors has now been closed off. (And perhaps we'd eventually see an article in Drug Discovery Today about the percentage of small companies who were only able to get their one drug on the market and could never follow up on it).
It would be interesting to know if this system has evolved into its current form as drug discovery has gotten more difficult over the years - which, I'd say, is the main driving force behind the rising likelihood of being taken over. Are there perhaps more small companies being formed because this is such a likely outcome?
So yeah, the current system is wasteful and disruptive, but what are we comparing it to? I still can't come up with a better alternative - it's like Churchill's crack about democracy.