Corante

About this Author
Derek Lowe
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany as a Humboldt Fellow during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

May 14, 2013

A Specific Crowdfunding Example

Posted by Derek

I mentioned Microryza in that last post. Here's Prof. Michael Pirrung, at UC Riverside, with an appeal there to fund the resynthesis of a compound for NCI testing against renal cell carcinoma. It will pay for a month of an experienced post-doc's labor to prepare an interesting natural-product-derived proteasome inhibitor that the NCI would like to take to its next stage of evaluation. Have a look - you might be looking at the future of academic research funding, or at least a real part of it.

Comments (8) + TrackBacks (0) | Category: Cancer | General Scientific News

Crowdfunding Research

Posted by Derek

Crowdfunding academic research might be changing from a near-stunt into a widely used method of filling gaps in a research group's money supply. At least, that's the impression this article at Nature Jobs gives:

The practice has exploded in recent years, especially as success rates for research-grant applications have fallen in many places. Although crowd-funding campaigns are no replacement for grants — they usually provide much smaller amounts of money, and basic research tends to be less popular with public donors than applied sciences or arts projects — they can be effective, especially if the appeals are poignant or personal, involving research into subjects such as disease treatments.

The article details several venues that have been used for this sort of fund-raising, including Indiegogo, Kickstarter, RocketHub, FundaGeek, and SciFund Challenge. I'd add Microryza to that list. And there's a lot of good advice for people thinking about trying it themselves, including how much money to try for (at least at first), the timelines one can expect, and how to get your message out to potential donors.

Overall, I'm in favor of this sort of thing, but there are some potential problems. This gives the general public a way to feel more connected to scientific research, and to understand more about what it's actually like, both of which are goals I feel a close connection to. But (as that quote above demonstrates), some kinds of research are going to be an easier sell than others. I worry about a slow (or maybe not so slow) race to the bottom, with lab heads overpromising what their research can deliver, exaggerating its importance to immediate human concerns, and overselling whatever results come out.

These problems have, of course, been noted. Ethan Perlstein, formerly of Princeton, used RocketHub for his crowdfunding experiment that I wrote about here. And he's written at Microryza with advice about how to get the word out to potential donors, but that very advice has prompted a worried response over at SciFund Challenge, where Jai Ranganathan had this to say:

His bottom line? The secret is to hustle, hustle, hustle during a crowdfunding campaign to get the word out and to get media attention. With all respect to Ethan, if all researchers running campaigns follow his advice, then that’s the end for science crowdfunding. And that would be a tragedy because science crowdfunding has the potential to solve one of the key problems of our time: the giant gap between science and society.

Up to a point, these two are talking about different things. Perlstein's advice is focused on how to run a successful crowdfunding campaign (based on his own experience, which is one of the better guides we have so far), while Ranganathan is looking at crowdfunding as part of something larger. Where they intersect, as he says, is that it's possible that we'll end up with a tragedy of the commons, where the strategy that's optimal for each individual's case turns out to be (very) suboptimal for everyone taken together. He's at pains to mention that Ethan Perlstein has himself done a great job with outreach to the public, but worries about those to follow:

Because, by only focusing on the mechanics of the campaign itself (and not talking about all of the necessary outreach), there lurks a danger that could sink science crowdfunding. Positive connections to an audience are important for crowdfunding success in any field, but they are especially important for scientists, since all we have to offer (basically) is a personal connection to the science. If scientists omit the outreach and just contact audiences when they want money, that will go a long way to poisoning the connections between science and the public. Science crowdfunding has barely gotten started and already I hear continuous complaints about audience exasperation with the nonstop fundraising appeals. The reason for this audience fatigue is that few scientists have done the necessary building of connections with an audience before they started banging the drum for cash. Imagine how poisonous the atmosphere will become if many more outreach-free scientists aggressively cold call (or cold e-mail or cold tweet) the universe about their fundraising pleas.

Now, when it comes to overpromising and overselling, a cynical observer might say that I've just described the current granting system. (And if we want even more of that sort of thing, all we have to do is pass a scheme like this one). But the general public will probably be a bit easier to fool than a review committee, at least, if you can find the right segment of the general public. Someone will probably buy your pitch, eventually, if you can throw away your pride long enough to keep on digging for them.

That same cynical observer might say that I've just described the way that we set up donations to charities, and indeed Ranganathan makes an analogy to NPR's fundraising appeals. That's the high end. The low end of the charitable-donation game is about as low as you can go - just run a search for the words "fake" and "charity" through Google News any day, any time, and you can find examples that will make you ashamed that you have the same number of chromosomes as the people you're reading about. (You probably do). Avoiding this state really is important, and I'm glad that people are raising the issue already.

What if, though, someone were to set up a science crowdfunding appeal with hopes of generating something that could actually turn a profit, portions of which would be turned over to the people who put up the original money? We have now arrived at the biopharma startup business, via a different road than usual. Angel investors, venture capital groups, shareholders in an IPO - all of these people are doing exactly that, at various levels of knowledge and participation. The pitch is not so much "Give us money for the good of science", but "Give us money, because here's our plan to make you even more". You will note that the scale of funds raised by the latter technique makes those raised by the former look like a roundoff error, which fits in pretty well with what I take as normal human motivations.

But academic science projects have no such pitch to make. They'll have to appeal to altruism, to curiosity, to mood affiliation, and other nonpecuniary motivations. Done well, that can be a very good thing, and done poorly, it could be a disaster.

Comments (16) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | General Scientific News

May 13, 2013

Astellas Closing the OSI and Perseid Sites?

Posted by Derek

I've heard this morning that Astellas is closing the OSI site in Farmingdale, NY, and the Perseid Therapeutics site in Redwood City, CA. More details as I hear them (and check the comments section; people with more direct knowledge may be showing up in there).

Comments (9) + TrackBacks (0) | Category: Business and Markets

Pyrrolidines, Not the Usual Way

Posted by Derek

I wanted to mention a new reaction that's come out in a paper in Science. It's from the Betley lab at Harvard, and it's a new way to make densely substituted saturated nitrogen heterocycles (pyrrolidines, in particular).
[Scheme: iron-catalyzed conversion of an alkyl azide to a Boc-protected pyrrolidine]
You start from a four-carbon chain with an azide at one end, and you end up with a Boc-protected pyrrolidine, by direct activation/substitution of the C-H bond at the other end of the chain. Longer chains give you mixtures of different ring sizes (4, 5, and 6), depending on where the catalyst feels like inserting the new bond. I'd like to see how many other functional groups this chemistry is compatible with (can you have another tertiary amine in there somewhere, or a hydroxyl?). But we have a huge lack of carbon-hydrogen functionalization reactions in this business, and this is a welcome addition to a rather short list.

There was a paper last year from the Groves group at Princeton on fluorination of aliphatic C-H bonds using a manganese porphyrin complex. These two papers are similar in my mind - they're modeling themselves on the CYP enzymes, using high-valent metals to accomplish things that normally we wouldn't think of being able to do easily. The more of this sort of thing, the better, as far as I'm concerned: new reactions will make us think of entirely new things.

Comments (8) + TrackBacks (0) | Category: Chemical News

Another Big Genome Disparity (With Bonus ENCODE Bashing)

Posted by Derek

I notice that the recent sequencing of the bladderwort plant is being played in the press in an interesting way: as the definitive refutation of the idea that "junk DNA" is functional. That's quite an about-face from the coverage of the ENCODE consortium's take on human DNA, the famous "80% Functional, Death of Junk DNA Idea" headlines. A casual observer, if there are casual observers of this sort of thing, might come away just a bit confused.

Both types of headlines are overblown, but I think that one set is more overblown than the other. The minimalist bladderwort genome (8.2 x 10^7 base pairs) is only about half the size of that of Arabidopsis thaliana, which rose to fame as a model organism in plant molecular biology partly because of its tiny genome. By contrast, humans (who make up so much of my readership) have about 3 x 10^9 base pairs, almost 40 times as many as the bladderwort. (I stole that line from G. K. Chesterton, by the way; it's from the introduction to The Napoleon of Notting Hill.)

But pine trees have eight times as many base pairs as we do, so it's not a plant-versus-animal thing. And as Ed Yong points out in this excellent post on the new work, the Japanese canopy plant comes in at 1.5 x 10^11 base pairs, fifty times the size of the human genome and two thousand times the size of the bladderwort. This is the same problem as the marbled lungfish versus pufferfish one that I wrote about here, and it's not a new problem at all. People have been wondering about genome sizes ever since they were able to estimate the size of genomes, because it became clear very quickly that they varied hugely and according to patterns that often make little sense to us.
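Just to make the arithmetic concrete, the ratios quoted above check out; here's a quick sketch in Python (the genome sizes are the approximate figures from the text, and the dictionary keys are just labels):

```python
# Approximate genome sizes (base pairs) quoted in the text
genomes = {
    "bladderwort": 8.2e7,
    "human": 3e9,
    "Japanese canopy plant": 1.5e11,
}

human_vs_bladderwort = genomes["human"] / genomes["bladderwort"]
canopy_vs_human = genomes["Japanese canopy plant"] / genomes["human"]
canopy_vs_bladderwort = genomes["Japanese canopy plant"] / genomes["bladderwort"]

print(f"human / bladderwort: {human_vs_bladderwort:.0f}x")    # ~37, "almost 40 times"
print(f"canopy / human: {canopy_vs_human:.0f}x")              # 50x
print(f"canopy / bladderwort: {canopy_vs_bladderwort:.0f}x")  # ~1800, "two thousand times"
```

The point of the exercise is just how little these ratios track with organismal complexity, which is the whole junk-DNA argument in a nutshell.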

That's why the ENCODE hype met (and continues to meet) with such a savage reception. It did nothing to address this issue, and seemed, in fact, to pretend that it wasn't an issue at all. Function, function, everywhere you look, and if that means that you just have to accept that the Japanese canopy plant needs the most wildly complex functional DNA architecture in the living world, well, isn't Nature just weird that way?

Comments (17) + TrackBacks (0) | Category: Biological News

May 10, 2013

Why Not Share More Bioactivity Data?

Posted by Derek

The ChEMBL database of compounds has been including bioactivity data for some time, and the next version of it is slated to have even more. There are a lot of numbers out in the open literature that can be collected, and a lot of numbers inside academic labs. But if you want to tap the deepest sources of small-molecule biological activity data, you have to look to the drug industry. We generate vast heaps of such data; it's the driveshaft of the whole discovery effort.

But sharing such data is a very sticky issue. No one's going to talk about their active projects, of course, but companies are reluctant to open the books even to long-dead efforts. The upside is seen as small, and the downside (though unlikely) is seen as potentially large. Here's a post from the ChEMBL blog that talks about the problem:

. . .So, what would your answer be if someone asked you if you consider it to be a good idea if they would deposit some of their unpublished bioactivity data in ChEMBL? My guess is that you would be all in favour of this idea. 'Go for it', you might even say. On the other hand, if the same person would ask you what you think of the idea to deposit some of ‘your bioactivity data’ in ChEMBL the situation might be completely different.

First and foremost you might respond that there is no such bioactivity data that you could share. Well let’s see about that later. What other barriers are there? If we cut to the chase then there is one consideration that (at least in my experience) comes up regularly and this is the question: 'What’s in it for me?' Did you ask yourself the same question? If you did and you were thinking about ‘instant gratification’ I haven’t got a lot to offer. Sorry, to disappoint you. However, since when is science about ‘instant gratification’? If we would all start to share the bioactivity data that we can share (and yes, there is data that we can share but don’t) instead of keeping it locked up in our databases or spreadsheets this would make a huge difference to all of us. So far the main and almost exclusive way of sharing bioactivity data is through publications but this is (at least in my view) far too limited. In order to start to change this (at least a little bit) the concept of ChEMBL supplementary bioactivity data has been introduced (as part of the efforts of the Open PHACTS project, http://www.openphacts.org).

There's more on this in an article in Future Medicinal Chemistry. Basically, if an assay has been described in an open scientific publication, the data generated through it qualifies for deposit in ChEMBL. No one's asking for companies to throw open their books, but even when details of a finished (or abandoned) project are published, there are often many more data points generated than ever get included in the manuscript. Why not give them a home?

I get the impression, though, that GSK is the only organization so far that's been willing to give this a try. So I wanted to give it some publicity as well, since there are surely many people who aren't aware of the effort at all, and might be willing to help out. I don't expect that data sharing on this level is going to lead to any immediate breakthroughs, of course, but even though assay numbers like this have a small chance of helping someone, they have a zero chance of helping if they're stuck in the digital equivalent of someone's desk drawer.

What can be shared, should be. And there's surely a lot more that falls into that category than we're used to thinking.

Comments (17) + TrackBacks (0) | Category: Drug Assays | The Scientific Literature

May 9, 2013

An Anticoagulant Antidote

Posted by Derek

Here's a drug-discovery problem that you don't often have to think about. The anticoagulant field is a huge one, with Plavix, warfarin, and plenty of others jostling for a share of a huge market (both for patients to take themselves, and for hospital use). The Factor Xa inhibitors are a recent entry into this area, with Bayer's Xarelto (rivaroxaban) as the key example so far.

But there's a problem with any Xa inhibitor: there's no antidote for them. Blood clotting therapies have a narrow window to work in - anything effective enough to be beneficial will be effective enough to be trouble under other circumstances. Anticoagulants need a corresponding way to cancel out their effects, in case of overdose or other trouble. (Vitamin K is the answer for warfarin). We don't often have to consider this issue, but it's a big one in this case.

Portola Pharmaceuticals has developed a Factor Xa mimic that binds the inhibitors, and thus titrates their effects. They have their own Xa inhibitor coming along (betrixaban), but if this protein makes it through, they'll have done the whole field a favor as well as themselves.

Comments (8) + TrackBacks (0) | Category: Cardiovascular Disease

Merck's Liptruzet: A Cause For Shame?

Posted by Derek

Vytorin's been discussed several times around here. The combination of Zetia (ezetimibe), the cholesterol absorption inhibitor discovered at Schering-Plough, with Merck's simvastatin looked as if it should be a very effective cholesterol-lowering medication, but the real-world data have been consistently puzzling. There's a big trial going on that people are hoping will clarify things, but so far it's had the opposite effect. It's no exaggeration to say that the entire absorption inhibitor/statin combination idea is in doubt, and we may well learn a lot about human lipidology as we figure out what's happened. It will have been an expensive lesson.

So in the midst of all this, what does Merck do but trot out another ezetimibe/statin combination? Liptruzet has atorvastatin (generic Lipitor) in it, instead of simvastatin (generic Zocor), and what that is supposed to accomplish is a mystery to me. It's a mystery to Josh Bloom over at the American Council for Science and Health, too, and he's out with an op-ed saying that Merck should be ashamed of itself.

I can't see how he's wrong. What I'm seeing is an attempt by Merck to position itself should the ongoing Vytorin trial actually exonerate the combination idea. Vytorin, you see, doesn't have all that much patent lifetime left; its problems since 2008 have eaten the most profitable years right out of its cycle. So if Vytorin does turn out to work, after all the exciting plot twists, Merck will be there to tell people that they shouldn't take it. No, they should take exciting new Liptruzet instead. It's newer.

If anyone can think of a reason why this doesn't make Merck look like shady marketeers, I'd like to hear it. And (as Bloom points out) it doesn't make the FDA look all that great, either, since I'm sure that Liptruzet will count towards the end-of-the-year press release about all the innovative new drugs that the agency has approved. Not this time.

Update: John LaMattina's concerned about that last part, too.

Comments (37) + TrackBacks (0) | Category: Cardiovascular Disease | Why Everyone Loves Us

Your Brain Shifts Gears

Posted by Derek

Want to be weirded out? Study the central nervous system. I started off my med-chem career in CNS drug discovery, and it's still my standard for impenetrability. There's a new paper in Science, though, that just makes you roll your eyes and look up at the ceiling.

The variety of neurotransmitters is well appreciated - you have all these different and overlapping signaling systems using acetylcholine, dopamine, serotonin, and a host of lesser-known molecules, including such oddities as hydrogen sulfide and even carbon monoxide. And on the receiving end, the various subtypes of receptors are well studied, and those give a tremendous boost to the variety of signaling from a single neurotransmitter type. Any given neuron can have several of these going on at the same time - when you consider how many different axons can be sprawled out from a single cell, there's a lot of room for variety.

That, you might think, is a pretty fair amount of complexity. But note also that the density and population of these receptors can change according to environmental stimuli. That's why you get headaches if you don't have your accustomed coffee in the morning (you've made more adenosine A2 receptors, and you haven't put any fresh caffeine ligand into them). Then there are receptor dimers (homo- and hetero-) that act differently than the single varieties, constitutively active receptors that are always on, until a ligand turns them off (the opposite of the classic signaling mechanism), and so on. Now, surely, we're up to a suitable level of complex function.

Har har, says biology. This latest paper shows, by a series of experiments in rats, that a given population of neurons can completely switch the receptor system it uses in response to environmental cues:

Our results demonstrate transmitter switching between dopamine and somatostatin in neurons in the adult rat brain, induced by exposure to short- and long-day photoperiods that mimic seasonal changes at high latitudes. The shifts in SST/dopamine expression are regulated at the transcriptional level, are matched by parallel changes in postsynaptic D2R/SST2/4R expression, and have pronounced effects on behavior. SST-IR/TH-IR local interneurons synapse on CRF-releasing cells, providing a mechanism by which the brain of nocturnal rats generates a stress response to a long-day photoperiod, contributing to depression and serving as functional integrators at the interface of sensory and neuroendocrine responses.

This remains to be demonstrated in human tissue, but I see absolutely no reason why the same sort of thing shouldn't be happening in our heads as well. There may well be a whole constellation of these neurotransmitter switchovers that can take place in response to various cues, but which neurons can do this, involving which signaling regimes, and in response to what stimuli - those are all open questions. And what the couplings are between the environmental response and all the changes in transcription that need to take place for this to happen, those are going to have to be worked out, too.

There may well be drug targets in there. Actually, there are drug targets everywhere. We just don't know what most of them are yet.

Comments (15) + TrackBacks (0) | Category: The Central Nervous System

May 8, 2013

Total Synthesis in Print

Posted by Derek

Over at the Baran group's "Open Flask" blog, there's a post on the number of total synthesis papers that show up in the Journal of the American Chemical Society. I'm reproducing one of the figures below, the percentage of JACS papers with the phrase "total synthesis" in their title.
[Figure: percentage of JACS papers with "total synthesis" in the title, by year]
You can see that the heights of the early 1980s have never been reached again, and that post-2000 there has been a marked drought. As the post notes, JACS seems to have begun publishing many more papers in total around that time (anyone notice this or know anything about it?), and it appears that they certainly didn't fill the new pages with total synthesis. 2013, though, already looks like an outlier, and it's only May.

My own feelings about total synthesis are a matter of record, and have been for some time, if anyone cares. So I'm not that surprised to see the trend in this chart, if trend it is.

But that said, it would be worth running the same analysis on a few other likely journal titles. Has the absolute number of total synthesis papers gone down? Or have they merely migrated (except for the really exceptional ones) to the lower-impact journals? Do fewer papers put the phrase "Total synthesis of. . ." in their titles as compared to years ago? Those are a few of the confounding variables I can think of, and there are probably more. But I think, overall, that the statement "JACS doesn't publish nearly as much total synthesis as it used to" seems to be absolutely correct. Is this a good thing, a bad thing, or some of each?
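The counting behind that kind of chart is easy to sketch, which is part of why running it across other journals would be straightforward. Here's a minimal, hypothetical version in Python; the function and the mini-corpus of titles are made up for illustration, not taken from the Baran post's actual dataset:

```python
def total_synthesis_fraction(titles, phrase="total synthesis"):
    """Fraction of paper titles containing the phrase (case-insensitive)."""
    phrase = phrase.lower()
    hits = sum(1 for title in titles if phrase in title.lower())
    return hits / len(titles) if titles else 0.0

# Hypothetical mini-corpus, just to show the calculation
titles_1983 = [
    "Total Synthesis of Periplanone B",
    "Kinetics of Hydride Transfer in Solution",
    "Total Synthesis of a Macrolide Antibiotic",
    "NMR Studies of Micelle Formation",
]
print(f"{100 * total_synthesis_fraction(titles_1983):.1f}%")  # 50.0%
```

The confounders mentioned above (title phrasing drifting over the years, papers migrating to other journals) are exactly the sort of thing a phrase match like this can't see, which is why the absolute-count and multi-journal versions of the analysis would be worth doing.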

Comments (31) + TrackBacks (0) | Category: Chemical News | The Scientific Literature

Things I Won't Work With: Dimethylcadmium

Posted by Derek

Cadmium is bad news. Lead and mercury get all the press, but cadmium is just as foul, even if far fewer people encounter it. Never in my career have I had any occasion to use any, and I like it that way. There was an organocadmium reaction in my textbook when I took sophomore organic chemistry, but it was already becoming obsolete, and good riddance, because this is one of those metals that's best avoided for life. It has acute toxic effects, chronic toxic effects, and if there are any effects in between those it probably has them, too.

Fortunately, cadmium is not well absorbed from the gut, and even more fortunately, no one eats it. But breathing it, now that's another matter, and if you're a nonchemist wondering how someone can breathe metallic elements, then read on. One rather direct way is if someone is careless enough to floof fine powders of them around you. That's how cadmium's toxicity was discovered in the first place, from miners dealing with the dust. But that's only the start. There's a bottom of the list for breathable cadmium, too, which is quite a thought. The general rule is, if you're looking for the worst organic derivatives of any metal, you should hop right on down to the methyl compounds. That's where the most choking vapors, the brightest flames, and the most panicked shouts and heartfelt curses are to be found. Methyl organometallics tend to be small, reactive, volatile, and ready to party.

Dimethyl cadmium, then, represents the demon plunked in the middle of the lowest circle as far as this element is concerned. I'll say only one thing in its favor: it's not quite as reactive as dimethyl zinc, its cousin one row up in the periodic table. No one ever has to worry about inhaling dimethyl zinc; since it bursts into ravenous flames as soon as it hits the air, the topic just never comes up. Then again, when organozincs burn, they turn into zinc oxide, which is inert enough to be used in cosmetics. But slathering your nose with cadmium oxide is not recommended.

Even though dimethylcadmium does not instantly turn into a wall of flame, it can still liven the place up. If you just leave the liquid standing around, hoping it'll go away, there are two outcomes. If you have a nice wide spill of it, with a lot of surface area, you fool, it'll probably still ignite on its own, giving off plenty of poisonous cadmium oxide smoke. If for some reason it doesn't do that, you will still regret your decision: the compound will react with oxygen anyway and form a crust of dimethyl cadmium peroxide, a poorly characterized compound (go figure) which is a friction-sensitive explosive. I've no idea how you get out of that tight spot; any attempts are likely to suddenly distribute the rest of the dimethylcadmium as a fine mist. Water is not the answer. One old literature report says that "When thrown into water, (dimethylcadmium) sinks to the bottom in large drops, which decompose in a series of sudden explosive jerks, with crackling sounds", and you could not ask for a clearer picture of the devil finding work for idle hands. Or idle heads.

Even without all this excitement, the liquid has an alarmingly high vapor pressure, and that vapor is alarmingly well absorbed on inhalation. A few micrograms (yep, millionths of a gram) of it per cubic meter of air hits the legal limits, and I'd prefer to be surrounded by far less. It's toxic to the lungs, naturally, but since it gets into the blood stream so well, it's also toxic to the liver, and to the kidneys (basically, the organs that are on the front lines when it's time to excrete the stuff), and to the brain and nervous system. Cadmium compounds in general have also been confirmed as carcinogenic, should you survive the initial exposure.

After all this, if you still feel the urge to experience dimethylcadmium - stay out of my lab - you can make this fine compound quite easily from cadmium chloride, which I've no particular urge to handle, either, and methyllithium or methyl Grignard reagent. Purifying it away from the ethereal solvents after that route, though, looks like extremely tedious work, which allows you the rare experience of being bored silly by something that's trying to kill you. It is safe to assume that the compound will swiftly penetrate latex gloves, just like deadly and hideous dimethylmercury, so you'll want to devote some time to thinking about how you'll handle the fruits of your labor.

I'm saddened to report that the chemical literature contains descriptions of dimethylcadmium's smell. Whoever provided these reports was surely exposed to far more of the vapor than common sense would allow, because common sense would tell you to stay about a half mile upwind at all times. At any rate, its odor is variously described as "foul", "unpleasant", "metallic", "disagreeable", and (wait for it) "characteristic", which is an adjective that shows up often in the literature with regard to smells, and almost always makes a person want to punch whoever thought it was useful. We can assume that dimethylcadmium is not easily confused with beaujolais in the blindfolded sniff test, but not much more. So if you're working with organocadmium derivatives and smell something nasty, but nasty in a new, exciting way that you've never quite smelled before, then you can probably assume the worst.

Now, as opposed to some of the compounds on my list, you can find people who've handled dimethylcadmium, or even prepared it, worse luck, although it is an (expensive) article of commerce. As mentioned above, it used to be in all the textbooks as a reliable way to form methyl ketones from acid chlorides, but there are far less evil reagents that can do that for you now. It's still used (on a research scale) to make exotic photosensitive and semiconducting materials, but even those hardy folk would love to find an alternative. No, this compound appears to have no fan club whatsoever. Start one at your own risk.

Comments (39) + TrackBacks (0) | Category: Things I Won't Work With

May 7, 2013

Another Germ Theory Victory - Back Pain?

Email This Entry

Posted by Derek

The "New Germ Theory" people may have notched up another one: a pair of reports out from a team in Denmark strongly suggest that many cases of chronic low back pain are due to low-grade bacterial infection. They've identified causative agents (Propionibacterium acnes) by isolating them from tissue, and showed impressive success in the clinic by treating back pain patients with a lengthy course of antibiotics. Paul Ewald is surely smiling about this news, although (as mentioned here) he has some ideas about the drug industry that I can't endorse.

So first we find out that stomach ulcers are not due to over-dominant mothers, and now this. What other hard-to-diagnose infections are we missing? Update - such as obesity, maybe?

Comments (21) + TrackBacks (0) | Category: Infectious Diseases

An Update on Deuterium Drugs

Email This Entry

Posted by Derek

In case you're wondering how the deuterated-drugs idea is coming along, the answer seems to be "just fine", at least for Concert Pharmaceuticals. They've announced their third collaboration inside of a year, this time with Celgene.

And they've got their own compound in development, CTP-499, in Phase II for diabetic nephropathy. That's a deutero analog of HDX (1-((S)-5-hydroxyhexyl)-3,7-dimethylxanthine), which is an active metabolite of the known xanthine drug pentoxifylline (which has also been investigated in diabetic kidney disease). You'd assume that deuteration makes this metabolite hang around longer, rather than being excreted, which is just the sort of profile shift that Concert is targeting.
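The arithmetic behind that profile shift is simple first-order kinetics: a deuterium kinetic isotope effect slows the CYP-mediated C-H (now C-D) bond cleavage, which lowers the metabolic rate constant and stretches the half-life, t1/2 = ln(2)/k. Here's a minimal sketch with entirely hypothetical numbers (the rate constants and the isotope effect are made up for illustration, not taken from Concert's data):

```python
import math

def half_life(k_el):
    """First-order elimination half-life: t1/2 = ln(2) / k_el."""
    return math.log(2) / k_el

# Hypothetical one-compartment picture: total elimination is a metabolic
# rate constant (the CYP-mediated C-H cleavage) plus a renal one.
k_metabolic = 0.20   # 1/h, protio compound (made-up value)
k_renal = 0.05       # 1/h, unaffected by deuteration (made-up value)

# A deuterium kinetic isotope effect slows only the metabolic step;
# values in the 2-7 range are typical when C-H cleavage is rate-limiting.
kie = 3.0
k_total_h = k_metabolic + k_renal
k_total_d = k_metabolic / kie + k_renal

print(f"protio t1/2:  {half_life(k_total_h):.1f} h")
print(f"deutero t1/2: {half_life(k_total_d):.1f} h")
```

Note that the benefit is capped by the non-metabolic clearance: if the kidneys are doing most of the work, slowing the CYP step buys you very little, which is one reason the deuteration trick only pays off for the right compounds.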

Long-term, the deuteration idea has now diffused out into the general drug discovery world, and there will be no more easy pickings for it (well, at least not so many, depending on how competently patents are drafted). But if Concert can make a success out of what they have going already, they're already set for a longer term than most startups.

Comments (13) + TrackBacks (0) | Category: Pharmacokinetics

One Case of Plagiarism Down. Two Zillion to Go.

Email This Entry

Posted by Derek

You may remember this case from Chemistry - A European Journal earlier this year, where a paper appeared whose text was largely copy-pasted from a previous JACS paper from another lab. This one has finally been pulled; Retraction Watch has the details.

The most interesting part is that statement "The authors regret this approach", which I don't recall ever seeing in a situation like this. The comments at Retraction Watch build on this, and are quite interesting. There are many countries (and cultures) where it's considered acceptable (or at least a venial sin) to lift passages verbatim from other English-language papers when you're publishing in that language. I can see the attraction - I would hate to have to deliver a scientific manuscript in German, for example, which is the closest thing I have to a second language.

But I still wouldn't do it by copying and pasting big hunks of text, either. Reasons for resorting to that range from wanting to be absolutely sure that things are being expressed correctly in one's third or fourth language, all the way to "Isn't that how it's supposed to be done?" The latter situation obtains in parts of Asia, where apparently there's an emphasis in some schools on verbatim transcription of authoritative sources. There's an interesting cite to Yu Hua's China in Ten Words, where one of those ten words is "copycat" (shanzhai):

As a product of China’s uneven development, the copycat phenomenon has as many negative implications as it has positive aspects. The moral bankruptcy and confusion of right and wrong in China today, for example, find vivid expression in copycatting. As the copycat concept has gained acceptance, plagiarism, piracy, burlesque, parody, slander, and other actions originally seen as vulgar or illegal have been given a reason to exist; and in social psychology and public opinion they have gradually acquired respectability. No wonder that “copycat” has become one of the words most commonly used in China today. All of this serves to demonstrate the truth of the old Chinese saying: “The soil decides the crop, and the vine shapes the gourd.”

Four years ago I saw a pirated edition of [my novel] Brothers for sale on the pedestrian bridge that crosses the street outside my apartment; it was lying there in a stack of other pirated books. When the vendor noticed me running my eyes over his stock, he handed me a copy of my novel, recommending it as a good read. A quick flip through and I could tell at once that it was pirated. “No, it’s not a pirated edition,” he corrected me earnestly. “It’s a copycat.”

This tendency isn't a good fit with a lot of things, but it especially doesn't work out so well with scientific publication. I haven't seen it stated in so many words, but a key assumption is that every scientific paper is supposed to be different. If you take the time to read a new paper, you should learn something new and you should see something that you haven't seen before. It might be trivial, it might well be useless, but it should be at least slightly different from any other paper you've read or could find.

Now, as the Retraction Watch comments mention, some of these plagiarism cases are examples of "templating", where original (or sort of original) work was done, but the presentation of it was borrowed from an existing paper. That's not as bad as faking up results completely, of course, but you still have to wonder about the value of your work if you can lift big swaths of someone else's paper to describe it. Even when the manuscript itself has been written fresh from the ground up, there's plenty of stuff out in the literature like this. Someone gets an interesting reaction with a biphenyl and a zinc catalyst, and before you know it, there are all these quickie communications where someone else says "Hey, we got that with a naphthyl", or "Hey, we got that with a boron halide catalyst". Technically, yes, these are different, but we're in the land of least publishable units now, where the salami is sliced so thinly that you can read a newspaper through it.

So the authors regret this approach, do they? So does everyone else.

Comments (9) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

May 6, 2013

Ken Frazier at Merck: An Assessment

Email This Entry

Posted by Derek

Here's a fine profile of Merck's Ken Frazier at Forbes. Matthew Herper does a good job of showing the hole that Merck has been slowly sliding into over the past few years, and wonders if Frazier is going to be able to drag the company out of it:

But it is clear that Frazier still views himself through the prism of his lawyerly training–he has not yet grown into a commanding and decisive chief executive. He’s scrupulous about not making anyone else look bad–working almost too hard in interviews to be clear that Perlmutter’s predecessor was not fired–and seems to be afraid to be seen as making too many big changes. “I am a person who does not subscribe to the hero-CEO school of thought,” he says. His persona is the culmination of the careful lessons he learned from his long climb to the top and his masterful legal defense against the lawsuits related to the pain pill Vioxx, which saved Merck and got him the top job. In order to be a great leader, he’s going to have to unlearn them.

I don't subscribe much to the hero-CEO school, either, at least not for a company the size of Merck. But even for a huge company, I think a rotten CEO can do a lot more harm than a good one can help (there's some thermodynamic way to express that, I'm sure). Frazier is certainly not in that category, and I've enjoyed some of the things he's had to say in the past (although I've also wondered about the follow-through). I wonder, though: how much of what Merck needs is in Frazier's power to do anything about? Or any one person's?

Update: here's David Shaywitz at Forbes, wondering about similar issues and what biopharma CEOs can actually do about them.

Comments (14) + TrackBacks (0) | Category: Business and Markets

The Medical Periodic Table

Email This Entry

Posted by Derek

Here's the latest "medical periodic table", courtesy of this useful review in Chemical Communications. Element symbols in white are known to be essential in man. The ones with a blue background are found in the structures of known drugs, the orange ones are used in diagnostics, and the green ones are medically useful radioisotopes. (The paper notes that titanium and tantalum are colored blue due to their use in implants).
Medical%20periodic%20table.png
I'm trying to figure out a couple of these. Xenon I've heard of as a diagnostic (hyperpolarized and used in MRI of lung capacity), but argon? (The supplementary material for the paper says that argon plasma has been used locally to control bleeding in the GI tract). And aren't there marketed drugs with a bromine atom in them somewhere?

At any rate, the greyed-out elements end up that way through five routes, I think. Some of them (francium, and other high-atomic-number examples) are just too unstable (and thus impossible to obtain) for anything useful to be done with them. Others (uranium) are radioactive, but have not found a use that other radioisotopes haven't filled already. Then you have the "radioactive but toxic" category, the poster child of which is plutonium. (That said, I'm pretty sure that popular reports of its toxicity are exaggerated, but it still ain't vanilla pudding). Then you have the nonradioactive but toxic crowd - cadmium, mercury, beryllium and so on. (There's another question - aren't topical mercury-based antiseptics still used in some parts of the world? And if tantalum gets on the list for metal implants, what about mercury amalgam tooth fillings?) Finally, you have elements that are neither hot nor poisonous, but that no one has been able to find any medical use for (scandium, niobium, hafnium).

Scandium and beryllium, in fact, are my nominees for "lowest atomic-numbered elements that many people have never heard of", and because of nonsparking beryllium wrenches and the like, I think scandium might win out. I've never found a use for it myself, either. I have used a beryllium-copper wrench (they're not cheap) in a hydrogenation room.

The review goes on to detail the various classes of metal-containing drugs, most prominent of them being, naturally, the platinum anticancer agents. There are ruthenium complexes in the clinic in oncology, and some work has been done with osmium and iridium compounds. Ferrocenyl compounds have been tried several times over the years, often put in place of a phenyl ring, but none of them (as far as I know) have made it into the general pharmacopeia. What I didn't know was that titanocene dichloride has been in the clinic (but with disappointing results). And arsenic compounds have a long (though narrow) history in medicinal chemistry, but have recently made something of a comeback. The thioredoxin pathway seems to be a good fit for exotic elements - there's a gadolinium compound in development, and probably a dozen other metals have shown activity of one kind or another, both in oncology and against things like malaria parasites.

Many of these targets, though, are in sort of a "weirdo metal" category in the minds of most medicinal chemists, and that might not reflect reality very well. There's no reason why metal complexes wouldn't be able to inhibit more traditional drug targets as well, but that brings up another concern. For example, there have been several reports of rhodium, iridium, ruthenium, and osmium compounds as kinase inhibitors, but I've never quite been able to see the point of them, since you can generally get some sort of kinase inhibitor profile without getting that exotic. But what about the targets where we don't have a lot of chemical matter - protein/protein interactions, for example? Who's to say that metal-containing compounds wouldn't work there? But I doubt if that's been investigated to any extent at all - not many companies have such things in their compound collections, and it still might turn out to be a wild metallic goose chase to even look. No one knows, and I wonder how long it might be before anyone finds out.

In general, I don't think anyone has a feel for how such compounds behave in PK and tox. Actually "in general" might not even be an applicable term, since the number and types of metal complexes are so numerous. Generalization would probably be dangerous, even if our base of knowledge weren't so sparse, which sends you right back into the case-by-case wilderness. That's why a metal-containing compound, at almost any biopharma company, would be met with the sort of raised eyebrow that Mr. Spock used to give Captain Kirk. What shots these things have at becoming drugs will be in nothing-else-works areas (like oncology, or perhaps gram-negative antibiotics), or against exotic mechanisms in other diseases. And that second category, as mentioned above, will be hard to get off the ground, since almost no one tests such compounds, and you don't find what you don't test.

Comments (52) + TrackBacks (0) | Category: Cancer | Odd Elements in Drugs | Toxicology

May 3, 2013

Drug Assay Numbers, All Over the Place

Email This Entry

Posted by Derek

There's a truly disturbing paper out in PLoS ONE with potential implications for a lot of assay data out there in the literature. The authors are looking at the results of biochemical assays as a function of how the compounds are dispensed in them, pipet tip versus acoustic, which is the sort of idea that some people might roll their eyes at. But people who've actually done a lot of biological assays may well feel a chill at the thought, because this is just the sort of you're-kidding variable that can make a big difference.

Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets.

Lovely. There have been some alarm bells sounded before about disposable-pipet-tip systems. The sticky-compound problem is always out there, where various substances decide that they like the plastic walls of the apparatus a lot more than they like being in solution. That'll throw your numbers all over the place. And there have been concerns about bioactive substances leaching out of the plastic. (Those are just two recent examples - this new paper has several other references, if you're worried about this sort of thing).

This paper seems to have been set off by two recent AstraZeneca patents on the aforementioned EphB4 inhibitors. In the assay data tables, these list assay numbers as determined via both dispensing techniques, and they are indeed all over the place. One of the authors of this new paper is from Labcyte, the makers of the acoustic dispensing apparatus, and it's reasonable to suppose that their interactions with AZ called their attention to this situation. It's also reasonable to note that Labcyte itself has an interest in promoting acoustic dispensing technology, but that doesn't make the numbers any different. The fourteen compounds shown are invariably less potent via the classic pipet method, but by widely varying factors. So, which numbers are right?

The assumption would be that the more potent values have a better chance of being correct, because it's a lot easier to imagine something messing up the assay system than something making it read out at greater potency. But false positives certainly exist, too, so the authors used the data set to generate a possible pharmacophore for the compound series using both sets of numbers. And it turns out that the one from the acoustic dispensing runs gives you a binding model that matches pretty well with reality, while if you use the pipet data you get something broadly similar, but missing some important contributions from hydrophobic groups. That, plus the fact that the assay data shows a correlation with logP in the acoustic-derived data (but not so much with the pipet-derived numbers) makes it look like the sticky-compound effect might be what's operating here. But it's hard to be sure:

No previous publication has analyzed or compared such data (based on tip-based and acoustic dispensing) using computational or statistical approaches. This analysis is only possible in this study because there is data for both dispensing approaches for the compounds in the patents from AstraZeneca that includes molecule structures. We have taken advantage of this small but valuable dataset to perform the analyses described. Unfortunately it is unlikely that a major pharmaceutical company will release 100's or 1000's of compounds with molecule structures and data using different dispensing methods to enable a large scale comparison, simply because it would require exposing confidential structures. To date there are only scatter plots on posters and in papers as we have referenced, and critically, none of these groups have reported the effect of molecular properties on these differences between dispensing methods.

Acoustic.png
Some of those other references are to posters and meeting presentations, so this seems to be one of those things that floats around in the field without landing explicitly in the literature. One of the paper's authors was good enough to send along the figure shown, which brings some of these data together, and it's an ugly sight. This paper is probably doing a real service in getting this potential problem out into the cite-able world: now there's something to point at.

How many other datasets are hosed up because of this effect? Now there's an important question, and one that we're not going to have an answer for any time soon. For some sets of compounds, there may be no problems at all, while others (as that graphic shows) can be a mess. There are, of course, plenty of projects where the assay numbers seem (more or less) to make sense, but there are plenty of others where they don't. Let the screener beware.
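To see why a sticky compound hits tip-based serial dilutions so hard, consider that any fractional loss to the plastic compounds multiplicatively down the dilution series, so the wells at the bottom (where a potent compound's IC50 is actually read) are the furthest off nominal. A toy sketch, with made-up numbers (the 20% per-step loss is purely illustrative; real adsorption losses vary by compound):

```python
# Toy model: a 10-point, 1:3 serial dilution in which a "sticky" compound
# loses 20% of its material to the pipet tip / plastic at every transfer.
# The loss compounds multiplicatively with each dilution step.
loss_per_step = 0.20
top_nominal = 10_000.0  # nM, top concentration (assumed dispensed correctly)

for step in range(10):
    nominal = top_nominal / 3**step
    actual = top_nominal * ((1 - loss_per_step) / 3)**step
    print(f"step {step}: nominal {nominal:9.1f} nM, "
          f"actual {actual:9.1f} nM, fold off {nominal/actual:5.2f}x")
```

By the ninth transfer the real concentration is about 7.5-fold below nominal, so the dose-response curve is read against concentrations that aren't really there and the compound looks correspondingly weaker. Acoustic dispensing with direct dilution skips the transfer chain entirely, which is consistent with the pattern in the AZ tables: always less potent by pipet, but by compound-dependent factors.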

Update: here's a behind-the-scenes look at how this paper got published. It was not an easy path into the literature, by any means.

Second update: here's more about this at Nature Methods.

Comments (41) + TrackBacks (0) | Category: Drug Assays

May 2, 2013

Aveo Gets Bad News on Tivozanib

Email This Entry

Posted by Derek

The kinase inhibitor tivozanib (for renal cell carcinoma) was shot down this morning at an FDA committee hearing. There are going to be a lot of arguments about this decision, because feelings have been running high on both sides of the issue.

And this has been an issue for over a year now. As that FierceBiotech story puts it:

Tivozanib hit its primary endpoint, demonstrating a slim but statistically significant improvement in progression-free-survival of patients with advanced renal cell carcinoma when compared to Nexavar (sorafenib). But the sorafenib arm experienced a slightly better overall survival rate, and Aveo has been trying to explain it away ever since.

The developer had to start in the spring of 2012 at a pre-NDA meeting. According to the review document, "the FDA expressed concern about the adverse trend in overall survival in the single Phase III trial and recommended that the sponsor conduct a second adequately powered randomized trial in a population comparable to that in the US."

The Phase III in question was performed in Eastern Europe, and one of the outcomes of today's decision may be a reluctance to rely on that part of the world for pivotal trials. I'm honestly not sure how much of tivozanib's problems were due to that (if the data had been stronger, no one would be wondering). But if the patient population in the trial was far enough off the intended US market to concern the FDA, then there was trouble coming from a long way away.

Aveo, though, may not have had many options by this time. This is one of those situations where a smaller company has enough resources to barely get something through Phase III, so they try to do it as inexpensively as they can (thus Eastern Europe). By the time things looked dicey, there wasn't enough cash to do anything over, so they took what they had to the FDA and hoped for the best. The agency's suggestion to do a US trial must have induced some despair, since (1) they apparently didn't have the money to do it, and (2) this meant that the chances of approval on the existing data were lower than they'd hoped.

One of the other big issues that this decision highlights is in trial design. This was a "crossover" trial, where patients started out on one medication and then could be switched to another as their condition progressed. So many crossed over to the comparison drug (Nexavar, sorafenib) that it seems to have impaired the statistics of the trial. Were the overall survival numbers slightly better in the eventual Nexavar group because they'd been switched to that drug, or because they'd gotten tivozanib first? That's something you'd hope that a more expensive/well-run Phase III would have addressed, but in the same way that this result casts some doubt on the Eastern European clinical data, it casts some doubt on crossover trial design in this area.

Update: a big problem here was that there were many more patients who crossed over to tivozanib from Nexavar than the other way around. That's a design problem for you. . .

What a mess - and what a mess for Aveo, and their investors. I'm not sure if they've got anything else; it looks like they'd pretty much bet the company on this. Which must have been like coming to the showdown at the poker table with a low three-of-a-kind, knowing that someone else probably has it beat. . .

Comments (27) + TrackBacks (0) | Category: Cancer | Clinical Trials | Regulatory Affairs

E. O. Wilson's "Letters to a Young Scientist"

Email This Entry

Posted by Derek

I've been reading E. O. Wilson's new book, Letters to a Young Scientist. It's the latest addition to the list of "advice from older famous scientists" books, which also includes Peter Medawar's similarly titled Advice To A Young Scientist and what is probably the grandfather of the entire genre, Ramón y Cajal's Advice for a Young Investigator. A definite personal point of view comes across in this one, since its author is famously unafraid to express his strongly held opinions. There's some 100-proof Wilson in this book as well:

. . .Science is the wellspring of modern civilization. It is not just "another way of knowing", to be equated with religion or transcendental meditation. It takes nothing away from the genius of the humanities, including the creative arts. Instead it offers ways to add to their content. The scientific method has been consistently better than religious beliefs in explaining the origin and meaning of humanity. The creation stories of organized religions, like science, propose to explain the origin of the world, the content of the celestial sphere, and even the nature of time and space. These mythic accounts, based mostly on the dreams and epiphanies of ancient prophets, vary from one religion's belief to another. Colorful they are, and comforting to the minds of believers, but each contradicts all the others. And when tested in the real world they have so far proved wrong, always wrong.

And that brings up something else about all the books of this type: they're partly what their titles imply, guides for younger scientists. They're partly memoirs of their authors' lives (Francis Crick's What Mad Pursuit is in this category, although it has a lot of useful advice itself). And they're all attempts to explain what science really is and how it really works, especially to readers who may well not be scientists themselves.

Wilson does some of all three here, although he uses material from his own life and research mainly as illustrations of the advice he's giving. And that advice, I think, is almost always on target. He has sections on how to pick areas of research, methods to use for discovery, how to best spend your time as a scientist, and so on. The book is absolutely, explicitly aimed at those who want to make their mark by discovering new things, not at those who would wish to climb other sorts of ladders. (For example, he tells academic scientists "Avoid department-level administration beyond thesis committee chairmanships if at all fair and possible. Make excuses, dodge, plead, trade.") If your ambition is to become chairman of the department or a VP of this or that, this is not the book to turn to.

But I've relentlessly avoided being put onto the managerial track myself, so I can relate to a lot of what this book has to say. Wilson spent his life at Harvard, so much of his advice has an academic slant, but the general principles of it come through very clearly. Here's how to pick an area to concentrate on:

I believe that other experienced scientists would agree with me that when you are selecting a domain of knowledge in which to conduct original research, it is wise to look for one that is sparsely inhabited. . .I advise you to look for a chance to break away, to find a subject you can make your own. . .if a subject is already receiving a great deal of attention, if it has a glamorous aura, if its practitioners are prizewinners who receive large grants, stay away from that subject.

One of the most interesting parts of the book for me is its take on two abilities that most lay readers would take as prerequisites for a successful scientist: mathematical ability and sheer intelligence in general. The first is addressed very early in the book, in what may well become a famous section:

. . .If, on the other hand, you are a bit short in mathematical training, even very short, relax. You are far from alone in the community of scientists, and here is a professional secret to encourage you: many of the most successful scientists in the world today are mathematically no more than semiliterate.

He recommends making up this deficiency, as much as you find it feasible to do so, but he's right. The topic has come up around here - I can tell you for certain that the math needed to do medicinal chemistry is not advanced, and mostly consists of being able to render (and understand) data in a variety of graphical forms. If you can see why a log/log plot tends to give you straightened-out lines, you've probably got enough math to do med-chem. You'll also need to understand something about statistics, but (again) mostly in how to interpret it so you aren't fooled by data. Pharmacokinetics gets a bit more mathematical, and (naturally) molecular modeling itself is as math-heavy as anyone could want, but the chemistry end of things is not.
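That log/log point can be seen in a few lines: power-law data, y = a·x^b, curves on linear axes, but log(y) = log(a) + b·log(x) is a straight line, so an ordinary least-squares fit through the log-transformed points recovers the exponent. A minimal sketch (the values of a and b here are arbitrary):

```python
import math

# Power-law data: y = a * x**b curves on linear axes...
a, b = 2.5, 1.7
xs = [1, 2, 5, 10, 50, 100]
ys = [a * x**b for x in xs]

# ...but log(y) = log(a) + b*log(x) is linear, so a simple least-squares
# line through (log x, log y) recovers the exponent b as its slope.
lx = [math.log10(x) for x in xs]
ly = [math.log10(y) for y in ys]
n = len(lx)
mean_x = sum(lx) / n
mean_y = sum(ly) / n
slope = (sum((u - mean_x) * (v - mean_y) for u, v in zip(lx, ly))
         / sum((u - mean_x)**2 for u in lx))
print(f"fitted slope: {slope:.3f}")  # recovers b = 1.7
```

The same transformation is why potency data is nearly always handled as pIC50 (that is, -log10 of the IC50): it turns multiplicative fold-changes into additive ones you can eyeball.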

As for intelligence, see what you think about this:

Original discoveries cannot be made casually, not by anyone at any time or anywhere. The frontier of scientific knowledge, often referred to as the cutting edge, is reached with maps drawn by earlier investigators. . .But, you may well ask, isn't the cutting edge a place only for geniuses? No, fortunately. Work accomplished on the frontier defines genius, not just getting there. In fact, both accomplishments along the frontier and the final eureka moment are achieved more by entrepreneurship and hard work than by native intelligence. This is so much the case that in most fields most of the time, extreme brightness may be a detriment. It has occurred to me, after meeting so many successful researchers in so many disciplines, that the ideal scientist is smart only in an intermediate degree: bright enough to see what can be done but not so bright as to become bored doing it.

By "entrepreneurship", he doesn't mean forming companies. That's Wilson's term for opportunistic science - setting up some quick and dirty experiments around a new idea to see what might happen, and being open to odd results as indicators of a new direction to take your work. I completely endorse that, in case anyone cares. As for the intelligence part, you have to keep in mind that this is E. O. Wilson telling you that you don't need to be fearsomely intelligent to be successful, and that his scale for evaluating this quality might be calibrated a bit differently from the usual. As Tom Wolfe put it in his essay in Hooking Up, one of Wilson's defining characteristics has been that you could put him down almost anywhere on Earth and he'd be the smartest person in the room. (I should note that Wolfe's essay overall is not exactly a paean, but he knows not to underestimate the guy).

I think that intelligence falls under the "necessary but not sufficient" heading. And I probably haven't seen that many people operate whom the likes of E. O. Wilson would consider extremely smart, so I can't comment much on what happens at that end of the scale. But the phenomenon of people who score very highly on attempted measures of intelligence, but never seem to make much of themselves, is so common as to be a cliché. You cannot be dumb and make a success of yourself as a research scientist. But being smart guarantees nothing.

As an alternative to mathematical ability and (very) high intelligence, Wilson offers the prescription of hard work. "Scientists don't take vacations", he says, they take field trips. That might work out better if you're a field biologist, but not so well for (say) organic chemistry. And actually, I think that clearing your head with some time off can help out a great deal when you're bogged down in some topic. But having some part of your brain always on the case really is important. Breaks aside, long-term sustained attention to a problem is worth a lot, and not everyone is capable of it.

Here's more on the opportunistic side of things:

Polymer chemistry, computer programs of biological processes, butterflies of the Amazon, galactic maps, and Neolithic sites in Turkey are the kinds of subjects worthy of a lifetime of devotion. Once deeply engaged, a steady stream of small discoveries is guaranteed. But stay alert for the main chance that lies to the side. There will always be the possibility of a major strike, some wholly unexpected find, some little detail that catches your peripheral attention that might very well, if followed, enlarge or even transform the subject you have chosen. If you sense such a possibility, seize it. In science, gold fever is a good thing.

I know exactly what he's talking about here, and I think he's completely right. Many, many big discoveries have their beginnings in just this sort of thing. Isaac Asimov was on target when he said that the real sound of a breakthrough was not the cry of "Eureka!" but a puzzled voice saying "Hmm. That's funny. . ."

Well, the book has much more where all this comes from. It's short, which tempts a person to read through it quickly. I did, and found that this slighted some of the points it tries to make. It improved on a second pass, in my case, so you may want to keep this in mind.

Comments (14) + TrackBacks (0) | Category: Book Recommendations | Who Discovers and Why

May 1, 2013

Best Sites for a Medicinal Chemist?

Email This Entry

Posted by Derek

I'm going to be traveling today, mostly through airports without good Wi-Fi (for which read "Wi-Fi that they don't want me to pay $10 for during my 90-minute layover"). But I wanted to put out a question sent in by a reader that I think would be worthwhile:

What are the best web sites for a medicinal chemist to have bookmarked? Resources for medicine and biology, organic chemistry, analytical chemistry, and pharma development would be appropriate. There are shorter lists available here and there, but I don't think that there's One Big List that's easily findable, and I think that there needs to be one. Suggestions in the comments - that should put together something pretty useful.

Comments (31) + TrackBacks (0) | Category: Blog Housekeeping

April 30, 2013

Is Glyphosate Poisoning Everyone?

Email This Entry

Posted by Derek

I've had a few people send along this article, on the possible toxicological effects of the herbicide glyphosate, wondering what I make of it as a medicinal chemist. It's getting a lot of play in some venues, particularly the news-from-Mother-Nature outlets. After spending some time reading this paper over, and looking through the literature, I've come to a conclusion: it is, unfortunately, a load of crap.

The authors believe that glyphosate is responsible for pretty much every chronic illness in humans, and a list of such is recited several times during the course of the long, rambling manuscript. Their thesis is that the compound is an inhibitor of the metabolizing CYP enzymes, of the biosynthesis of aromatic amino acids by gut bacteria, and of sulfate transport. But the evidence given for these assertions, and their connection with disease, while it might look alarming and convincing to someone who has never done research or read a scientific paper, is a spiderweb of "might", "could", "is possibly", "associated with", and so on. The minute you look at the actual evidence, things disappear.

Here's an example - let's go right to the central thesis that glyphosate inhibits CYP enzymes in the liver. Here's a quote from the paper itself:

A study conducted in 1998 demonstrated that glyphosate inhibits cytochrome P450 enzymes in plants [116]. CYP71s are a class of CYP enzymes which play a role in detoxification of benzene compounds. An inhibitory effect on CYP71B1l extracted from the plant, Thlaspi arvensae, was demonstrated through an experiment involving a reconstituted system containing E. coli bacterial membranes expressing a fusion protein of CYP71B fused with a cytochrome P450 reductase. The fusion protein was assayed for activity level in hydrolyzing a benzo(a)pyrene, in the presence of various concentrations of glyphosate. At 15 microM concentration of glyphosate, enzyme activity was reduced by a factor of four, and by 35 microM concentration enzyme activity was completely eliminated. The mechanism of inhibition involved binding of the nitrogen group in glyphosate to the haem pocket in the enzyme.
A more compelling study demonstrating an effect in mammals as well as in plants involved giving rats glyphosate intragastrically for two weeks [117]. A decrease in the hepatic level of cytochrome P450 activity was observed. As we will see later, CYP enzymes play many important roles in the liver. It is plausible that glyphosate could serve as a source for carcinogenic nitrosamine exposure in humans, leading to hepatic carcinoma. N-nitrosylation of glyphosate occurs in soils treated with sodium nitrite [118], and plant uptake of the nitrosylated product has been demonstrated [119]. Preneoplastic and neoplastic lesions in the liver of female Wistar rats exposed to carcinogenic nitrosamines showed reduced levels of several CYP enzymes involved with detoxification of xenobiotics, including NADPH-cytochrome P450 reductase and various glutathione transferases [120]. Hence this becomes a plausible mechanism by which glyphosate might reduce the bioavailability of CYP enzymes in the liver.
Glyphosate is an organophosphate. Inhibition of CYP enzyme activity in human hepatic cells is a well-established property of organophosphates commonly used as pesticides [121]. In [122], it was demonstrated that organophosphates upregulate the nuclear receptor, constitutive androstane receptor (CAR), a key regulator of CYP activity. This resulted in increased synthesis of CYP2 mRNA, which they proposed may be a compensation for inhibition of CYP enzyme activity by the toxin. CYP2 plays an important role in detoxifying xenobiotics [123].

Now, that presumably sounds extremely detailed and impressive if you don't know any toxicology. What you wouldn't know from reading through all of it is that their reference 121 actually tested glyphosate against human CYP enzymes. In fact, you wouldn't know that anyone has ever actually done such an experiment, because all the evidence adduced in the paper is indirect - this species does that, so humans might do this, and this might be that, because this other thing over here might turn out to be something else. But the direct evidence is available, and it is not cited - in fact, it's explicitly ignored. Reference 121 showed that glyphosate was inactive against all human CYP isoforms except 2C9, where it had an IC50 of 3.7 micromolar. You would also not know from this new paper that there is no way that ingested glyphosate could possibly reach levels in humans high enough to inhibit CYP2C9 at that potency.
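To put that potency number in perspective, here's a rough back-of-the-envelope sketch. The 3.7 micromolar IC50 is from the paper's own reference 121; the 10 nM plasma concentration is an illustrative assumption of mine (real measured human exposures are, if anything, lower), and the simple C/(C+IC50) formula is the standard approximation that ignores substrate competition:

```python
def fractional_inhibition(conc_uM, ic50_uM):
    # Standard approximation: fraction of enzyme activity lost is
    # f = [I] / ([I] + IC50), for an inhibitor at concentration [I]
    return conc_uM / (conc_uM + ic50_uM)

ic50_cyp2c9 = 3.7   # µM, the value reported in reference 121
plasma = 0.010      # µM (10 nM) -- an assumed, illustrative exposure, not a measurement
f = fractional_inhibition(plasma, ic50_cyp2c9)
print(f"Estimated CYP2C9 inhibition at that exposure: {f:.2%}")
```

Even at that generously assumed plasma level, you'd be knocking down well under one per cent of the enzyme's activity - which is the kind of sanity check the paper never performs.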

I'm not going to spend more time demolishing every point this way; this one is representative. This paper is a tissue of assertions and allegations, a tendentious brief for the prosecution that never should have been published in such a form in any scientific journal. Ah, but it's published in the online journal Entropy, from the MDPI people. And what on earth does this subject have to do with entropy, you may well ask? The authors managed to work that into the abstract, saying that glyphosate's alleged effects are an example of "exogenous semiotic entropy". And what the hell is that, you may well ask? Why, it's a made-up phrase making its first appearance, that's what it is.

But really, all you need to know is that MDPI is the same family of "journals" that published the (in)famous Andrulis "Gyres are the key to everything!" paper. And then made all kinds of implausible noises about layers of peer review afterwards. No, this is one of the real problems with sleazy "open-access" journals. They give the whole idea of open-access publishing a black eye, and they open the floodgates to whatever ridiculous crap comes in, which then gets "peer reviewed" and "published" in an "actual scientific journal", where it can fool the credulous and mislead the uninformed.

Comments (98) + TrackBacks (0) | Category: The Scientific Literature | Toxicology

Travel (University of Wisconsin)

Email This Entry

Posted by Derek

I'm in Madison, Wisconsin, where I'll be giving the Organic Chemistry McElvain Seminar later on today. The title of my talk, which I'm not sure if I'll live up to or not, is "Medicinal Chemistry: Getting Old, Or Just Starting to Grow Up?". It's at 3:30 in the Seminar Hall, room 1315, if you're passing through (!)

Comments (13) + TrackBacks (0) | Category: Blog Housekeeping

April 29, 2013

Costing Just Too Much

Email This Entry

Posted by Derek

There's been a lot of rumbling recently about the price of new cancer drugs (see this article for a very typical reaction). It's a topic that's come up around here many times, as would be only natural - scrolling back in this category will turn up a whole list of posts.

I see that Bernard Munos has weighed in on the topic in Forbes. He's not being Doctor Feelgood about it, either:

All this adds up to a giant pushback against the astronomical drug prices that are becoming commonplace. It seems that price tags of $100,000 or above are becoming the norm. Of 12 cancer drugs approved in 2012, 11 cost more than that. As more drugs are offered at that level and their sponsors get away with it, it seems to set a floor that emboldens drug companies to push the envelope. They are badly misjudging the brewing anger.

The industry’s standard defense has been to run warm-hearted stories about the wonders of biomedical innovation, and to point out that drugs represent only 10% of healthcare costs. Both arguments miss the point. Everyone loves biomedical innovation, but the industry’s annual output of 25 to 35 new drugs is a lousy return for its $135 billion R&D spending. . .

That's a real problem. We in the industry concentrate on our end of it, where we wonder how we can spend this much on our discovery efforts and survive. But there are several sides to the issue. From one angle, as long as we can jack the prices up high enough on what does get through, we can (in theory) stay in business. But that can't go on forever. There are limits to what we can charge, and we're starting to bang up against them, in the way that a Martingale player at a roulette table learns why casinos have betting limits at the tables. It's not a fun barrier to bump into.

And there's the problem Munos brings up, which is one that investors have been getting antsy about for some time: return on capital. The huge amounts of money going out the door are (at least in some cases) not sustainable. But we're not spending our money as if there were a problem:

Perhaps the mood would be different if the industry was a model of efficiency, but this is hardly the case. Examples of massive waste are on display everywhere: Pfizer wants to flatten a 750,000-square-foot facility in Groton, CT, and won’t entertain proposals for alternative uses. Lilly writes off over $100 million for a half-built insulin plant in Virginia, only to restart the project a few years later in Indiana. AstraZeneca shutters its R&D labs at Alderley Park and goes on to spend $500 million on a new facility in Cambridge.

Munos is right. We have enough trouble already without asking for more. Don't we?

Comments (37) + TrackBacks (0) | Category: Cancer | Drug Prices | Why Everyone Loves Us

Just Work on the Winners

Email This Entry

Posted by Derek

That Lamar Smith proposal I wrote about earlier this morning can be summarized as "Why don't you people just work on the good stuff?" And I thought it might be a good time to link back to a personal experience I had with just that worldview. As you'll see from that story, all they wanted was for us to meet the goals that we put down on our research goals forms. I was told, face to face, that the idea was that this would make us put our efforts into the projects that were most likely to succeed. Who could object to that? Right?

But since we here in the drug industry are so focused on making money, y'know, you'd think that we would have even more incentives to make sure that we're only working on the things that are likely to pay off. And we can't do it. Committees vet proposals, managers look over progress reports, presentations are reviewed and data are sifted, all to that end, because picking the wrong project can sink you good and proper, while picking the right one can keep you going for years to come. But we fail all the time. A good 90% of the projects that make it into the clinic never make it out the other end, and the attrition even before getting into man is fierce indeed. We back the wrong horses for the best reasons available, and sometimes we back the right ones for reasons that end up evaporating along the way. This is the best we can do, the state of the art, and it's not very good at all.

And that's in applied research, with definite targets and endpoints in mind the whole way through. Now picture what it's like in the basic research end of things, which is where a lot of NSF and NIH money is (and should be) going. It is simply not possible to say where a lot of these things are going, and which ones will bear fruit. If you require everyone to sign forms saying that Yes, This Project Has Immediate Economic and National Security Impact, then the best you can hope for is to make everyone lie to you.

Update: a terrific point from the comments section: "(This) argument was often made when firms were reducing costs by shutting down particular pieces of R&D. The general idea was that the firm would stop doing the things that were unlikely to work, and focus more on the things that would work, and hence improve financial returns on R&D. This argument is implausible because successful R&D is wildly profitable. Financial returns are only dragged down by the things that don't work. Therefore, any company that could REALLY distinguish with any precision between winners and losers on a prospective basis should double or triple its R&D investment, and not cut it."
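That commenter's logic can be put in toy-portfolio terms. Every number below is made up purely for illustration (the success rate loosely matches the ~90% clinical attrition mentioned above):

```python
# Toy R&D portfolio: why "just cut the losers" assumes the one thing nobody has.
# All figures are invented for illustration only.
n_projects = 100
p_success = 0.10           # roughly 90% of clinical-stage projects fail
cost_per_project = 50.0    # $M, spent whether the project works or not
payoff_if_success = 800.0  # $M, successful drugs are "wildly profitable"

# Fund everything, since you can't tell winners from losers in advance:
expected_return = n_projects * (p_success * payoff_if_success - cost_per_project)
print(f"Fund all projects, expected net: ${expected_return:.0f}M")

# If you REALLY could pick winners prospectively, you'd fund only those:
perfect_picking = n_projects * p_success * (payoff_if_success - cost_per_project)
print(f"With perfect foresight:         ${perfect_picking:.0f}M")
```

The point of the arithmetic: a firm with genuine winner-picking ability wouldn't shrink its R&D to improve returns, it would pour more money in, because every dollar not wasted on a loser multiplies. Cutting R&D is what you do when you admit you can't tell the difference.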

Comments (12) + TrackBacks (0) | Category: Current Events | Who Discovers and Why

A Dumb Proposal for the NSF

Email This Entry

Posted by Derek

This is a bad idea: Representative Lamar Smith (R-TX) is circulating a draft of a bill to change the way the National Science Foundation reviews grant applications. Science magazine obtained a copy of the current version, and it would require the NSF to certify that all research it funds is:

1) "…in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;

2) "… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and

3) "…not duplicative of other research projects being funded by the Foundation or other Federal science agencies."

If we could fund things this way, we would be living in a different world entirely. Research, though, does not and cannot follow these guidelines. A lot of stuff gets looked into that doesn't work out, and a lot of things that do work out don't look like they're ever going to be of much use for anything. We are not smart enough to put bets down on only the really important stuff up front - and by "we", I mean the entire scientific community, and the director of the NSF, and even Representative Lamar Smith.

Useless and even bizarre things get funded under the current system, of that I have no doubt. But telling everyone that all research has to be certified as good for something is silly grandstanding. What will happen is that people will rewrite their grant applications in order to make them look attractive under whatever rules apply - which, naturally, is how it's always worked. So I'm not saying that Rep. Smith's proposal would Destroy Science in America. That would take a lot more work. No, what I'm saying is that Rep. Smith's view of the world is flawed. He seems to believe that legislation of this sort is the answer to large, difficult problems (witness his work on the Stop Online Piracy Act). As such, he would seem to be exactly the sort of person that I wish could be barred from serving as an elected official.

If I were Lamar Smith, I would probably be thinking of a bill that I could introduce to that effect (the Stop Overreaching Legislators Act?) But I'm not the sort of person who thinks that the world can be fixed up by passing the right laws and signing the right papers. I'm more in line with Mark Twain, when he said that no one's life, liberty, or property was safe while the legislature was in session.

Note: more thoughts added here, later in the day

Comments (23) + TrackBacks (0) | Category: Current Events

April 26, 2013

Research Fraud, From A Master Fraud Artist

Email This Entry

Posted by Derek

A couple of years back, I wrote about the egregious research fraud case of Diederik Stapel. Here's an extraordinary follow-up in the New York Times Magazine, which will give you the shivers. Here, try this part out:

In one experiment conducted with undergraduates recruited from his class, Stapel asked subjects to rate their individual attractiveness after they were flashed an image of either an attractive female face or a very unattractive one. The hypothesis was that subjects exposed to the attractive image would — through an automatic comparison — rate themselves as less attractive than subjects exposed to the other image.

The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me. . .

. . .Doing the analysis, Stapel at first ended up getting a bigger difference between the two conditions than was ideal. He went back and tweaked the numbers again. It took a few hours of trial and error, spread out over a few days, to get the data just right.

He said he felt both terrible and relieved. The results were published in The Journal of Personality and Social Psychology in 2004. “I realized — hey, we can do this,” he told me.

And that's just what he did, for the next several years, leading to scores of publications and presentations on things he had just made up. In light of that Nature editorial statement I mentioned yesterday, this part seems worth thinking on:

. . . The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. . .

It may well be. The temptation of spicing up the results is always there, in any branch of science, and it's our responsibility to resist it. That means not only resisting the opportunities to fool others, it means resisting fooling ourselves, too, because who would know better what we'd really like to hear? Reporting only the time that the idea worked, not the other times when it didn't. Finding ways to explain away the data that would invalidate your hypothesis, but giving the shaky stuff in your favor the benefit of the doubt. N-of-1 experiments taken as facts. No, not many people will go as far as Diederik Stapel (or could, even if they wanted to - he was quite talented at fakery). Unfortunately, things go on all the time that might differ from him in degree, but not in kind.

Comments (27) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

The Portable Chemist's Consultant

Email This Entry

Posted by Derek

I wanted to mention a project of Prof. Phil Baran of Scripps and his co-authors, Yoshihiro Ishihara and Ana Montero. It's called the Portable Chemist's Consultant, and it's available for iPads here. And here's a web-based look at its features. Baran was good enough to send me an evaluation copy, so I've had a chance to look through it in detail.

It's clearly based on his course in heterocyclic chemistry, and the chapters on pyridines and other heterocycles read like very well-thought-out review articles. But they also take advantage of the iPad's interface, in that specific transformations are shown in detail (with color and animation), and each of these can be expanded to a wider presentation and a thorough list of references (which are linked in their turn). The "Consumer Reports" style tables of recommended synthetic methods at the end of each section seem very useful, too, although they might need some notation for how much experimental support there is for each combination. For an overview of these topics, though, I doubt if anyone could do this better; I became a more literate heterocyclic chemist just by flipping through things. (Here's a video clip of some of these features in action).

So, do I have any reservations? A few. One of the bigger ones (which I'm told that Baran and his team are addressing) might sound trivial: I'm not sure about the title. As it stands, "The Portable Heterocyclic Chemistry Consultant" would be a much more accurate one, because there are large swaths of chemistry that fall within its current subtitle ("A Survival Guide for Discovery, Process, and Radiolabeling") which are not even touched on. For example, scale-up chemistry is mentioned on the cover, but in the current version of the book I didn't really see anything that was of particular relevance to actual scale-up work (things like the feasibility of solvent switching, heat transfer effects and reaction thermodynamics, run-to-run variability and potential purification methods, reagent sourcing, etc.) For medicinal chemists, I can say that the focus is completely on just the synthetic organic end of things; there's nothing on the behavior of any of the heterocyclic systems in vivo (pharmacokinetic trends, routes of metabolism, known toxicity problems, and so on). There's also nothing on spectral characterization, or any analytical chemistry of any sort, and I found no mention of radiolabeling (although I'd be glad to be corrected on that).

So for these reasons, it's a very academic work, but a very good one of its type. And Prof. Baran tells me that it's being revised constantly (at no charge to previous purchasers), and that these sorts of topics are in the works for later versions. If this book is indeed one of those gifts that keeps on giving, then it's a bargain as it stands, but (at the same time) I think that potential buyers should be aware of what they're getting in the current version.

My second reservation is technological. The book is only available on the iPad, and I'm not completely sure that this is a good idea. There's no way that it could be as useful in print, but a web-based interface would still be fine. (Managing ownership and sales is a lot easier in Apple's ecosystem, to be sure). And I'm not sure how many organic chemists own iPads yet. Baran himself seemed a bit surprised when he found out that I don't own one myself (I borrowed a colleague's to have a look). The most common reaction I've had when I tell people about the "PCC" is to say that they don't own an iPad, either, and to ask if there's any other way they can read it. Another problem is that the people that do have iPads certainly don't take them to the lab bench, which is where a work like this would be most useful. On the other hand, plain old computers are ubiquitous at the bench, thanks to electronic lab notebooks and the like.

All this said, though, if you do own an iPad and need to know about heterocyclic chemistry, you should have a look at this work immediately. If not, well, it's well worth keeping an eye on - these are early days.

Comments (15) + TrackBacks (0) | Category: Book Recommendations | Chemical News

April 25, 2013

More Single-Cell Magnetic Imaging

Email This Entry

Posted by Derek

Earlier this year, I wrote about a method to do NMR experiments at the cellular level or below. A new paper uses this same phenomenon (nitrogen-vacancy defects near the surface of diamond crystals) to do magnetic imaging of individual bacteria.

It's well known that many bacteria have "magnetosome" structures that allow them to sense and react to magnetic fields. If you let them wander over the surface of one of these altered diamond crystals, you can use the single-atom unpaired electrons as sensors. This team (several groups at Harvard and at Berkeley) was able to get sub-cellular resolution, and correlate that with real-time optical images of the bacteria (Magnetospirillum magneticum). It's very odd to see images of single bacteria with their field strengths looking like little bar magnets, but there they are. What we'll find by looking at magnetic fields inside individual cells, I have absolutely no idea, but I hope for all kinds of interesting and baffling things. I wonder what you'd get when mammalian cells take up magnetic nanoparticles, for example?

In other news, it's already late April, and things are already far enough along for me to talk about something on the blog as having happened "earlier this year". Sheesh.

Comments (0) + TrackBacks (0) | Category: General Scientific News

Towards Better Papers, With Real Results in Them

Email This Entry

Posted by Derek

This has to be a good thing. From the latest issue of Nature comes news of an initiative to generate more reproducible papers:

From next month, Nature and the Nature research journals will introduce editorial measures to address the problem by improving the consistency and quality of reporting in life-sciences articles. To ease the interpretation and improve the reliability of published results we will more systematically ensure that key methodological details are reported, and we will give more space to methods sections. We will examine statistics more closely and encourage authors to be transparent, for example by including their raw data. . .

. . .We recognize that there is no single way to conduct an experimental study. Exploratory investigations cannot be done with the same level of statistical rigour as hypothesis-testing studies. Few academic laboratories have the means to perform the level of validation required, for example, to translate a finding from the laboratory to the clinic. However, that should not stand in the way of a full report of how a study was designed, conducted and analysed that will allow reviewers and readers to adequately interpret and build on the results.

I hope that Science, the Cell journals at Elsevier, and the other leading outlets for such results will follow through with something similar. In this time of online supplementary info and basically unlimited storage ability, there's no reason not to disclose as much information as possible in a scientific publication. And the emphasis on statistical rigor and possible sources of error is just what's needed as well. Let's see who follows suit first, and congratulate them. And let's see who fails to respond, and treat them appropriately, too.

Comments (7) + TrackBacks (0) | Category: The Scientific Literature

What The Heck Does "Epigenetic" Mean, Anyway?

Email This Entry

Posted by Derek

A lot of people (and I'm one of them) have been throwing the word "epigenetic" around a lot. But what does it actually mean - or what is it supposed to mean? That's the subject of a despairing piece from Mark Ptashne of Sloan-Kettering in a recent PNAS. He noted this article in the journal, one of their "core concepts" series, and probably sat down that evening to write his rebuttal.

When we talk about the readout of genes - transcription - we are, he emphasizes, talking about processes that we have learned many details about. The RNA Polymerase II complex is very well conserved among living organisms, as well it should be, and its motions along strands of DNA have been shown to be very strongly affected by the presence and absence of protein transcription factors that bind to particular DNA regions. "All this is basic molecular biology, people", he does not quite say, although you can pick up the thought waves pretty clearly.

So far, so good. But here's where, conceptually, things start going into the ditch:

Patterns of gene expression underlying development can be very complex indeed. But the underlying mechanism by which, for example, a transcription activator activates transcription of a gene is well understood: only simple binding interactions are required. These binding interactions position the regulator near the gene to be regulated, and in a second binding reaction, the relevant enzymes, etc., are brought to the gene. The process is called recruitment. Two aspects are especially important in the current context: specificity and memory.

Specificity, naturally, is determined by the location of regulatory sequences within the genome. If you shuffle those around deliberately, you can make a variety of regulators work on a variety of genes in a mix-and-match fashion (and indeed, doing this is the daily bread of molecular biologists around the globe). As for memory, the point is that you have to keep recruiting the relevant enzymes if you want to keep transcribing; these aren't switches that flip on or off forever. And now we get to the bacon-burning part:

Curiously, the picture I have just sketched is absent from the Core Concepts article. Rather, it is said, chemical modifications to DNA (e.g., methylation) and to histones— the components of nucleosomes around which DNA is wrapped in higher organisms—drive gene regulation. This obviously cannot be true because the enzymes that impose such modifications lack the essential specificity: All nucleosomes, for example, “look alike,” and so these enzymes would have no way, on their own, of specifying which genes to regulate under any given set of conditions. . .

. . .Histone modifications are called “epigenetic” in the Core Concepts article, a word that for years has implied memory . . . This is odd: It is true that some of these modifications are involved in the process of transcription per se—facilitating removal and replacement of nucleosomes as the gene is transcribed, for example. And some are needed for certain forms of repression. But all attempts to show that such modifications are “copied along with the DNA,” as the article states, have, to my knowledge, failed. Just as transcription per se is not “remembered” without continual recruitment, so nucleosome modifications decay as enzymes remove them (the way phosphatases remove phosphates put in place on proteins by kinases), or as nucleosomes, which turn over rapidly compared with the duration of a cell cycle, are replaced. For example, it is simply not true that once put in place such modifications can, as stated in the Core Concepts article, “lock down forever” expression of a gene.

Now it does happen, Ptashne points out, that some developmental genes, once activated by a transcription factor, do seem to stay on for longer periods of time. But this takes place via feedback loops - the original gene, once activated, produces the transcription factor that causes another gene to be read off, and one of its products is actually the original transcription factor for the first gene, which then causes the second to be read off again, and so on, pinging back and forth. But "epigenetic" has been used in the past to imply memory, and modifying histones is not a process with enough memory in it, he says, to warrant the term. They are ". . .parts of a response, not a cause, and there is no convincing evidence they are self-perpetuating".
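That feedback-loop sort of memory can be sketched numerically. Here's a toy model (all parameters arbitrary, chosen only so the math latches): two genes whose products activate each other will flip to an "on" state after a transient pulse of activator and stay there indefinitely, with no covalent mark being copied anywhere:

```python
def run(pulse_steps, total_steps=300, dt=0.1, k=2.0, decay=0.5):
    # Two genes, A and B, whose products activate each other via
    # Hill-type terms; a transient stimulus drives A for the first
    # `pulse_steps` steps. Simple Euler integration.
    a = b = 0.0
    for t in range(total_steps):
        stim = 1.0 if t < pulse_steps else 0.0
        da = k * b**2 / (1 + b**2) + stim - decay * a
        db = k * a**2 / (1 + a**2) - decay * b
        a, b = a + dt * da, b + dt * db
    return a, b

on = run(pulse_steps=50)   # transient activator pulse, then nothing
off = run(pulse_steps=0)   # no pulse: the loop never switches on
print(f"with pulse: A={on[0]:.2f}   without pulse: A={off[0]:.2f}")
```

Long after the stimulus is gone, the pulsed system sits at its high steady state while the unpulsed one stays at zero - memory from wiring, exactly as Ptashne describes, rather than from any heritable modification.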

What we have here, as Strother Martin told us many years ago, is a failure to communicate. The biologists who have been using the word "epigenetic" in its original sense (which Ptashne and others would tell you is not only the original sense, but the accurate and true one), have seen its meaning abruptly hijacked. (The Wikipedia entry on epigenetics is actually quite good on this point, or at least it was this morning). A large crowd that previously paid little attention to these matters now uses "epigenetic" to mean "something that affects transcription by messing with histone proteins". And as if that weren't bad enough, articles like the one that set off this response have completed the circle of confusion by claiming that these changes are somehow equivalent to genetics itself, a parallel universe of permanent changes separate from the DNA sequence.

I sympathize with him. But I think that this battle is better fought on the second point than the first, because the first one may already be lost. There may already be too many people who think of "epigenetic" as meaning something to do with changes in expression via histones, nucleosomes, and general DNA unwinding/presentation factors. There really does need to be a word to describe that suite of effects, and this (for better or worse) now seems as if it might be it. But the second part, the assumption that these are necessarily permanent, instead of mostly being another layer of temporary transcriptional control, that does need to be straightened out, and I think that it might still be possible.

Comments (17) + TrackBacks (0) | Category: Biological News

April 24, 2013

A New Book on Longevity Research

Email This Entry

Posted by Derek

The University of Chicago Press has sent along a copy of a new book by DePaul professor Ted Anton, The Longevity Seekers. It's a history of the last thirty years or so of advances in understanding the biochemical pathways of aging. As you'd imagine, much of it focuses on sirtuins, but many other discoveries get put into context as well. There are also thoughts on what this whole story tells us about medical research, the uses of model animal systems, about the public's reaction to new discoveries, and what would happen if (or when) someone actually succeeds in lengthening human lifespan. (That last part is an under-thought topic among people doing research in the field, in my experience, at least in print).

Readers will be interested to note that Anton uses posts and comments on this blog as source material in some places, when he talks about the reaction in the scientific community to various twists and turns in the story. (You'll be relieved to hear that he's also directly interviewed almost all the major players in the field, as well!) If you're looking for a guide to how the longevity field got to where it is today and how everything fits together so far, this should get you up to speed.

Comments (17) + TrackBacks (0) | Category: Aging and Lifespan | Book Recommendations

Watching PARP1 Inhibitors Fail To Work, Cell By Cell

Email This Entry

Posted by Derek

Here's something that's been sort of a dream of medicinal chemists and pharmacologists, and now can begin to be realized: single-cell pharmacokinetics. For those outside the field, you should know that we spend a lot of time on our drug candidates, evaluating whether they're actually getting to where we want them to. And there's a lot to unpack in that statement: the compound (if it's an oral dose) has to get out of the gut and into the bloodstream, survive the versatile shredding machine of the liver (which is where all the blood from the gut goes first), and get out into the general circulation.

But all destinations are not equal. Tissues with greater blood flow are always going to see more of any compound, for starters. Compounds can (and often do) stick to various blood components preferentially (albumin, red blood cells themselves, etc.), and ride around that way, which can be beneficial, problematic, or a complete non-issue, depending on how the med-chem gods feel about you that week. The brain is famously protected from the riff-raff in the blood supply, so if you want to get into the CNS, you have more to think about. If your compound is rather greasy, it may find other things it likes to stick to rather than hang around in solution anywhere.

And we haven't even talked about the cellular level yet. Is your target on the outside of the cells, or do you have to get in? If you do, you might find your compounds being pumped right back out. There are ongoing nasty arguments about compounds being pumped in in the first place, too, as opposed to just soaking through the membranes. The inside of a cell is a strange place, too, once you're there. The various organelles and structures all have their own affinities for different sorts of compounds, and if you need to get into the mitochondria or the nucleus, you've got another membrane barrier to cross.
At this point, things really start to get fuzzy. It's only been in recent years that it's been possible to follow the traffic of individual species inside a cell, and it's still not trivial, by any means. Some of the techniques used to do it (fluorescent tags of various kinds) also can disturb the very systems you're trying to study. This latest paper uses such a fluorescent label, so you have to keep that in mind, but it's still quite impressive. The authors took a poly(ADP-ribose) polymerase 1 (PARP1) inhibitor (part of a class that has had all kinds of trouble in the clinic, despite a lot of biological rationale), attached a fluorescent tag, and watched in real time as it coursed through the vasculature of a tumor (on a time scale of seconds), soaked out into the interstitial space (minutes), and was taken up into the cells themselves (within an hour). Looking more deeply, they could see the compound accumulating in the nucleus (where PARP1 is located), so all indications are that it really does reach its target, and in sufficient amounts to have an effect.

But since it doesn't, there must be something about PARP1 and tumor biology that we're not quite grasping. Inhibiting DNA repair by this mechanism doesn't seem to be the death blow that we'd hoped for, but we now know that that's where to look to figure out why these inhibitors fail. Blaming some problems of delivery and distribution won't cut it.

Comments (24) + TrackBacks (0) | Category: Cancer | Pharmacokinetics

April 23, 2013

IBM And The Limits of Transferable Tech Expertise

Email This Entry

Posted by Derek

Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.

Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:

Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.

And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for them - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).

One of the big mistakes that people make is in thinking that "technology" is a single category of transferable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.

And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.

Note: it's been pointed out in the comments that the cost-per-base of DNA sequencing has been dropping even faster than Moore's Law would predict. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .
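The arithmetic behind that comparison is just exponential decay. Here's a minimal sketch; the halving periods (24 months for a Moore's-Law-style decline, 6 months for a sequencing-style one) are illustrative round numbers I've assumed for the example, not measured figures:

```python
# Illustrative only: compare two exponential cost declines with assumed
# halving periods. The 24-month and 6-month figures are hypothetical
# round numbers chosen to show how quickly the gap compounds.

def cost_factor(months: float, halving_period: float) -> float:
    """Fraction of the starting cost remaining after `months`."""
    return 0.5 ** (months / halving_period)

for years in (1, 2, 4):
    months = 12 * years
    moore = cost_factor(months, 24.0)   # Moore's-Law-style decline
    seq = cost_factor(months, 6.0)      # faster, sequencing-style decline
    print(f"{years} yr: Moore-style x{moore:.3f}, sequencing-style x{seq:.4f}")
```

Under those assumed rates, after four years the sequencing-style cost has fallen by a factor of 256 while the Moore-style cost has fallen by a factor of 4 - which is the sense in which one curve outruns the other.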

Comments (17) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Drug Industry History

Pseudoenzymes: Back From the Dead as Targets?

Email This Entry

Posted by Derek

There's a possible new area for drug discovery that's coming from a very unexpected source: enzymes that don't do anything. About ten years ago, when the human genome was getting its first good combing-through, one of the first enzyme categories to get the full treatment was the kinases. But about ten per cent of them, on closer inspection, seemed to lack one or more key catalytic residues, leaving them with no known way to be active. They were dubbed (with much puzzlement) "pseudokinases", with their functions, if any, unknown.

As time went on and sequences piled up, the same situation was found for a number of other enzyme categories. One family in particular, the sulfotransferases, seems to have at least half of its putative members inactivated, which doesn't make a lot of sense, because these things also seem to be under selection pressure. So they're doing something, but what?

Answers are starting to be filled in. Here's a paper from last year, on some of the possibilities, and this article from Science is an excellent survey of the field. It turns out that many of these seem to have a regulatory function, often on their enzymatically active relations. Some of these pseudoenzymes retain the ability to bind their original substrates, and those events may also have a regulatory function in their downstream protein interactions. So these things may be a whole class of drug targets that we haven't screened for - and in fact may be a set of proteins that we're already hitting with some of our ligands, but with no idea that we're doing so. I doubt if anyone in drug discovery has ever bothered counterscreening against any of them, but it looks like that should change. Update: I stand corrected. See the comment thread for more.

This illustrates a few principles worth keeping in mind: first, that if something is under selection pressure, it surely has a function, even if you can't figure out how or why. (A corollary is that if some sequence doesn't seem to be under such constraints, it probably doesn't have much of a function at all, but as those links show, this is a contentious topic). Next, we should always keep in mind that we don't really know as much about cell biology as we think we do; there are lots of surprises and overlooked things waiting for us. And finally, any of those that appear to have (or retain) small-molecule binding sites are very much worth the attention of medicinal chemists, because so many other possible targets have nothing of the kind, and are a lot harder to deal with.

Comments (8) + TrackBacks (0) | Category: Biological News

April 22, 2013

Cancer: Back to N-of-One

Email This Entry

Posted by Derek

From Nature comes this news of an effort to go back to oncology clinical trials and look at the outliers: the people who actually showed great responses to otherwise failed drugs.

By all rights, Gerald Batist’s patient should have died nine years ago. Her pancreatic cancer failed to flinch in the face of the standard arsenal — surgery, radiation, chemotherapy — and Batist, an oncologist at McGill University in Montreal, Canada, estimated that she had one year to live. With treatment options dwindling, he enrolled her in a clinical trial of a hot new class of drugs called farnesyltransferase inhibitors. Animal tests had suggested that the drugs had the potential to defeat some of the deadliest cancers, and pharmaceutical firms were racing to be the first to bring such compounds to market.

But the drugs flopped in clinical trials. Companies abandoned the inhibitors — one of the biggest heartbreaks in cancer research over the past decade. For Batist’s patient, however, the drugs were anything but disappointing. Her tumours were resolved; now, a decade later, she remains cancer free. And Batist hopes that he may soon find out why.

That's a perfect example, because pancreatic cancer has a well-deserved reputation as one of the most intractable tumor types, and the farnesylation inhibitors were indeed a titanic bust after much anticipation. So that combination - a terrible prognosis and an ineffective class of compounds - shouldn't have led to anything, but it certainly seems to have in that case. If there was something odd about the combination of mutations in this patient that made her respond, could there be others that would as well? It looks as if that sort of thing could work:

Early n-of-1 successes have bolstered expectations. When David Solit, a cancer researcher also at Memorial Sloan-Kettering, encountered an exceptional responder in a failed clinical trial of the drug everolimus against bladder cancer, he decided to sequence her tumour. Among the 17,136 mutations his team found, two stood out — mutations in each of these genes had been shown to make cancer growth more dependent on the cellular pathway that everolimus shut down. A further search revealed one of these genes — called TSC1 — was mutated in about 8% of 109 patients in their sample, a finding that could resurrect the notion of using everolimus to treat bladder cancer, this time in a trial of patients with TSC1 mutations.

So we are indeed heading to that dissection of cancer into its component diseases, which are uncounted thousands of cellular phenotypes, all leading to unconstrained growth. It's going to be quite a slog through the sequencing jungle along the way, though, which is why I don't share the optimism of people like Andy von Eschenbach and others who talk about vast changes in cancer therapy being just about to happen. These n-of-1 studies, for example, will be of direct benefit to very few people, the ones who happen to have rare and odd tumor types (that looked like more common ones at first). But tracking these things down is still worthwhile, because eventually we'll want to have all these things tracked down. Every one of them. And that's going to take quite a while, which means we'd better get started on the ones that we know how to do.

And even then, there's going to be an even tougher challenge: the apparently common situation of multiple tumor cell types in what looks (without sequencing) like a single cancer. How to deal with these, in what order, and in what combinations - now that'll be hard. But not impossible, and "not impossible" is enough to go on. Like Francis Bacon's "New Atlantis", what we have before us is the task of understanding ". . .the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire, to the effecting of all things possible". Just don't put a deadline on it!

Comments (12) + TrackBacks (0) | Category: Cancer | Clinical Trials

Real Reactions, From Real Lab Notebooks

Email This Entry

Posted by Derek

Over at NextMove software, they have an analysis of what kinds of reactions are being run most often inside a large drug company. Using the company's electronic notebook database and their own software, they can get a real-world picture of what people spend their time on at the bench.

The number one reaction is Buchwald-Hartwig amination. And that seems reasonable to me; I sure see a lot of those being run myself. The number two reaction is reduction of nitro groups to amines, which surprises me a bit. There certainly are quite a few of those - the fellow just down the bench from me was cursing at one just the other day - but I wouldn't have pegged it as number two overall. Number three was the good old Williamson ether synthesis, and only then do we get to the reaction that I would have thought would beat out either of these, N-acylation. After that comes sulfonamide formation, and that one is also a bit of a surprise. Not that there aren't a lot of sulfonamides around, far from it, but I was under the impression that a lot of organizations gave them the semi-official fish-eye, due to higher-than-average rates of trouble (PK and so on) down the line.

My first thought was that there might have been some big and/or recent projects that skewed the numbers around a bit. These sorts of data sets are always going to be lumpy, in the same way that compound collections tend to be (and for the same reasons). The majority of compounds (and reactions) pile up when a great big series of active compounds comes along with Structure X made via Reaction Scheme Y. But that, in a way, is the point: different organizations might have a slightly different rank-ordering, but it seems a safe bet that the same eight or ten reactions would always make up most of the list. (My candidate for number 6, the next one down on the above list: Suzuki coupling).

There's also a pie chart of the general reaction types that are run most often. The biggest category is heteroatom alkylation and arylation, followed by acylation in general. By the time you've covered those two, you've got half the reactions in the database. Next up are C-C bond formations (there are those Suzukis, I'll bet) and reductions. (Interestingly, oxidations are much further down the list). That same trend was noted in an earlier analysis of this sort, and nitro-to-amine reactions were thought to be the main reason for it, as seems to be the case here. There's at least one more study of this sort that I'm aware of, and it came to similar conclusions.
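Once the reactions have been classified, the tally behind a ranking like this is straightforward. Here's a toy sketch of that step; the record list is invented for illustration, and the hard part in real ELN mining (the reaction-classification software itself) is assumed away entirely:

```python
# A minimal sketch of tallying named reaction types across
# electronic-notebook records. The records here are invented examples;
# a real analysis would first classify raw reactions by software.
from collections import Counter

eln_records = [
    "Buchwald-Hartwig amination", "nitro reduction", "Williamson ether",
    "N-acylation", "Buchwald-Hartwig amination", "sulfonamide formation",
    "Suzuki coupling", "nitro reduction", "Buchwald-Hartwig amination",
]

counts = Counter(eln_records)
total = sum(counts.values())
for reaction, n in counts.most_common():
    print(f"{reaction}: {n} ({100 * n / total:.0f}%)")
```

The same `Counter` output feeds a rank-ordered list or a pie chart directly, which is presumably all the presentation layer in such an analysis amounts to.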

One of the things that might occur to an academic chemist looking over these data is that none of these are exactly the most exciting reactions in the world. That's true, and that's the point. We don't want exciting chemistry, because "exciting" means that it has a significant chance of not working. Our reactions are dull as the proverbial ditchwater (and often about the same color), because the excitement of not knowing whether something is going to pan out or not is deferred a bit down the line. Just getting the primary assay data back on the compounds you just made is often an exercise in finger-crossing. Then waiting to see if your lead compound made it through two-week tox, now that's exciting. Or the first bit of Phase I PK data, when the drug candidate goes into a person's mouth for the first time. Or, even more, the initial Phase II numbers, when you find out if it might actually do something for somebody who's sick. Now those have all the excitement that you could want, and often quite a bit more. With that sort of unavoidable background, the chemistry needs to be as steady and reliable as it can get.

Comments (21) + TrackBacks (0) | Category: Life in the Drug Labs