Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

August 19, 2014

Fluorinated Fingerprinting


Posted by Derek

How many ways do we have to differentiate samples of closely related compounds? There's NMR, of course, and mass spec. But what if two compounds have the same mass, or have unrevealing NMR spectra? Here's a new paper in JACS that proposes another method entirely.

Well, maybe not entirely, because it still relies on NMR. But this one is taking advantage of the sensitivity of 19F NMR shifts to molecular interactions (the same thing that underlies its use as a fragment-screening technique). The authors (Timothy Swager and co-workers at MIT) have prepared several calixarene host molecules which can complex a variety of small organic guests. The host structures feature nonequivalent fluorinated groups, and when another molecule binds, the 19F NMR peaks shift around compared to the unoccupied state. (Shown are a set of their test analytes, plotted by the change in three different 19F shifts).

That's a pretty ingenious idea - anyone who's done 19F NMR work will hear about the concept and immediately say "Oh yeah - that would work, wouldn't it?" But no one else seems to have thought of it. Spectra of their various host molecules show that chemically very similar molecules can be immediately differentiated (such as acetonitrile versus propionitrile), and structural isomers of the same mass are also instantly distinguished. Mixtures of several compounds can also be assigned component by component.
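
Just to make the matching step concrete, here's a minimal sketch of how a three-shift 19F fingerprint could be compared against a library of knowns. The shift values below are invented for illustration, not numbers from the paper:

    import math

    # Hypothetical 19F shift changes (ppm) for three nonequivalent
    # fluorinated reporters on the host, one fingerprint per analyte.
    # These numbers are made up for illustration.
    REFERENCE_FINGERPRINTS = {
        "acetonitrile":  (-0.42, 0.15, 0.08),
        "propionitrile": (-0.31, 0.22, 0.11),
        "benzonitrile":  (-0.55, 0.41, -0.03),
    }

    def identify(unknown):
        """Return the library analyte whose fingerprint is closest
        (Euclidean distance in ppm-space) to the unknown's shifts."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(REFERENCE_FINGERPRINTS,
                   key=lambda name: dist(REFERENCE_FINGERPRINTS[name], unknown))

    print(identify((-0.33, 0.20, 0.10)))  # -> propionitrile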

This paper concentrates on nitriles, which all seem to bind in a similar way inside the host molecules. That means that solvents like acetone and ethyl acetate don't interfere at all, but it also means that these particular hosts are far from universal sensors. But no one should expect them to be. The same 19F shift idea can be applied across all sorts of structures. You could imagine working up a "pesticide analysis suite" or a "chemical warfare precursor suite" of well-chosen host structures, sold together as a detection kit.

This idea is going to be competing with LC/MS techniques. Those, when they're up and running, clearly provide more information about a given mixture, but good reproducible methods can take a fair amount of work up front. This method seems to me to be more of a competition for something like ELISA assays, answering questions like "Is there any of compound X in this sample?" or "Here's a sample contaminated with an unknown member of Compound Class Y. Which one is it?" The disadvantage there is that an ELISA doesn't need an NMR (with a fluorine probe) handy.

But it'll be worth seeing what can be made of it. I wonder if there could be host molecules that are particularly good at sensing/complexing particular key functional groups, the way that the current set picks up nitriles? How far into macromolecular/biomolecular space can this idea be extended? If it can be implemented in areas where traditional NMR and LC/MS have problems, it could find plenty of use.

Comments (9) + TrackBacks (0) | Category: Analytical Chemistry

July 18, 2014

Thalidomide, Bound to Its Target


Posted by Derek

There's a new report in the literature on the mechanism of thalidomide, so I thought I'd spend some time talking about the compound. Just mentioning the name to anyone familiar with its history is enough to bring on a shiver. The compound, administered as a sedative/morning sickness remedy to pregnant women in the 1950s and early 1960s, famously brought on a wave of severe birth defects. There's a lot of confusion about this event in the popular literature, though - some people don't even realize that the drug was never approved in the US, thanks to a famous save by the (then much smaller) FDA and especially by Frances Oldham Kelsey. And even those who know a good amount about the case can be tripped up by the toxicology, because it really is confusing: no phenotype in rats, but big reproductive tox trouble in mice and rabbits (and humans, of course). And as I mentioned here, the compound is often used as an example of the far different effects of different enantiomers. But practically speaking, that's not the case: thalidomide has a very easily racemized chiral center, which gets scrambled in vivo. It doesn't matter if you take the racemate or a pure enantiomer; you're going to get both of the isomers once it's in circulation.

The compound's horrific effects led to a great deal of research on its mechanism. Along the way, thalidomide itself was found to be useful in the treatment of leprosy, and in recent years it's been approved for use in multiple myeloma and other cancers. (This led to an unusual lawsuit claiming credit for the idea). It's a potent anti-angiogenic compound, among other things, although the precise mechanism is still a matter for debate - in vivo, the compound has effects on a number of wide-ranging growth factors (and these were long thought to be the mechanism underlying its effects on embryos). Those embryonic effects complicate the drug's use immensely - Celgene, who got it through trials and approval for myeloma, have to keep a very tight patient registry, among other things, and control its distribution carefully. Experience has shown that turning thalidomide loose will always end up with someone (i.e. a pregnant woman) getting exposed to it who shouldn't be - it's gotten to the point that the WHO no longer recommends it for use in leprosy treatment, despite clear evidence of benefit, and that's down to just those problems of distribution and control.

But in 2010, it was reported that the drug binds to a protein called cereblon (CRBN), and this mechanism implicated the ubiquitin ligase system in the embryonic effects. That's an interesting and important pathway - ubiquitin is, as the name implies, ubiquitous, and addition of a string of ubiquitins to a protein is a universal disposal tag in cells: off to the proteasome, to be torn to bits. It gets stuck onto exposed lysine residues by the aforementioned ligase enzyme.

But less-thorough ubiquitination is part of other pathways. Other proteins can have ubiquitin recognition domains, so there are signaling events going on. Even poly-ubiquitin chains can be part of non-disposal processes - the usual oligomers are built up using a particular lysine residue on each ubiquitin in the chain, but there are other lysine possibilities, and these branch off into different functions. It's a mess, frankly, but it's an important mess, and it's been the subject of a lot of work over the years in both academia and industry.

The new paper has the crystal structure of thalidomide (and two of its analogs) bound to the ubiquitin ligase complex. It looks like they keep one set of protein-protein interactions from occurring while the ligase end of things is going after other transcription factors to tag them for degradation. Ubiquitination of various proteins could be either up- or downregulated by this route. Interestingly, the binding is indeed enantioselective, which suggests that the teratogenic effects may well be down to the (S) enantiomer, not that there's any way to test this in vivo (as mentioned above). But the effects of these compounds in myeloma appear to go through the cereblon pathway as well, so there's never going to be a thalidomide-like drug without reproductive tox. If you could take it a notch down the pathway and go for the relevant transcription factors instead, post-cereblon, you might have something, but selective targeting of transcription factors is a hard row to hoe.

Comments (9) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Cancer | Chemical News | Toxicology

July 8, 2014

An Alzheimer's Blood Test? Not So Fast.


Posted by Derek

There are all sorts of headlines today about how there's going to be a simple blood test for Alzheimer's soon. Don't believe them.

This all comes from a recent publication in the journal Alzheimer's and Dementia, from a team at King's College (London) and the company Proteome Sciences. It's a perfectly good paper, and it does what you'd think: they quantified a set of proteins in a cohort of potential Alzheimer's patients and checked to see if any of them were associated with progression of the disease. From 26 initial protein candidates (all of them previously implicated in Alzheimer's), they found that a panel of ten seemed to give a prediction that was about 87% accurate.

That figure was enough for a lot of major news outlets, who have run with headlines like "Blood test breakthrough" and "Blood test can predict Alzheimer's". Better ones said something more like "Closer to blood test" or "Progress towards blood test", but that's not so exciting and clickable, is it? This paper may well represent progress towards a blood test, but as its own authors, to their credit, are at pains to say, a lot more work needs to be done. 87%, for starters, is interesting, but not as good as it needs to be - that's still a lot of false negatives, and who knows how many false positives.

That all depends on what the rate of Alzheimer's is in the population you're screening. As Andy Extance pointed out on Twitter, these sorts of calculations are misunderstood by almost everyone, even by people who should know better. A 90 per cent accurate test on a general population whose Alzheimer's incidence rate is 1% would, in fact, be wrong 92% of the time. Here's a more detailed writeup I did in 2007, spurred by reports of a similar Alzheimer's diagnostic back then. And if you have a vague feeling that you heard about all these issues (and another blood test) just a few months ago, you're right.
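
For the curious, the base-rate arithmetic behind that 92% figure works out as below, assuming "90 per cent accurate" means 90% sensitivity and 90% specificity:

    # Screen a general population at 1% prevalence with a "90% accurate" test.
    prevalence, sensitivity, specificity = 0.01, 0.90, 0.90

    true_pos = prevalence * sensitivity               # 0.009 -> 9 per 1000
    false_pos = (1 - prevalence) * (1 - specificity)  # 0.099 -> 99 per 1000

    ppv = true_pos / (true_pos + false_pos)           # ~0.083
    print(f"positives that are real: {ppv:.1%}")            # 8.3%
    print(f"positives that are false alarms: {1 - ppv:.1%}")  # 91.7%

Out of every thousand people screened, roughly 108 test positive, and only 9 of them actually have the disease.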

Even after that statistical problem, things are not as simple as the headlines would have you believe. This new work is a multivariate model, because a number of factors were found to affect the levels of these proteins. The age and gender of the patient were two real covariates, as you'd expect, but the duration of plasma storage before testing also had an effect, as did, apparently, the center where the collection was done. That does not sound like a test that's ready to be rolled out to every doctor's office (which is again what the authors have been saying themselves). There were also different groups of proteins that could be used for a prediction model in the set of Mild Cognitive Impairment (MCI) patients, versus the ones that already appeared to show real Alzheimer's signs, which also tells you that this is not a simple turn-the-dial-on-the-disease setup. Interestingly, they also looked at whether adding brain imaging data (such as hippocampus volume) helped the prediction model. It either had no real effect on the prediction accuracy or even reduced it somewhat.

So the thing to do here is to run this on larger patient cohorts to get a more real-world idea of what the false negative and false positive rates are, which is the sort of obvious suggestion that is appearing in about the sixth or seventh paragraph of the popular press writeups. This is just what the authors are planning, naturally - they're not the ones who wrote the newspaper stories, after all. This same collaboration has been working on this problem for years now, I should add, and they've had ample opportunity to see their hopes not quite pan out. Here, for example, is a prediction of an Alzheimer's blood test entering the clinic in "12 to 18 months", from . . .well, 2009.

Update: here's a critique of the statistical approaches used in this paper - are there more problems with it than were first apparent?

Comments (30) + TrackBacks (0) | Category: Alzheimer's Disease | Analytical Chemistry | Biological News

July 7, 2014

Catalyst Voodoo, Yielding to Spectroscopy?


Posted by Derek

Catalysts are absolutely vital to almost every field of chemistry. And catalysis, way too often, is voodoo or a close approximation thereof. A lot of progress has been made over the years, and in some systems we have a fairly good idea of what the important factors are. But even in the comparatively well-worked-out areas one finds surprises and hard-to-explain patterns of reactivity, and when it comes to optimizing turnover, stability, side reactions, and substrate scope, there's really no substitute for good old empirical experimentation most of the time.

The heterogeneous catalysts are especially sorcerous, because the reactions usually take place on a poorly characterized particle surface. Nanoscale effects (and even downright quantum mechanical effects) can be important, but these things are not at all easy to get a handle on. Think of the differences between a lump of, say, iron and small particles of the same. The surface area involved (and the surface/volume ratio) is extremely different, just for starters. And when you get down to very small particles (or bits of a rough surface), you find very different behaviors, because these things are no longer a bulk material. Each atom becomes important, and can perhaps behave differently.
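
A back-of-the-envelope sketch of that size effect: model the particle as a sphere and ask what fraction of its atoms sit within one atomic layer of the surface. The atomic diameter below is an assumed round number, and the output is order-of-magnitude only:

    ATOM_D = 0.25  # nm, assumed atomic diameter (roughly iron-sized)

    def surface_fraction(particle_d_nm):
        """Fraction of atoms within one atomic layer of the surface,
        for an idealized spherical particle."""
        r, shell = particle_d_nm / 2, ATOM_D
        if particle_d_nm <= 2 * shell:
            return 1.0  # the smallest particles are all surface
        return 1 - ((r - shell) ** 3) / r ** 3  # volumes scale as r^3

    for d in (2, 10, 100, 10_000_000):  # from 2 nm up to a 1 cm lump
        print(f"{d:>12,} nm: {surface_fraction(d):.2%} surface atoms")

A 2 nm particle is more than half surface atoms; a visible lump is effectively zero.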

Now imagine dealing with a heterogeneous catalyst that's not a single pure substance, but is perhaps an alloy of two or more metals, or is some metal complex that itself is adsorbed onto the surface of another finely divided solid, or needs small amounts of some other additive to perform well, etc. It's no mystery why so much time and effort goes into finding good catalysts, because there's plenty of mystery built into them already.

Here's a new short review article in Angewandte Chemie on some of the current attempts to lift some of the veils. A paper earlier this year in Science illustrated a new way of characterizing surfaces with X-ray diffraction, and at short time scales (seconds) for such a technique. Another recent report in Nature Communications describes a new X-ray tomography system to try to characterize catalyst particles.

None of these are easy techniques, and at the moment they require substantial computing power, very close attention to sample preparation, and (in many cases) the brightest X-ray synchrotron sources you can round up. But they're providing information that no one has ever had before about (in these examples) palladium surfaces and nanoparticle characteristics, with more on the way.

Comments (2) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News

May 22, 2014

A Horrible, Expensive, and Completely Avoidable Drug Development Mixup


Posted by Derek

C&E News has a story today that is every medicinal chemist's nightmare. We are paid to find and characterize chemical matter, and to develop it (by modifying structures and synthesizing analogs) into something that can be a drug. Key to that whole process is knowing what structure you have in the first place, and now my fellow chemists will see where this is going and begin to cringe.

Shown at left are two rather similar isomeric structures. The top one was characterized at Penn State a few years ago by Wafik El-Deiry's lab as a stimulator of the TRAIL pathway, which could be a useful property against some tumor types (especially glioblastoma). (Article from Nature News here). Their patent, US8673923, was licensed to Oncoceutics, a company formed by El-Deiry, and the compound (now called ONC201) was prepared for clinical trials.

Meanwhile, Kim Janda at Scripps was also interested in TRAIL compounds, and his group resynthesized TIC10. But their freshly prepared material was totally inactive - and let me tell you, this sort of thing happens all too often. The usual story is that the original "hit" wasn't clean, and that its activity was due to metal contamination or colorful gunk, but that wasn't the case here. Janda requested a sample of TIC10 from the National Cancer Institute, and found that (1) it worked in the assays, and (2) it was clean. That discrepancy was resolved when careful characterization, including X-ray crystallography, showed that (3) the original structure had been misassigned.

It's certainly an honest mistake. Organic chemists will look at those two structures and realize that they're both equally plausible, and that you could end up with either one depending on the synthetic route (it's a question of which of two nitrogens gets alkylated first, and with what). It's also clear that telling one from the other is not trivial. They will, of course, have the same molecular weight, and any mass spec differences will be subtle. The same goes for the NMR spectra - they're going to look very similar indeed, and a priori it could be very hard to have any confidence that you'd assigned the right spectrum to the right structure. Janda's lab saw some worrisome correlation patterns in the HMBC spectra, but X-ray was the way to go, clearly - these two molecules have quite different shapes, and the electron density map would nail things down unambiguously.
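
To make the "same mass, similar spectra" problem concrete, here's a small sketch using RDKit (assumed available, built with InChI support). The two structures are a hypothetical pair of N-methylation isomers - 1- and 2-methylindazole, standing in for the actual compounds - with identical formulas and exact masses; only a structure-aware identifier separates them:

    from rdkit import Chem
    from rdkit.Chem import Descriptors, rdMolDescriptors

    # Hypothetical stand-ins, NOT the actual TIC10 structures:
    isomer_a = Chem.MolFromSmiles("Cn1ncc2ccccc21")   # N1-methylated
    isomer_b = Chem.MolFromSmiles("Cn1cc2ccccc2n1")   # N2-methylated

    for name, mol in (("N1 isomer", isomer_a), ("N2 isomer", isomer_b)):
        print(name,
              rdMolDescriptors.CalcMolFormula(mol),  # same formula: C8H8N2
              f"{Descriptors.ExactMolWt(mol):.4f}",  # same exact mass
              Chem.MolToInchiKey(mol))               # different InChIKeys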

To confuse everyone even more, the Angewandte Chemie paper from Janda's group reports that a commercial supplier (MedKoo Biosciences) has begun offering what it claims is TIC10, but their compound is yet a third isomer, which has no TRAIL activity, either. (It's the "linear" isomer from the patent, but with the 2-methylbenzyl on the nitrogen in the five-membered ring instead.)

So Janda's group had found that the published structure was completely dead, and that the newly assigned structure was the real active compound. They then licensed that structure to Sorrento Therapeutics, who are. . .interested in taking it towards clinical trials. Oh boy. This is the clearest example of a blown med-chem structural assignment that I think I've ever seen, and it will be grimly entertaining to see what happens next.

When you go back and look at the El-Deiry/Oncoceutics patent, you find that its claim structure is pretty unambiguous. TIC10 was a known compound, in the NCI collection, so the patent doesn't claim it as chemical matter. Claim 1, accordingly, is written as a method-of-treatment:

"A method of treatment of a subject having brain cancer, comprising: administering to the subject a pharmaceutical composition comprising a pharmaceutically effective amount of a compound of Formula (I) or a pharmaceutically acceptable salt thereof; and a pharma­ceutically accepted carrier."

And it's illustrated by that top structure shown above - the incorrect one. That is the only chemical structure that appears in the patent, and it does so again and again. All the other claims are written dependent on Claim 1, for treatment of different varieties of tumors, etc. So I don't see any way around it: the El-Deiry patent unambiguously claims the use of one particular compound, and it's the wrong compound. In fact, if you wanted to go to the trouble, you could probably invalidate the whole thing, because it can be shown (and has been) that the chemical structure in Claim 1 does not produce any of the data used to back up the claims. It isn't active at all.

And that makes this statement from the C&E News article a bit hard to comprehend: "Lee Schalop, Oncoceutics’ chief business officer, tells C&EN that the chemical structure is not relevant to Oncoceutics’ underlying invention. Plans for the clinical trials of TIC10 are moving forward." I don't see how. A quick look through the patent databases does not show me anything else that Oncoceutics could have that would mitigate this problem, although I'd be glad to be corrected on this point. Their key patent, or what looks like it to me, has been blown up. What do they own? Anything? But that said, it's not clear what Sorrento owns, either. The C&E News article quotes two disinterested patent attorneys as saying that Sorrento's position isn't very clear, although the company says that its claims have been written with these problems in mind. Could, for example, identifying the active form have been within the abilities of someone skilled in the art? That application doesn't seem to have published yet, so we'll see what they have at that point.

But let's wind up by emphasizing that "skilled in the art" point. As a chemist, you'd expect me to say this, but this whole problem was caused by a lack of input from a skilled medicinal chemist. El-Deiry's lab has plenty of expertise in cancer biology, but when it comes to chemistry, it looks like they just took what was on the label and ran with it. You never do that, though. You never, ever, advance a compound as a serious candidate without at least resynthesizing it, and you never patent a compound without making sure that you're patenting the right thing. What's more, the Oncoceutics patent estate in this area, unless I'm missing some applications that haven't published yet, looks very, very thin.

One compound? You find one compound that works and you figure that it's time to form a company and take it into clinical trials, because one compound equals one drug? I was very surprised, when I saw the patent, that there was no Markush structure and no mention of any analogs whatsoever. No medicinal chemist would look at a single hit out of the NCI collection and say "Well, we're done - let's patent that one single compound and go cure glioblastoma". And no competent medicinal chemist would look at that one hit and say "Yep, LC/MS matches what's on the label - time to declare it our development candidate". There was (to my eyes) a painfully inadequate chemistry follow-through on TIC10, and the price for that is now being paid. Big time.

Comments (31) + TrackBacks (0) | Category: Analytical Chemistry | Cancer | Patents and IP

May 20, 2014

Xenon's Use as a Sports Drug Is Banned


Posted by Derek

Just a couple of months ago, I wrote about how xenon has been used as a performance-enhancing drug. Well, now it's banned. But I'd guess that they're going to have to look for its downstream effects, because detecting xenon itself, particularly a good while after exposure, is going to be a tall order. . .

Comments (13) + TrackBacks (0) | Category: Analytical Chemistry

March 12, 2014

A New NMR Probe Technology in the Making?


Posted by Derek

This paper is outside of my usual reading range, but when I saw the title, the first thing that struck me was "NMR probes". The authors describe a very sensitive way to convert weak radio/microwave signals to an optical readout, with very low noise. And looking over the paper, that's one of the applications they suggest as well, so that's about as far into physics as I'll get today. But the idea looks quite interesting, and if it means that you can get higher sensitivity without having to use cryoprobes and other expensive gear, then speed the day.

Comments (5) + TrackBacks (0) | Category: Analytical Chemistry

November 20, 2013

A New Way to Get Protein Crystal Structures


Posted by Derek

There's a report of a new technique to solve protein crystal structures on a much smaller scale than anyone's done before. Here's the paper: the team at the Howard Hughes Medical Institute has used cryo-electron microscopy to do electron diffraction on microcrystals of lysozyme protein.

We present a method, ‘MicroED’, for structure determination by electron crystallography. It should be widely applicable to both soluble and membrane proteins as long as small, well-ordered crystals can be obtained. We have shown that diffraction data at atomic resolution can be collected and a structure determined from crystals that are up to 6 orders of magnitude smaller in volume than those typically used for X-ray crystallography.

For difficult targets such as membrane proteins and multi-protein complexes, screening often produces microcrystals that require a great deal of optimization before reaching the size required for X-ray crystallography. Sometimes such size optimization becomes an impassable barrier. Electron diffraction of microcrystals as described here offers an alternative, allowing this roadblock to be bypassed and data to be collected directly from the initial crystallization hits.

X-ray diffraction is, of course, the usual way to determine crystal structures. Electrons can do the same thing for you, but practically speaking, that's been hard to realize in a general sense. Protein crystals don't stand up very well to electron beams, particularly if you crank up the intensity in order to see lots of diffraction spots. Electrons interact strongly with atoms, which is nice, because you don't need as big a sample to get diffraction, but they interact so strongly that things start falling apart pretty quickly. You can collect more data by zapping more crystals, but the problem is that you don't know how these things are oriented relative to each other. That leaves you with a pile of jigsaw-puzzle diffraction data and no easy way to fit it together. So the most common application for protein electron crystallography has been for samples that crystallize in a thin film or monolayer - that way, you can continue collecting diffraction data while being a bit more sure that everything is facing in the same direction.
In this new technique, the intensity of the electron beam is turned down greatly, and the crystal itself is precisely rotated through 90 one-degree increments. The team developed methods to handle the data and combine it into a useful set, and was able to get 2.9-angstrom resolution on lysozyme crystals that are (as described above) far smaller than the usual standard for X-ray work. There's been a lot of work over the years to figure out how low you can set the electron intensity and still get useful data in such experiments, and this work started off by figuring out how much total radiation the crystals could stand and dividing that out into portions.
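
The dose-budgeting arithmetic itself is simple; here's a sketch with an assumed total dose (not the paper's actual number):

    # Divide a total tolerable electron dose across the whole tilt series.
    total_dose = 9.0   # e-/A^2 before the crystal degrades (assumed value)
    tilt_steps = 90    # one-degree rotation increments, as described above

    dose_per_frame = total_dose / tilt_steps
    print(f"{dose_per_frame:.2f} e-/A^2 per one-degree frame")  # 0.10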

The paper, commendably, has a long section detailing how they tried to check for bias in their structure models, and the data seem pretty solid, for what that's worth coming from a non-crystallographer like me. This is still a work in progress, though - lysozyme is about the easiest example possible, for one thing. The authors describe some of the improvements in data collection and handling that would help make this a regular structural biology tool, and I hope it becomes one. There's a lot of promise here - being able to pull structures out of tiny "useless" protein crystals would be a real advance.

Comments (2) + TrackBacks (0) | Category: Analytical Chemistry

September 16, 2013

Crystallography Without Crystallizing: An Update


Posted by Derek

I wrote here about a very promising X-ray crystallography technique which produces structures of molecules that don't even have to be crystalline. Soaking a test substance into a metal-organic-framework (MOF) lattice gave enough repeating order that X-ray diffraction was possible.

The most startling part of the paper, other than the concept itself, was the determination of the structure of the natural product miyakosyne A. That one's not crystalline, and will never be crystalline, but the authors not only got the structure, but were able to assign its absolute stereochemistry. (The crystalline lattice is full of heavy atoms, giving you a chance for anomalous dispersion).

Unfortunately, though, this last part has now been withdrawn. A correction at Nature (as of last week) says that "previously unnoticed ambiguities" in the data, including "non-negligible disorder" in the molecular structure have led to the configuration being wrongly assigned. They say that their further work has demonstrated that they can determine the chemical structure of the compound, but cannot assign its stereochemistry.

The other structures in the paper have not been called into question. And here's where I'd like to throw things open for discussion. This paper has been the subject of a great deal of interest since it came out, and I know of several groups that have been looking into it. It is my understanding that the small molecule structures in the Nature paper can indeed be reproduced. But. . .here we move into unexplored territory. Because if you look at that paper, you'll note that none of the structures feature basic amines or nitrogen heterocycles, just to pick two common classes of compounds that are of great interest to medicinal chemists and natural products chemists alike. And I have yet to hear of anyone getting this MOF technique to work with any such structures, although I am aware of numerous attempts to do so.

So far, then, the impression I have is that this method is certainly not as general as one might have hoped. I would very much enjoy being wrong about this, because it has great potential. It may be that other MOF structures will prove more versatile, and there are certainly a huge number of possibilities to investigate. But I think that the current method needs a lot more work to extend its usefulness. Anyone with experiences in this area that they would like to share, please add them in the comments.

Comments (58) + TrackBacks (0) | Category: Analytical Chemistry

September 12, 2013

Ligands From Nothing


Posted by Derek

Well, nearly nothing. That's the promise of a technique that's been published by the Ernst lab from the University of Basel. They first wrote about this in 2010, in a paper looking for ligands to the myelin-associated glycoprotein (MAG). That doesn't sound much like a traditional drug target, and so it isn't. It's part of a group of immunoglobulin-like lectins, and they bind things like sialic acids and gangliosides, and they don't seem to bind them very tightly, either.

One of these sialic acids was used as their starting point, even though its affinity is only 137 micromolar. They took this structure and hung a spin label off it, with a short chain spacer. The NMR-savvy among you will already see an application of Wolfgang Jahnke's spin-label screening idea (SLAPSTIC) coming. That's based on the effect of an unpaired electron in NMR spectra - it messes with the relaxation time of protons in the vicinity, and this can be used to detect whatever might be nearby. With the right pulse sequence, you can easily detect any protons on any other molecules or residues out to about 15 or 20 Angstroms from the spin label.
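
That 15-to-20 Angstrom window comes from the steep distance dependence of the effect: paramagnetic relaxation enhancement falls off as 1/r^6. A quick numerical sketch (the prefactor is arbitrary; only the relative values matter):

    def relative_pre(r_angstrom, r_ref=5.0):
        """Relaxation enhancement at distance r, relative to a proton
        sitting r_ref Angstroms from the unpaired electron."""
        return (r_ref / r_angstrom) ** 6

    for r in (5, 10, 15, 20, 25):
        print(f"{r:>2} A: {relative_pre(r):.1e} of the reference broadening")

By 20-25 Angstroms the broadening is down by about four orders of magnitude, which is why the experiment cleanly maps out a "nearby" shell around the label.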

Jahnke's group at Novartis attached spin labels to proteins and used these to find ligands by NMR screening. The NMR field has a traditional bias towards bizarre acronyms, which sometimes calls for ignoring a word or two, so SLAPSTIC stands for "Spin Labels Attached to Protein Side chains as a Tool to identify Interacting Compounds". Ernst's team took their cue from yet another NMR ligand-screening idea, the Abbott "SAR by NMR" scheme. That one burst on the scene in 1996, and caused a lot of stir at the time. The idea was that you could use NMR of labeled proteins, with knowledge of their structure, to find sets of ligands at multiple binding sites, then chemically stitch these together to make a much more potent inhibitor. (This was fragment-based drug discovery before anyone was using that phrase).

The theory behind this idea is perfectly sound. It's the practice that turned out to be the hard part. While fragment linking examples have certainly appeared (including Abbott examples), the straight SAR-by-NMR technique has apparently had a very low success rate, despite (I'm told by veterans of other companies) a good deal of time, money, and effort in the late 1990s. Getting NMR-friendly proteins whose structure was worked out, finding multiple ligands at multiple sites, and (especially) getting these fragments linked together productively has not been easy at all.
But Ernst's group has brought the idea back. They did a second-site NMR screen with a library of fragments and their spin-labeled sialic thingie, and found that 5-nitroindole was bound nearby, with the 3-position pointed towards the label. That's an advantage of this idea - you get spatial and structural information without having to label the protein itself, and without having to know anything about its structure. SPR experiments showed that the nitroindole alone had affinity up in the millimolar range.

They then did something that warmed my heart. They linked the fragments by attaching a range of acetylene and azide-containing chains to the appropriate ends of the two molecules and ran a Sharpless-style in situ click reaction. I've always loved that technique, partly because it's also structure-agnostic. In this case, they did a 3x4 mixture of coupling partners, potentially forming 24 triazoles (syn and anti). After three days of incubation with the protein, a new peak showed up in the LC/MS corresponding to a particular combination. They synthesized both possible candidates, and one of them was 2 micromolar, while the other was 190 nanomolar.
That molecule is shown here - the percentages in the figure are magnetization transfer in STD experiments, with the N-acetyl set to 100% as reference. And that tells you that both ends of the molecule are indeed participating in the binding, as that greatly increased affinity would indicate. (Note that the triazole appears to be getting into the act, too). That affinity is worth thinking about - one part of this molecule was over 100 micromolar, and the other was millimolar, but the combination is 190 nanomolar. That sort of effect is why people keep coming back to fragment linking, even though it's been a brutal thing to get to work.

When I read this paper at the time, I thought that it was very nice, and I filed it in my "Break Glass in Case of Emergency" section for interesting and unusual screening techniques. One thing that worried me, as usual, was whether this was the only system this had ever worked on, or ever would. So I was quite happy to see a new paper from the Ernst group this summer, in which they did it again. This time, they found a ligand for E-selectin, another one of these things that you don't expect to ever find a decent small molecule for.

In this case, it's still not what an organic chemist would be likely to call a "decent small molecule", because they started with something akin to sialyl Lewis X, which is already a funky tetrasaccharide. Their trisaccharide derivative had roughly 1 micromolar affinity, with the spin label attached. A fragment screen against E-selectin had already identified several candidates that seemed to bind to the protein, and the best guess was that they probably wouldn't be binding in the carbohydrate recognition region. Doing the second-site screen as before gave them, as fate would have it, 5-nitroindole as the best candidate. (Now my worry is that this technique only works when you run it with 5-nitroindole. . .)

They worked out the relative geometry of binding from the NMR experiments, and set about synthesizing various azide/acetylene combinations. In this case, the in situ Sharpless-style click reactions did not give any measurable products, perhaps because the wide, flat binding site wasn't able to act as much of a catalyst to bring the two compounds together. Making a library of triazoles via the copper-catalyzed route and testing those, though, gave several compounds with affinities between 20x and 50x greater than the starting structure, and with dramatically slower off-rates.

They did try to get rid of the nitro group, recognizing that it's only an invitation to trouble. But the few modifications they tried really lowered the affinity, which tells you that the nitro itself was probably an important component of the second-site binding. That, to me, is argument enough to consider not having those things in your screening collection to start with. It all depends on what you're hoping for - if you just want a ligand to use as a biophysical tool compound, then nitro on, if you so desire. But it's hard to stop there. If it's a good hit, people will want to put it into cells, into animals, into who knows what, and then the heartache will start. If you're thinking about these kinds of assays, you might well be better off not knowing about some functionality that has a very high chance of wasting your time later on. (More on this issue here, here, here, and here. Update: here's more on trying to get rid of nitro groups.)

This work, though, is the sort of thing I could read about all day. I'm very interested in ways to produce potent compounds from weak binders, ways to attack difficult low-hit-rate targets, in situ compound formation, and fragment-based methods, so these papers push several of my buttons simultaneously. And who knows, maybe I'll have a chance to do something like this all day at some point. It looks like work well worth taking seriously.

Comments (20) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News | Drug Assays

August 16, 2013

An HIV Structure Breakthrough? Or "Complete Rubbish"?


Posted by Derek

Structural biology needs no introduction for people doing drug discovery. This wasn't always so. Drugs were discovered back in the days when people used to argue about whether those "receptor" thingies were real objects (as opposed to useful conceptual shorthand), and before anyone had any idea of what an enzyme's active site might look like. And even today, there are targets, and whole classes of targets, for which we can't get enough structural information to help us out much.

But when you can get it, structure can be a wonderful thing. X-ray crystallography of proteins and protein-ligand complexes has revealed so much useful information that it's hard to know where to start. It's not a magic wand - you can't look at an empty binding site and just design something right at your desk that'll be a potent ligand right off the bat. And you can't look at a series of ligand-bound structures and say which one is the most potent, not in most situations, anyway. But you still learn things from X-ray structures that you could never have known otherwise.

It's not the only game in town, either. NMR structures are very useful, although the X-ray ones can be easier to get, especially in these days of automated synchrotron beamlines and powerful number-crunching. But what if your protein doesn't crystallize? And what if there are things happening in solution that you'd never pick up on from the crystallized form? You're not going to watch your protein rearrange into a new ligand-bound conformation with X-ray crystallography, that's for sure. No, even though NMR structures can be a pain to get, and have to be carefully interpreted, they'll also show you things you'd never have seen.

And there are more exotic methods. Earlier this summer, there was a startling report of a structure of the HIV surface proteins gp120 and gp41 obtained through cryogenic electron microscopy. This is a very important and very challenging field to work in. What you've got there is a membrane-bound protein-protein interaction, which is just the sort of thing that the other major structure-determination techniques can't handle well. At the same time, though, the number of important proteins involved in this sort of thing is almost beyond listing. Cryo-EM, since it observes the native proteins in their natural environment, without tags or stains, has a lot of potential, but it's been extremely hard to get the sort of resolution with it that's needed on such targets.

Joseph Sodroski's group at Harvard, longtime workers in this area, published their 6-angstrom-resolution structure of the protein complex in PNAS. But according to this new article in Science, the work has been an absolute lightning rod ever since it appeared. Many other structural biologists think that the paper is so flawed that it never should have seen print. No, I'm not exaggerating:

Several respected HIV/AIDS researchers are wowed by the work. But others—structural biologists in particular—assert that the paper is too good to be true and is more likely fantasy than fantastic. "That paper is complete rubbish," charges Richard Henderson, an electron microscopy pioneer at the MRC Laboratory of Molecular Biology in Cambridge, U.K. "It has no redeeming features whatsoever."

. . .Most of the structural biologists and HIV/AIDS researchers Science spoke with, including several reviewers, did not want to speak on the record because of their close relations with Sodroski or fear that they'd be seen as competitors griping—and some indeed are competitors. Two main criticisms emerged. Structural biologists are convinced that Sodroski's group, for technical reasons, could not have obtained a 6-Å resolution structure with the type of microscope they used. The second concern is even more disturbing: They solved the structure of a phantom molecule, not the trimer.

Cryo-EM is an art form. You have to freeze your samples in an aqueous system, but without making ice. The crystals of normal ice formation will do unsightly things to biological samples, on both the macro and micro levels, so you have to form "vitreous ice", a glassy amorphous form of frozen water, which is odd enough that until the 1980s many people considered it impossible. Once you've got your protein particles in this matrix, though, you can't just blast away at full power with your electron beam, because that will also tear things up. You have to take a huge number of runs at lower power, and analyze them through statistical techniques. The Sodroski HIV structure, for example, is the product of 670,000 single-particle images.

But its critics say that it's also the product of wishful thinking:

The essential problem, they contend, is that Sodroski and Mao "aligned" their trimers to lower-resolution images published before, aiming to refine what was known. This is a popular cryo-EM technique but requires convincing evidence that the particles are there in the first place and rigorous tests to ensure that any improvements are real and not the result of simply finding a spurious agreement with random noise. "They should have done lots of controls that they didn't do," (Sriram) Subramaniam asserts. In an oft-cited experiment that aligns 1000 computer-generated images of white noise to a picture of Albert Einstein sticking out his tongue, the resulting image still clearly shows the famous physicist. "You get a beautiful picture of Albert Einstein out of nothing," Henderson says. "That's exactly what Sodroski and Mao have done. They've taken a previously published structure and put atoms in and gone down into a hole." Sodroski and Mao declined to address specific criticisms about their studies.
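
That Einstein-from-noise effect is easy to reproduce in miniature. Here's a toy one-dimensional version of the experiment (my own sketch, not anyone's published code): align pure-noise traces to a template by their best cross-correlation shift, average them, and the template duly emerges from data that contains no signal at all:

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_images = 128, 2000

    template = np.zeros(n)
    template[40:60] = 1.0  # an arbitrary "structure" to search for

    aligned_sum = np.zeros(n)
    for _ in range(n_images):
        noise = rng.standard_normal(n)
        # circular cross-correlation against the template at every shift
        scores = [np.dot(np.roll(noise, s), template) for s in range(n)]
        aligned_sum += np.roll(noise, int(np.argmax(scores)))

    average = aligned_sum / n_images
    print(np.corrcoef(average, template)[0, 1])  # close to 1

The "reconstruction" correlates almost perfectly with the template even though no input trace contained any signal - which is exactly the model-bias trap the critics are describing.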

Well, they decline to answer them in response to a news item in Science. They've indicated a willingness to take on all comers in the peer-reviewed literature, but otherwise, in print, they're doing the we-stand-by-our-results-no-comment thing. Sodroski himself, with his level of experience in the field, seems ready to defend this paper vigorously, but there seem to be plenty of others willing to attack. We'll have to see how this plays out in the coming months - I'll update as things develop.

Comments (34) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | In Silico | Infectious Diseases

August 8, 2013

The 3D Fragment Consortium


Posted by Derek

Fragment-based screening comes up here fairly often (and if you're interested in the field, you should also have Practical Fragments on your reading list). One of the complaints both inside and outside the fragment world is that there are a lot of primary hits that fall into flat/aromatic chemical space (I know that those two don't overlap perfectly, but you know the sort of things I mean). The early fragment libraries were heavy in that sort of chemical matter, and the sort of collections you can buy still tend to be.

So people have talked about bringing in natural-product-like structures, and diversity-oriented-synthesis structures and other chemistries that make more three-dimensional systems. The commercial suppliers have been catching up with this trend, too, although some definitions of "three-dimensional" may not match yours. (Does a biphenyl derivative count, or is that what you're trying to get away from?)

The UK-based 3D Fragment Consortium has a paper out now in Drug Discovery Today that brings together a lot of references to work in this field. Even if you don't do fragment-based work, I think you'll find it interesting, because many of the same issues apply to larger molecules as well. How much return do you get for putting chiral centers into your molecules, on average? What about molecules with lots of saturated atoms that are still rather squashed and shapeless, versus ones full of aromatic carbons that carve out 3D space surprisingly well? Do different collections of these various molecular types really have differences in screening hit rates, and do these vary by the target class you're screening against? How much are properties (solubility, in particular) shifting these numbers around? And so on.
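
One common (if imperfect) proxy for some of these questions is the fraction of sp3 carbons. A quick sketch with RDKit (assumed available) - note that biphenyl, the example questioned a couple of paragraphs up, scores as completely flat by this metric:

    from rdkit import Chem
    from rdkit.Chem import rdMolDescriptors

    fragments = {
        "benzene":      "c1ccccc1",
        "biphenyl":     "c1ccc(-c2ccccc2)cc1",  # "3D" only in a weak sense
        "cyclohexane":  "C1CCCCC1",
        "quinuclidine": "C1CN2CCC1CC2",
    }

    for name, smi in fragments.items():
        mol = Chem.MolFromSmiles(smi)
        fsp3 = rdMolDescriptors.CalcFractionCSP3(mol)
        print(f"{name:>12}: Fsp3 = {fsp3:.2f}")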

The consortium's site is worth checking out as well for more on their activities. One interesting bit of information is that the teams ended up crossing off over 90% of the commercially available fragments due to flat structures, which sounds about right. And that takes them where you'd expect it to:

We have concluded that bespoke synthesis, rather than expansion through acquisition of currently available commercial fragment-sized compounds is the most appropriate way to develop the library to attain the desired profile. . .The need to synthesise novel molecules that expand biologically relevant chemical space demonstrates the significant role that academic synthetic chemistry can have in facilitating target evaluation and generating the most appropriate start points for drug discovery programs. Several groups are devising new and innovative methodologies (i.e. methyl activation, cascade reactions and enzymatic functionalisation) and techniques (e.g. flow and photochemistry) that can be harnessed to facilitate expansion of drug discovery-relevant chemical space.

And as long as they stay away from the frequent hitters/PAINS, they should end up with a good collection. I look forward to future publications from the group to see how things work out!

Comments (3) +