About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
February 18, 2015
I don't spend too much time on physical organic chemistry here on the blog, which in a way is a shame. The readership would dwindle, although probably not as much as when I talk about patent law and intellectual property. But physical organic is an area I've always enjoyed, intellectually, even though it was sometimes hard to infer that during my graduate school classes. (I doubt if I have the patience to be much good at it in a lab setting).
But there's a new paper out in Science from a team at Stanford's SLAC, home of some of the brightest and hardest X-ray beams that ever fried a target sample. (Here's the press release from Stanford). Working with the University of Stockholm, they claim to have actually detected X-ray spectral data (K-edge absorption) from the transition state in the catalytic oxidation of CO to carbon dioxide. This was done on the surface of a ruthenium catalyst, with extremely fast and precise heating from an optical laser to get things going.
For any non-chemistry types reading down this far, try imagining a chemical reaction as a journey from one valley to another, through a high mountain pass. "Elevation" in this landscape is how much energy the system has, and an irreversible reaction features things going, overall, into a lower valley/energy state. The absolute peak of the mountain transit, though, is the transition state for the reaction. It's a real thing, but it only lasts for one molecular vibration before it heads off down one slope or another. It's the highest-energy species in the whole path because it features all sorts of half-formed and half-broken bonds, the sort of state that molecules generally avoid ever getting themselves wrenched into. But since getting up to and over that particular hump is such a big part of any reaction, anything that stabilizes the TS will speed a reaction up, sometimes immensely, which is just the sort of thing we'd like to learn how to do on demand.
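To put a number on that last point: the standard Arrhenius relation says rates scale exponentially with barrier height, so even a modest stabilization of the transition state pays off in a big way. A quick back-of-the-envelope sketch (plain Arrhenius, not anything from the paper itself):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate_ratio(delta_ea_j_per_mol, temp_k=298.0):
    """Factor by which a reaction speeds up when the barrier
    (the transition-state energy) drops by delta_ea, all else equal."""
    return math.exp(delta_ea_j_per_mol / (R * temp_k))

# Stabilize the TS by ~6 kJ/mol - roughly one decent hydrogen bond's worth:
speedup = arrhenius_rate_ratio(6000)
print(f"{speedup:.1f}x faster")  # ~11x at room temperature
```

That exponential dependence is why enzymes (and good catalysts generally) can achieve such enormous rate accelerations from what look like small energetic effects.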
As that press release says, quite honestly, this was "long thought impossible", and my prediction is that there will be quite a few people who won't accept that it's been done. My x-ray fu is not strong enough, personally, to be able to offer an informed opinion. Even if this report is accurate, it's surely right on the edge of what's possible with some of the best equipment in the world, so you really have to know this area at a high level to critique it thoroughly. But what's reported is both plausible and interesting.
What they saw was that the oxygen molecules began to change first. Then the electron distribution began to change in the CO molecules, followed by a productive collision (some of the time) to form the transition state itself. And one of the interesting things about that was how many times it apparently collapsed back to the original molecules, rather than going on to product. This is going to be subtly different for every reaction, or so theory tells us, but if we are finally able to physically investigate such things we may find ourselves revising a few theories a bit. This particular reaction, taking place between two small molecules, has been modeled extensively at all levels of calculation, and the results seen fit very well, so it's not like we're going to be packing big swaths of human knowledge into the dumpster. But anything we can learn about transition states (and how to make them selectively happier and unhappier) is the key to chemistry as a whole.
Here's a YouTube video from the Stanford team on what they've been up to. We'll see how the rest of the analytical and theoretical chemistry community reacts to this work.
Category: Analytical Chemistry | Chemical News
February 4, 2015
As many will have heard, the New York State Attorney General's office is going after a number of herbal supplement retailers for selling products with poor quality controls. A range of supplements (including echinacea, garlic, gingko, and saw palmetto) were purchased, and analyzed by the "DNA barcode" technique at Clarkson U.
The results were not encouraging. The great majority of samples had no detectable DNA from the plants that supposedly make up the supplement. (In the case of Wal-Mart's store brand, 94% of all the samples failed on this count). But it's not like no plant DNA was found - no, there was rice, bean, wild carrot, asparagus, wheat, palm tree, daisy. . . and my personal favorite, dracena, a well-known houseplant. This parallels a 2013 study from Guelph, which found very similar mismatches and contamination.
Analyzing the contents of these herbal preparations is not easy, as this 2013 C&E News piece by Jyllian Kemsley details. That leads to one possible way out for the supplement makers (and salesmen): if you look at the labels for (say) GNC brand ginkgo biloba, you find that it's an extract. (The saw palmetto, on the other hand, is available as an extract or as the berries, which are presumably dried and powdered). It's not clear from the NYAG's press release which of these were tested, but if it's a solvent-derived plant extract formulation, you might well not expect to find any of the original plant's DNA. This, in fact, is the defense being offered by some of the spokespeople for the industry today, and it has some merit.
What pokes a hole in that defense, though, are the contaminants. Tablets or capsules of plant extracts should, by that argument, have no DNA in them at all. They especially should not show evidence of rice, beans, weeds, and houseplants. But these do, which makes a person wonder a bit about the manufacturing process. Another interesting fact that turned up was that echinacea and saw palmetto themselves turned up as contaminants in other supplements entirely, which also points to sloppy practice back at the factory, wherever that may be.
I stand by my former conclusion: that the herbal supplement industry is not a very funny joke. The 1994 law - thank you so much, Orrin Hatch - that enables these people also shields them from a great deal of regulatory scrutiny. As libertarian as my sympathies are sometimes, I have to admit that in medicine and health products the scamsters multiply like crazed cockroaches when you let up on them, and this industry is a massive, walloping example of just that problem. This article at The Atlantic is correct: "If one wanted to engineer a lucrative sham, the model of the supplement industry is a promising one". Lucrative it certainly is. And a sham, too.
Category: Analytical Chemistry | Snake Oil
December 12, 2014
There's a new and very useful paper out on the "molecular sponge" technique for crystallography (first blogged about here, with updates here and here). It's from the Clardy group at Harvard with collaboration from Argonne, in Acta Crystallographica, and you can tell by reading it that it's intended to put the whole method on a firmer footing.
That it does. Some of the data sets produced so far haven't really been up to the quality standards that most crystallographers feel comfortable standing behind, but the paper notes that synchrotron sources (as is often the case!) are a far better bet for useful structures than lab-scale equipment using Mo K-alpha X-rays. The paper also contains detailed advice on the production and handling of the MOF crystals themselves, how best to approach the structure refinement of the soaked guests, and much more. It's essential reading for anyone looking at this method. It's still not a casual stroll to high-quality structures, though:
Despite the described synthetic and crystallographic guidelines, it is imperative to note that the crystalline sponge method must be used judiciously, and that the results obtained are not always unambiguous or ‘crystal clear’, per se. Great care must be taken in interpreting the residual electron density for the guest molecules, especially in cases where the structure is not completely known, or if it exhibits conformational flexibility and thus disorder. With excessive disorder, poor data, over-modeling and/or making erroneous assumptions based upon misguided optimism, the disastrous outcome of drawing incorrect conclusions is very real. . .
Spoken like a crystallographer, for sure. These are early days for the whole MOF structure field, and it wouldn't surprise me at all to find the current "Zn-MOF" framework superseded by something with wider applicability. (Indeed, I think its inventors, the Fujita group, are busy trying to supersede it right now). One of the biggest limitations, which I've had a chance to explore personally, is the apparently complete incompatibility of the current frameworks with basic amines and/or heterocycles. But the idea has tremendous promise, and I'm happy to see this amount of work being put into it.
Update: forgot to add the link to the paper!
Category: Analytical Chemistry
December 11, 2014
I have to say, I didn't even know that this could be done. This paper from Angewandte Chemie describes a mass spec/NMR combination analysis that had never occurred to me as possible. The authors (from Ohio U. and Purdue) are looking at a common peptide ion seen in proteomic mass spec studies. And what they do is collect enough of the ions to run an NMR. That just seems bizarre, somehow, because I think that most of us picture the ionic fragments in a mass spec as these ghostly, esoteric things that live only in the outer-space-like vacuum of the instrument (and are present in vanishingly small amounts, at that). The idea of piling them up and running their NMR spectrum seems like someone taking a reflectance IR spectrum of an angel's wing.
That's because NMR, for most organic chemists, is a much more home-style, hands-on technique. We take measurable amounts of compounds, stick them into glass tubes, and use pipets to dissolve them up in solvent before taking them over to the NMR instrument. You get your hands on these things - and if you need to, you can go get the tube after it comes back out of the magnet, evaporate the solvent, and get all your sample back. Mass spec, on the other hand, uses ridiculously tiny amounts of material. It's really, really hard as an organic chemist to underload an LC/MS - it seems like you'll always get something, if there's something to get. You take a spec of sample, dissolve it in solvent, and then the machine takes a sip of it that a mosquito wouldn't bother with, because it doesn't need the rest.
The structures of these "b ions", as they're known in protein mass spec, have been the subject of a lot of work. Some N-terminal sequences give you oxazolones, some give you diketopiperazines, and others are still undetermined. This study used atmospheric pressure thermal dissociation (APTD), which gets around that vacuum-chamber problem, and they were able to condense material on the inside of the thermal dissociation tube. Bradykinin's b2 ion species was isolated, and NMR showed that it was a trifluoroacetate salt of the diketopiperazine structure. A model peptide, Gly-His-Gly, gave similar results.
So how many other mass spec species can this be applied to? And here's a weird thought: could this be a small-scale preparative method for some unusual syntheses? The thermal dissociation method is similar to pyrolysis, but I wonder if there are some sorts of structures that could be made this way that would be difficult to access by other routes. . .
Category: Analytical Chemistry
October 29, 2014
Google's "Google X" division, the part that works on odd high-risk high-reward projects, is apparently interested in diagnostic nanoparticles. That Wired article is pretty short on specifics, but the company's Andrew Conrad revealed a few details in a talk yesterday. The idea, apparently, is to use magnetic-cored nanoparticles to interrogate various body functions, then to reconcentrate them in some superficial vein for a readout. I had thought initially that there would be a blood draw at that point, which seems like less of a leap, but apparently the idea is for some sort of across-the-skin readout.
That's still not a crazy idea, although it has a ways to go (and an awful lot of work in animals). And I'm not sure how the noninvasive readout thing is supposed to work - I can imagine a lot more being done if you take some of these things back out. The nanoparticles could be tagged in various ways for sorting after removal, and (in theory) you could get quite a bit of information that way. The tricky part, either way, will be targeting the things that you can't get from just circulating - otherwise you'd just take a blood sample as usual and add your nanoparticle brew to it ex vivo (which is not such a bad idea, either, and has been worked on by many others). Perhaps that's why Google's team is going the extra step.
So they're presumably checking up on solid tissues somewhere, not the soluble blood factors, and that brings up a lot of pharmacokinetic issues. Many things that interact well enough to be diagnostic might well stick to the tissue instead of circulating back around, for example. And the ultimate fate of all these particles will be key - what effects will they have themselves, how well are they cleared, by what routes and at what rate, and so on. But I'll reserve judgment until we know more about this. Google is saying that they're not planning on developing this all the way themselves, but are trying to get other life sciences companies interested. How interested anyone gets might be a measure to watch.
Category: Analytical Chemistry
October 22, 2014
No matter how long you've been doing chemistry, there are still things that you come across that surprise you. Did you know that plain old L-phenylalanine has been one of the most difficult subjects ever for small-molecule crystallography? I sure didn't. But people have tried for decades to grow good enough crystals of it to decide what space group it's in. One big problem has been the presence of several polymorphs (see blog posts here and here), but it looks like the paper linked above has finally straightened things out.
Category: Analytical Chemistry | Chemical News
October 16, 2014
This week has brought news that Agilent is getting out of the NMR business, which brings an end to the Varian line of machines, one of the oldest in the business. (Agilent bought Varian in 2010). The first NMR I ever used was a Varian EM-360, which was the workhorse teaching instrument back then. A full 60 MHz of continuous wave for your resolving pleasure - Fourier transform? Superconducting magnets? Luxury! Why, we used to dream of. . .
I used many others in the years to come. But over time, the number of players in the NMR hardware market has contracted. You used to be able to walk into a good-sized NMR room and see machines from Varian, Bruker, JEOL, Oxford, GE (edit - added them) and once in a while an oddity like the 80-MHz IBM-brand machine that I used to use at Duke thirty years ago. No more - Bruker is now the major player. Their machines are good ones (and they've been in the business a while, too), but I do wish that they had some competition to keep them on their toes.
How come there isn't any? It's not that NMR spectroscopy is a dying art. It's as useful as ever, if not even more so. But I think that the market for equipment is pretty saturated. Every big company and university has plenty of capacity, and will buy a new machine only once in a while. The smaller companies are usually fixed pretty well, too, thanks to the used equipment market. And most of those colleges that used to have something less than a standard 300 MHz magnet have worked their way up to one.
There's not much room for a new company to come in and say that their high-field magnets are so much better than the existing ones, either, because the hardware has also reached something of a plateau. You can go out and buy a 700 MHz instrument (and Bruker no doubt wishes that you would), and that's enough to do pretty much any NMR experiment that you can think of. 1000 MHz instruments exist, but I'm not sure how many times you run into a situation where one of those would do the job for you, but a 700 wouldn't. I'm pretty sure that no one even knows how to build a 2000 MHz NMR, but if they did, the number sold would probably be countable on the fingers of one hand. Someone would have to invent a great reason for such a machine to exist - this isn't supercomputing, where the known applications can soak up all the power you can throw at them.
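For reference, those instrument names are proton Larmor frequencies, and the required magnetic field scales linearly with them - which is part of why a 2000 MHz machine is such a daunting prospect. A quick conversion, using the standard 1H gyromagnetic ratio:

```python
GAMMA_1H = 42.577  # 1H gyromagnetic ratio / 2*pi, in MHz per tesla

def field_for_frequency(mhz):
    """Magnet field (in tesla) needed to put protons at a given
    Larmor frequency: B0 = frequency / (gamma / 2*pi)."""
    return mhz / GAMMA_1H

for freq in (300, 700, 1000, 2000):
    print(f"{freq} MHz -> {field_for_frequency(freq):.1f} T")
# 300 MHz ->  7.0 T
# 700 MHz -> 16.4 T
# 1000 MHz -> 23.5 T
# 2000 MHz -> 47.0 T
```

A hypothetical 2000 MHz machine would need roughly 47 tesla of stable, homogeneous field - far beyond what any persistent superconducting magnet can deliver today.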
So farewell to the line of Varian NMR machines. Generations of chemists have used their equipment, but Bruker is the one left standing.
Category: Analytical Chemistry
September 23, 2014
If you want to really push the frontiers of analytical chemistry, try making compounds of the superheavy elements. Science is reporting the characterization of seaborgium hexacarbonyl, which gives us all a chance to use Sg in an empirical formula. We're not going to be using it too often, though, because this work was conducted on eighteen atoms of Sg, and that's at least as hard as it sounds. You have several seconds in which to do all your work, and then it's back to the gigantic particle accelerator to see if you can make another atom or two. Separating these from the various decay products and other stuff is one of the hardest parts of that process, and was a key step in getting this experiment to work at all.
The reason for going to all this trouble was the predicted behavior of the valence electrons. Elements of this size are rather strange in that regard, in that the outer inner-shell electrons (corrected: jet-lag, I think - DBL) are relativistic - Sg's have velocities of about 0.8c, which leads to some unusual effects. The element itself doesn't differ as much from its other periodic relatives (as opposed to 104 Rutherfordium and 105 Dubnium), but compounds leaving some outer-shell electrons free were still calculated to show some changes. In this case, the hexacarbonyl had similar behavior to the molybdenum and tungsten complexes, but its properties only come out right if you take the relativistic effects into account. So both Mendeleev and Einstein come out well in this one.
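The rough rule of thumb behind that 0.8c figure is the hydrogen-like estimate v/c ≈ Zα for the innermost electrons - a crude Bohr-model number, not anything from the paper, but it lands right where they say:

```python
FINE_STRUCTURE = 1 / 137.036  # fine-structure constant, alpha

def one_s_velocity_fraction(z):
    """Crude hydrogen-like (Bohr-model) estimate of a 1s electron's
    speed as a fraction of c: v/c ~ Z * alpha. Ignores screening and
    proper relativistic treatment, but gives the right scale."""
    return z * FINE_STRUCTURE

print(f"Sg (Z=106): v/c ~ {one_s_velocity_fraction(106):.2f}")  # ~0.77
```

At that speed the relativistic mass increase contracts the s orbitals, which in turn reshuffles the energies of the outer electrons - the source of the "unusual effects" in superheavy-element chemistry.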
Irrationally, this makes seaborgium more of a "real" element to me (there have been a couple of other compounds reported before as well). Single atoms seem to me to be the province of physics, but once you start describing compounds, it's chemistry.
Category: Analytical Chemistry | Chemical News
September 15, 2014
Last year I mentioned a paper that described the well-known drug tramadol as a natural product, isolated from a species of tree in Cameroon. Rather high concentrations were found in the root bark, and the evidence looked solid that the compound was indeed being made biochemically.
Well, thanks to chem-blogger Quintus (and a mention on Twitter by See Arr Oh), I've learned that this story has taken a very surprising turn. This new paper in Ang. Chem. investigates the situation more closely. And you can indeed extract tramadol from the stated species - there's no doubt about it. You can extract three of its major metabolites, too - its three major mammalian metabolites. That's because, as it turns out, tramadol is given extensively to cattle (!) in the region, so much of it that the parent drug and its metabolites have soaked into the soil enough for the African peach/pincushion tree to have taken it up into its roots. I didn't see that one coming.
The farmers apparently take the drug themselves, at pretty high dosages, saying that it allows them to work without getting tired. Who decided it would be a good thing to feed to the cows, no one knows, but the farmers feel that it benefits them, too. So in that specific region in the north of Cameroon, tramadol contamination in the farming areas has built up to the point that you can extract the stuff from tree roots. Good grief. In southern Cameroon, the concentrations are orders of magnitude lower, and neither the farmers nor the cattle have adopted the tramadol-soaked lifestyle. Natural products chemistry is getting trickier all the time.
Category: Analytical Chemistry | Chemical News | Natural Products
August 19, 2014
How many ways do we have to differentiate samples of closely related compounds? There's NMR, of course, and mass spec. But what if two compounds have the same mass, or have unrevealing NMR spectra? Here's a new paper in JACS that proposes another method entirely.
Well, maybe not entirely, because it still relies on NMR. But this one is taking advantage of the sensitivity of 19F NMR shifts to molecular interactions (the same thing that underlies its use as a fragment-screening technique). The authors (Timothy Swager and co-workers at MIT) have prepared several calixarene host molecules which can complex a variety of small organic guests. The host structures feature nonequivalent fluorinated groups, and when another molecule binds, the 19F NMR peaks shift around compared to the unoccupied state. (Shown are a set of their test analytes, plotted by the change in three different 19F shifts).
That's a pretty ingenious idea - anyone who's done 19F NMR work will hear about the concept and immediately say "Oh yeah - that would work, wouldn't it?" But no one else seems to have thought of it. Spectra of their various host molecules show that chemically very similar molecules can be immediately differentiated (such as acetonitrile versus propionitrile), and structural isomers of the same mass are also instantly distinguished. Mixtures of several compounds can also be assigned component by component.
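The identification step itself is conceptually simple: each analyte becomes a point in (here, three-dimensional) 19F shift space, and an unknown gets matched to the nearest library entry. A toy sketch of that idea - the shift values below are invented purely for illustration, not taken from the paper:

```python
import math

# Hypothetical library: analyte -> changes in three 19F shifts (ppm)
# upon guest binding. These numbers are made up for illustration.
LIBRARY = {
    "acetonitrile":  (-1.2, 0.4, 2.1),
    "propionitrile": (-0.8, 0.9, 1.5),
    "butyronitrile": (-0.5, 1.1, 1.0),
}

def identify(observed_shifts):
    """Match an unknown's 19F shift fingerprint to the closest
    library entry by Euclidean distance in shift space."""
    return min(LIBRARY, key=lambda name: math.dist(observed_shifts, LIBRARY[name]))

print(identify((-1.1, 0.5, 2.0)))  # acetonitrile
```

With more nonequivalent fluorines per host, the fingerprint space gets higher-dimensional and chemically similar guests become even easier to separate - which is presumably the design logic behind using several inequivalent fluorinated groups.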
This paper concentrates on nitriles, which all seem to bind in a similar way inside the host molecules. That means that solvents like acetone and ethyl acetate don't interfere at all, but it also means that these particular hosts are far from universal sensors. But no one should expect them to be. The same 19F shift idea can be applied across all sorts of structures. You could imagine working up a "pesticide analysis suite" or a "chemical warfare precursor suite" of well-chosen host structures, sold together as a detection kit.
This idea is going to be competing with LC/MS techniques. Those, when they're up and running, clearly provide more information about a given mixture, but good reproducible methods can take a fair amount of work up front. This method seems to me to be more of a competition for something like ELISA assays, answering questions like "Is there any of compound X in this sample?" or "Here's a sample contaminated with an unknown member of Compound Class Y. Which one is it?" The disadvantage there is that an ELISA doesn't need an NMR (with a fluorine probe) handy.
But it'll be worth seeing what can be made of it. I wonder if there could be host molecules that are particularly good at sensing/complexing particular key functional groups, the way that the current set picks up nitriles? How far into macromolecular/biomolecular space can this idea be extended? If it can be implemented in areas where traditional NMR and LC/MS have problems, it could find plenty of use.
Category: Analytical Chemistry
July 18, 2014
There's a new report in the literature on the mechanism of thalidomide, so I thought I'd spend some time talking about the compound. Just mentioning the name to anyone familiar with its history is enough to bring on a shiver. The compound, administered as a sedative/morning sickness remedy to pregnant women in the 1950s and early 1960s, famously brought on a wave of severe birth defects. There's a lot of confusion about this event in the popular literature, though - some people don't even realize that the drug was never approved in the US, although this was a famous save by the (then much smaller) FDA and especially by Frances Oldham Kelsey. And even those who know a good amount about the case can be confused by the toxicology, because it's confusing: no phenotype in rats, but big reproductive tox trouble in mice and rabbits (and humans, of course). And as I mentioned here, the compound is often used as an example of the far different effects of different enantiomers. But practically speaking, that's not the case: thalidomide has a very easily racemized chiral center, which gets scrambled in vivo. It doesn't matter if you take the racemate or a pure enantiomer; you're going to get both of the isomers once it's in circulation.
The compound's horrific effects led to a great deal of research on its mechanism. Along the way, thalidomide itself was found to be useful in the treatment of leprosy, and in recent years it's been approved for use in multiple myeloma and other cancers. (This led to an unusual lawsuit claiming credit for the idea). It's a potent anti-angiogenic compound, among other things, although the precise mechanism is still a matter for debate - in vivo, the compound has effects on a number of wide-ranging growth factors (and these were long thought to be the mechanism underlying its effects on embryos). Those embryonic effects complicate the drug's use immensely - Celgene, who got it through trials and approval for myeloma, have to keep a very tight patient registry, among other things, and control its distribution carefully. Experience has shown that turning thalidomide loose will always end up with someone (i.e. a pregnant woman) getting exposed to it who shouldn't be - it's gotten to the point that the WHO no longer recommends it for use in leprosy treatment, despite its clear evidence of benefit, and it's down to just those problems of distribution and control.
But in 2010, it was reported that the drug binds to a protein called cereblon (CRBN), and this mechanism implicated the ubiquitin ligase system in the embryonic effects. That's an interesting and important pathway - ubiquitin is, as the name implies, ubiquitous, and addition of a string of ubiquitins to a protein is a universal disposal tag in cells: off to the proteasome, to be torn to bits. It gets stuck onto exposed lysine residues by the aforementioned ligase enzyme.
But less-thorough ubiquitination is part of other pathways. Other proteins can have ubiquitin recognition domains, so there are signaling events going on. Even poly-ubiquitin chains can be part of non-disposal processes - the usual oligomers are built up using a particular lysine residue on each ubiquitin in the chain, but there are other lysine possibilities, and these branch off into different functions. It's a mess, frankly, but it's an important mess, and it's been the subject of a lot of work over the years in both academia and industry.
The new paper has the crystal structure of thalidomide (and two of its analogs) bound to the ubiquitin ligase complex. It looks like they keep one set of protein-protein interactions from occurring while the ligase end of things is going after other transcription factors to tag them for degradation. Ubiquitination of various proteins could be either up- or downregulated by this route. Interestingly, the binding is indeed enantioselective, which suggests that the teratogenic effects may well be down to the (S) enantiomer, not that there's any way to test this in vivo (as mentioned above). But the effects of these compounds in myeloma appear to go through the cereblon pathway as well, so there's never going to be a thalidomide-like drug without reproductive tox. If you could take it a notch down the pathway and go for the relevant transcription factors instead, post-cereblon, you might have something, but selective targeting of transcription factors is a hard row to hoe.
Category: Analytical Chemistry | Biological News | Cancer | Chemical News | Toxicology
July 8, 2014
There are all sorts of headlines today about how there's going to be a simple blood test for Alzheimer's soon. Don't believe them.
This all comes from a recent publication in the journal Alzheimer's and Dementia, from a team at King's College (London) and the company Proteome Sciences. It's a perfectly good paper, and it does what you'd think: they quantified a set of proteins in a cohort of potential Alzheimer's patients and checked to see if any of them were associated with progression of the disease. From 26 initial protein candidates (all of them previously implicated in Alzheimer's), they found that a panel of ten seemed to give a prediction that was about 87% accurate.
That figure was enough for a lot of major news outlets, who have run with headlines like "Blood test breakthrough" and "Blood test can predict Alzheimer's". Better ones said something more like "Closer to blood test" or "Progress towards blood test", but that's not so exciting and clickable, is it? This paper may well represent progress towards a blood test, but as its own authors, to their credit, are at pains to say, a lot more work needs to be done. 87%, for starters, is interesting, but not as good as it needs to be - that's still a lot of false negatives, and who knows how many false positives.
That all depends on what the rate of Alzheimer's is in the population you're screening. As Andy Extance pointed out on Twitter, these sorts of calculations are misunderstood by almost everyone, even by people who should know better. A 90 per cent accurate test on a general population whose Alzheimer's incidence rate is 1% would, in fact, be wrong 92% of the time. Here's a more detailed writeup I did in 2007, spurred by reports of a similar Alzheimer's diagnostic back then. And if you have a vague feeling that you heard about all these issues (and another blood test) just a few months ago, you're right.
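The arithmetic behind that 92% figure is worth spelling out (taking "90 per cent accurate" to mean 90% sensitivity and 90% specificity, which is an assumption about what the headline number means):

```python
def fraction_of_positives_wrong(sensitivity, specificity, prevalence):
    """Of everyone who tests positive, what fraction does NOT have
    the disease? This is 1 minus the positive predictive value."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return false_positives / (true_positives + false_positives)

# A "90% accurate" test on a population with 1% Alzheimer's incidence:
print(f"{fraction_of_positives_wrong(0.90, 0.90, 0.01):.0%}")  # 92%
```

In a population of 10,000, only 100 people have the disease: the test flags 90 of them, but it also flags 10% of the 9,900 healthy people - 990 false positives swamping the 90 true ones. That's why a test that looks fine in an enriched clinical cohort can be nearly useless as a general screen.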
Even after that statistical problem, things are not as simple as the headlines would have you believe. This new work is a multivariate model, because a number of factors were found to affect the levels of these proteins. The age and gender of the patient were two real covariants, as you'd expect, but the duration of plasma storage before testing also had an effect, as did, apparently, the center where the collection was done. That does not sound like a test that's ready to be rolled out to every doctor's office (which is again what the authors have been saying themselves). There were also different groups of proteins that could be used for a prediction model using the set of Mild Cognitive Impairment (MCI) patients, versus the ones that already appeared to show real Alzheimer's signs, which also tells you that this is not a simple turn-the-dial-on-the-disease setup. Interestingly, they also looked at whether adding brain imaging data (such as hippocampus volume) helped the prediction model. This, though, either had no real effect on the prediction accuracy, or even reduced it somewhat.
So the thing to do here is to run this on larger patient cohorts to get a more real-world idea of what the false negative and false positive rates are, which is the sort of obvious suggestion that is appearing in about the sixth or seventh paragraph of the popular press writeups. This is just what the authors are planning, naturally - they're not the ones who wrote the newspaper stories, after all. This same collaboration has been working on this problem for years now, I should add, and they've had ample opportunity to see their hopes not quite pan out. Here, for example, is a prediction of an Alzheimer's blood test entering the clinic in "12 to 18 months", from . . .well, 2009.
Update: here's a critique of the statistical approaches used in this paper - are there more problems with it than were first apparent?
Category: Alzheimer's Disease | Analytical Chemistry | Biological News
July 7, 2014
Catalysts are absolutely vital to almost every field of chemistry. And catalysis, way too often, is voodoo or a close approximation thereof. A lot of progress has been made over the years, and in some systems we have a fairly good idea of what the important factors are. But even in the comparatively well-worked-out areas one finds surprises and hard-to-explain patterns of reactivity, and when it comes to optimizing turnover, stability, side reactions, and substrate scope, there's really no substitute for good old empirical experimentation most of the time.
The heterogeneous catalysts are especially sorcerous, because the reactions usually take place on a poorly characterized particle surface. Nanoscale effects (and even downright quantum mechanical effects) can be important, but these things are not at all easy to get a handle on. Think of the differences between a lump of, say, iron and small particles of the same. The surface area involved (and the surface/volume ratio) is extremely different, just for starters. And when you get down to very small particles (or bits of a rough surface), you find very different behaviors because these things are no longer a bulk material. Each atom becomes important, and can perhaps behave differently.
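To put a rough number on that surface-area point, here's a back-of-the-envelope sketch. It assumes idealized smooth spheres, which real catalyst particles most certainly are not, but the scaling is what matters:

```python
# Surface-to-volume ratio of an idealized spherical particle.
# For a sphere, S/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r, so the ratio
# climbs by nine orders of magnitude going from a 1 cm lump to a 1 nm
# particle. Geometry only; real nanoparticle surfaces are far messier.

def surface_to_volume(radius_m):
    """S/V ratio (per meter) for a sphere of the given radius."""
    return 3.0 / radius_m

for r in (0.01, 1e-6, 1e-9):  # 1 cm lump, 1 micron, 1 nm particle
    print(f"r = {r:g} m  ->  S/V = {surface_to_volume(r):.3g} per meter")
```

At the nanometer end of that table, a large fraction of the atoms sit at the surface, which is why "each atom becomes important."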
Now imagine dealing with a heterogeneous catalyst that's not a single pure substance, but is perhaps an alloy of two or more metals, or is some metal complex that itself is adsorbed onto the surface of another finely divided solid, or needs small amounts of some other additive to perform well, etc. It's no mystery why so much time and effort goes into finding good catalysts, because there's plenty of mystery built into them already.
Here's a new short review article in Angewandte Chemie on some of the current attempts to lift some of the veils. A paper earlier this year in Science illustrated a new way of characterizing surfaces with X-ray diffraction, and at short time scales (seconds) for such a technique. Another recent report in Nature Communications describes a new X-ray tomography system to try to characterize catalyst particles.
None of these are easy techniques, and at the moment they require substantial computing power, very close attention to sample preparation, and (in many cases) the brightest X-ray synchrotron sources you can round up. But they're providing information that no one has ever had before about (in these examples) palladium surfaces and nanoparticle characteristics, with more on the way.
Category: Analytical Chemistry | Chemical News
May 22, 2014
C&E News has a story today that is every medicinal chemist's nightmare. We are paid to find and characterize chemical matter, and to develop it (by modifying structures and synthesizing analogs) into something that can be a drug. Key to that whole process is knowing what structure you have in the first place, and now my fellow chemists will see where this is going and begin to cringe.
Shown at left are two rather similar isomeric structures. The top one was characterized at Penn State a few years ago by Wafik El-Deiry's lab as a stimulator of the TRAIL pathway, which could be a useful property against some tumor types (especially glioblastoma). (Article from Nature News here). Their patent, US8673923, was licensed to Oncoceutics, a company formed by El-Deiry, and the compound (now called ONC201) was prepared for clinical trials.
Meanwhile, Kim Janda at Scripps was also interested in TRAIL compounds, and his group resynthesized TIC10. But their freshly prepared material was totally inactive - and let me tell you, this sort of thing happens all too often. The usual story is that the original "hit" wasn't clean, and that its activity was due to metal contamination or colorful gunk, but that wasn't the case here. Janda requested a sample of TIC10 from the National Cancer Institute, and found that (1) it worked in the assays, and (2) it was clean. That discrepancy was resolved when careful characterization, including X-ray crystallography, showed that (3) the original structure had been misassigned.
It's certainly an honest mistake. Organic chemists will look at those two structures and realize that they're both equally plausible, and that you could end up with either one depending on the synthetic route (it's a question of which of two nitrogens gets alkylated first, and with what). It's also clear that telling one from the other is not trivial. They will, of course, have the same molecular weight, and any mass spec differences will be subtle. The same goes for the NMR spectra - they're going to look very similar indeed, and a priori it could be very hard to have any confidence that you'd assigned the right spectrum to the right structure. Janda's lab saw some worrisome correlation patterns in the HMBC spectra, but X-ray was the way to go, clearly - these two molecules have quite different shapes, and the electron density map would nail things down unambiguously.
To confuse everyone even more, the Angewandte Chemie paper reports that a commercial supplier (MedKoo Biosciences) has begun offering what they claim is TIC10, but their compound is yet a third isomer, which has no TRAIL activity, either. (It's the "linear" isomer from the patent, but with the 2-methylbenzyl on the nitrogen in the five-membered ring instead).
So Janda's group had found that the published structure was completely dead, and that the newly assigned structure was the real active compound. They then licensed that structure to Sorrento Therapeutics, who are. . .interested in taking it towards clinical trials. Oh boy. This is the clearest example of a blown med-chem structural assignment that I think I've ever seen, and it will be grimly entertaining to see what happens next.
When you go back and look at the El-Deiry/Oncoceutics patent, you find that its claim structure is pretty unambiguous. TIC10 was a known compound, in the NCI collection, so the patent doesn't claim it as chemical matter. Claim 1, accordingly, is written as a method-of-treatment:
"A method of treatment of a subject having brain cancer, comprising: administering to the subject a pharmaceutical composition comprising a pharmaceutically effective amount of a compound of Formula (I) or a pharmaceutically acceptable salt thereof; and a pharmaceutically accepted carrier."
And it's illustrated by that top structure shown above - the incorrect one. That is the only chemical structure that appears in the patent, and it does so again and again. All the other claims are written dependent on Claim 1, for treatment of different varieties of tumors, etc. So I don't see any way around it: the El-Deiry patent unambiguously claims the use of one particular compound, and it's the wrong compound. In fact, if you wanted to go to the trouble, you could probably invalidate the whole thing, because it can be shown (and has been) that the chemical structure in Claim 1 does not produce any of the data used to back up the claims. It isn't active at all.
And that makes this statement from the C&E News article a bit hard to comprehend: "Lee Schalop, Oncoceutics’ chief business officer, tells C&EN that the chemical structure is not relevant to Oncoceutics’ underlying invention. Plans for the clinical trials of TIC10 are moving forward." I don't see how. A quick look through the patent databases does not show me anything else that Oncoceutics could have that would mitigate this problem, although I'd be glad to be corrected on this point. Their key patent, or what looks like it to me, has been blown up. What do they own? Anything? But that said, it's not clear what Sorrento owns, either. The C&E News article quotes two disinterested patent attorneys as saying that Sorrento's position isn't very clear, although the company says that its claims have been written with these problems in mind. Could, for example, identifying the active form have been within the abilities of someone skilled in the art? That application doesn't seem to have published yet, so we'll see what they have at that point.
But let's wind up by emphasizing that "skilled in the art" point. As a chemist, you'd expect me to say this, but this whole problem was caused by a lack of input from a skilled medicinal chemist. El-Deiry's lab has plenty of expertise in cancer biology, but when it comes to chemistry, it looks like they just took what was on the label and ran with it. You never do that, though. You never, ever, advance a compound as a serious candidate without at least resynthesizing it, and you never patent a compound without making sure that you're patenting the right thing. What's more, the Oncoceutics patent estate in this area, unless I'm missing some applications that haven't published yet, looks very, very thin.
One compound? You find one compound that works and you figure that it's time to form a company and take it into clinical trials, because one compound equals one drug? I was very surprised, when I saw the patent, that there was no Markush structure and no mention of any analogs whatsoever. No medicinal chemist would look at a single hit out of the NCI collection and say "Well, we're done - let's patent that one single compound and go cure glioblastoma". And no competent medicinal chemist would look at that one hit and say "Yep, LC/MS matches what's on the label - time to declare it our development candidate". There was (to my eyes) a painfully inadequate chemistry follow-through on TIC10, and the price for that is now being paid. Big time.
Category: Analytical Chemistry | Cancer | Patents and IP
May 20, 2014
Just a couple of months ago, I wrote about how xenon has been used as a performance-enhancing drug. Well, now it's banned. But I'd guess that they're going to have to look for its downstream effects, because detecting xenon itself, particularly a good while after exposure, is going to be a tall order. . .
Category: Analytical Chemistry
March 12, 2014
This paper is outside of my usual reading range, but when I saw the title, the first thing that struck me was "NMR probes". The authors describe a very sensitive way to convert weak radio/microwave signals to an optical readout, with very low noise. And looking over the paper, that's one of the applications they suggest as well, so that's about as far into physics as I'll get today. But the idea looks quite interesting, and if it means that you can get higher sensitivity without having to use cryoprobes and other expensive gear, then speed the day.
Category: Analytical Chemistry
November 20, 2013
There's a report of a new technique to solve protein crystal structures on a much smaller scale than anyone's done before. Here's the paper: the team at the Howard Hughes Medical Institute has used cryo-electron microscopy to do electron diffraction on microcrystals of lysozyme protein.
We present a method, ‘MicroED’, for structure determination by electron crystallography. It should be widely applicable to both soluble and membrane proteins as long as small, well-ordered crystals can be obtained. We have shown that diffraction data at atomic resolution can be collected and a structure determined from crystals that are up to 6 orders of magnitude smaller in volume than those typically used for X-ray crystallography.
For difficult targets such as membrane proteins and multi-protein complexes, screening often produces microcrystals that require a great deal of optimization before reaching the size required for X-ray crystallography. Sometimes such size optimization becomes an impassable barrier. Electron diffraction of microcrystals as described here offers an alternative, allowing this roadblock to be bypassed and data to be collected directly from the initial crystallization hits.
X-ray diffraction is, of course, the usual way to determine crystal structures. Electrons can do the same thing for you, but practically speaking, that's been hard to realize in a general sense. Protein crystals don't stand up very well to electron beams, particularly if you crank up the intensity in order to see lots of diffraction spots. Electrons interact strongly with atoms, which is nice, because you don't need as big a sample to get diffraction, but they interact so strongly that things start falling apart pretty quickly. You can collect more data by zapping more crystals, but the problem is that you don't know how these things are oriented relative to each other. That leaves you with a pile of jigsaw-puzzle diffraction data and no easy way to fit it together. So the most common application for protein electron crystallography has been for samples that crystallize in a thin film or monolayer - that way, you can continue collecting diffraction data while being a bit more sure that everything is facing in the same direction.
In this new technique, the intensity of the electron beam is turned down greatly, and the crystal itself is precisely rotated through 90 one-degree increments. The team has developed methods to handle the data and combine it into a useful set, and was able to get 2.9-angstrom resolution from lysozyme crystals that are (as described above) far smaller than the usual standard for X-ray work, as shown. There's been a lot of work over the years to figure out how low you can set the electron intensity and still get useful data in such experiments, and this work started off by figuring out how much total radiation the crystals could stand and dividing that out into portions.
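That dose-fractionation bookkeeping is simple enough to sketch. The dose budget below is an illustrative number, not a figure from the paper:

```python
# Dose fractionation: take the total electron dose a crystal can tolerate
# before radiation damage degrades the diffraction, and spread it evenly
# over the rotation series. Both numbers here are assumptions for
# illustration, not values from the MicroED paper.
total_tolerable_dose = 9.0   # electrons per square angstrom, assumed budget
n_frames = 90                # the 90 one-degree rotation increments

dose_per_frame = total_tolerable_dose / n_frames
print(f"Dose per frame: {dose_per_frame:.2f} e-/A^2")  # 0.10
```

Each individual frame is therefore extremely noisy, which is why the data-merging methods are the real heart of the technique.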
The paper, commendably, has a long section detailing how they tried to check for bias in their structure models, and the data seem pretty solid, for what that's worth coming from a non-crystallographer like me. This is still a work in progress, though - lysozyme is about the easiest example possible, for one thing. The authors describe some of the improvements in data collection and handling that would help make this a regular structural biology tool, and I hope that it does so. There's a lot of promise here - being able to pull structures out of tiny "useless" protein crystals would be a real advance.
Category: Analytical Chemistry
September 16, 2013
I wrote here about a very promising X-ray crystallography technique which produces structures of molecules that don't even have to be crystalline. Soaking a test substance into a metal-organic-framework (MOF) lattice gave enough repeating order that x-ray diffraction was possible.
The most startling part of the paper, other than the concept itself, was the determination of the structure of the natural product miyakosyne A. That one's not crystalline, and will never be crystalline, but the authors not only got the structure, but were able to assign its absolute stereochemistry. (The crystalline lattice is full of heavy atoms, giving you a chance for anomalous dispersion).
Unfortunately, though, this last part has now been withdrawn. A correction at Nature (as of last week) says that "previously unnoticed ambiguities" in the data, including "non-negligible disorder" in the molecular structure have led to the configuration being wrongly assigned. They say that their further work has demonstrated that they can determine the chemical structure of the compound, but cannot assign its stereochemistry.
The other structures in the paper have not been called into question. And here's where I'd like to throw things open for discussion. This paper has been the subject of a great deal of interest since it came out, and I know of several groups that have been looking into it. It is my understanding that the small molecule structures in the Nature paper can indeed be reproduced. But. . .here we move into unexplored territory. Because if you look at that paper, you'll note that none of the structures feature basic amines or nitrogen heterocycles, just to pick two common classes of compounds that are of great interest to medicinal chemists and natural products chemists alike. And I have yet to hear of anyone getting this MOF technique to work with any such structures, although I am aware of numerous attempts to do so.
So far, then, the impression I have is that this method is certainly not as general as one might have hoped. I would very much enjoy being wrong about this, because it has great potential. It may be that other MOF structures will prove more versatile, and there are certainly a huge number of possibilities to investigate. But I think that the current method needs a lot more work to extend its usefulness. Anyone with experiences in this area that they would like to share, please add them in the comments.
Category: Analytical Chemistry
September 12, 2013
Well, nearly nothing. That's the promise of a technique that's been published by the Ernst lab from the University of Basel. They first wrote about this in 2010, in a paper looking for ligands to the myelin-associated glycoprotein (MAG). That doesn't sound much like a traditional drug target, and so it isn't. It's part of a group of immunoglobulin-like lectins, and they bind things like sialic acids and gangliosides, and they don't seem to bind them very tightly, either.
One of these sialic acids was used as their starting point, even though its affinity is only 137 micromolar. They took this structure and hung a spin label off it, with a short chain spacer. The NMR-savvy among you will already see an application of Wolfgang Jahnke's spin-label screening idea (SLAPSTIC) coming. That's based on the effect of an unpaired electron in NMR spectra - it messes with the relaxation times of protons in the vicinity, and this can be used to detect whatever might be nearby. With the right pulse sequence, you can easily detect any protons on any other molecules or residues out to about 15 or 20 Angstroms from the spin label.
Jahnke's group at Novartis attached spin labels to proteins and used these to find ligands by NMR screening. The NMR field has a traditional bias towards bizarre acronyms, which sometimes calls for ignoring a word or two, so SLAPSTIC stands for "Spin Labels Attached to Protein Side chains as a Tool to identify Interacting Compounds". Ernst's team took their cue from yet another NMR ligand-screening idea, the Abbott "SAR by NMR" scheme. That one burst on the scene in 1996, and caused a lot of stir at the time. The idea was that you could use NMR of labeled proteins, with knowledge of their structure, to find sets of ligands at multiple binding sites, then chemically stitch these together to make a much more potent inhibitor. (This was fragment-based drug discovery before anyone was using that phrase).
The theory behind this idea is perfectly sound. It's the practice that turned out to be the hard part. While fragment linking examples have certainly appeared (including Abbott examples), the straight SAR-by-NMR technique has apparently had a very low success rate, despite (I'm told by veterans of other companies) a good deal of time, money, and effort in the late 1990s. Getting NMR-friendly proteins whose structure was worked out, finding multiple ligands at multiple sites, and (especially) getting these fragments linked together productively has not been easy at all.
But Ernst's group has brought the idea back. They did a second-site NMR screen with a library of fragments and their spin-labeled sialic thingie, and found that 5-nitroindole was bound nearby, with the 3-position pointed towards the label. That's an advantage of this idea - you get spatial and structural information without having to label the protein itself, and without having to know anything about its structure. SPR experiments showed that the nitroindole alone had affinity up in the millimolar range.
They then did something that warmed my heart. They linked the fragments by attaching a range of acetylene and azide-containing chains to the appropriate ends of the two molecules and ran a Sharpless-style in situ click reaction. I've always loved that technique, partly because it's also structure-agnostic. In this case, they did a 3x4 mixture of coupling partners, potentially forming 24 triazoles (syn and anti). After three days of incubation with the protein, a new peak showed up in the LC/MS corresponding to a particular combination. They synthesized both possible candidates, and one of them was 2 micromolar, while the other was 190 nanomolar.
That molecule is shown here - the percentages in the figure are magnetization transfer in STD experiments, with the N-acetyl set to 100% as reference. And that tells you that both ends of the molecule are indeed participating in the binding, as that greatly increased affinity would indicate. (Note that the triazole appears to be getting into the act, too). That affinity is worth thinking about - one part of this molecule was over 100 micromolar, and the other was millimolar, but the combination is 190 nanomolar. That sort of effect is why people keep coming back to fragment linking, even though it's been a brutal thing to get to work.
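The reason people keep coming back, despite the failures, is that binding free energies add (roughly) while dissociation constants multiply. A quick check with the affinities quoted above, taking the two fragments as 137 micromolar and 1 millimolar at 298 K (the millimolar figure is my assumption for the indole):

```python
# Free energy of binding from a dissociation constant: dG = RT * ln(Kd).
# If two linked fragments kept their full individual binding energies,
# the linked Kd would be roughly the product of the fragment Kds.
import math

RT = 0.593  # kcal/mol at 298 K

def dG(kd_molar):
    """Binding free energy in kcal/mol for a given Kd (in molar)."""
    return RT * math.log(kd_molar)

parts = dG(137e-6) + dG(1e-3)   # sum of the two fragment energies
linked = dG(190e-9)             # the observed linked compound

print(f"Sum of fragment energies: {parts:.1f} kcal/mol")
print(f"Linked compound:          {linked:.1f} kcal/mol")
```

The linked compound comes within a fraction of a kcal/mol of the full sum, which is about as good as linking ever gets - the linker usually exacts a substantial entropic or strain penalty instead.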
When I read this paper at the time, I thought that it was very nice, and I filed it in my "Break Glass in Case of Emergency" section for interesting and unusual screening techniques. One thing that worried me, as usual, was whether this was the only system this had ever worked on, or ever would. So I was quite happy to see a new paper from the Ernst group this summer, in which they did it again. This time, they found a ligand for E-selectin, another one of these things that you don't expect to ever find a decent small molecule for.
In this case, it's still not what an organic chemist would be likely to call a "decent small molecule", because they started with something akin to sialyl Lewis, which is already a funky tetrasaccharide. Their trisaccharide derivative had roughly 1 micromolar affinity, with the spin label attached. A fragment screen against E-selectin had already identified several candidates that seemed to bind to the protein, and the best guess was that they probably wouldn't be binding in the carbohydrate recognition region. Doing the second-site screen as before gave them, as fate would have it, 5-nitroindole as the best candidate. (Now my worry is that this technique only works when you run it with 5-nitroindole. . .)
They worked out the relative geometry of binding from the NMR experiments, and set about synthesizing various azide/acetylene combinations. In this case, the in situ Sharpless-style click reactions did not give any measurable products, perhaps because the wide, flat binding site wasn't able to act as much of a catalyst to bring the two compounds together. Making a library of triazoles via the copper-catalyzed route and testing those, though, gave several compounds with affinities between 20x and 50x greater than the starting structure, and with dramatically slower off-rates.
They did try to get rid of the nitro group, recognizing that it's only an invitation to trouble. But the few modifications they tried really lowered the affinity, which tells you that the nitro itself was probably an important component of the second-site binding. That, to me, is argument enough to consider not having those things in your screening collection to start with. It all depends on what you're hoping for - if you just want a ligand to use as a biophysical tool compound, then nitro on, if you so desire. But it's hard to stop there. If it's a good hit, people will want to put it into cells, into animals, into who knows what, and then the heartache will start. If you're thinking about these kinds of assays, you might well be better off not knowing about some functionality that has a very high chance of wasting your time later on. (More on this issue here, here, here, and here). Update: here's more on trying to get rid of nitro groups.
This work, though, is the sort of thing I could read about all day. I'm very interested in ways to produce potent compounds from weak binders, ways to attack difficult low-hit-rate targets, in situ compound formation, and fragment-based methods, so these papers push several of my buttons simultaneously. And who knows, maybe I'll have a chance to do something like this all day at some point. It looks like work well worth taking seriously.
Category: Analytical Chemistry | Chemical News | Drug Assays
August 16, 2013
Structural biology needs no introduction for people doing drug discovery. This wasn't always so. Drugs were discovered back in the days when people used to argue about whether those "receptor" thingies were real objects (as opposed to useful conceptual shorthand), and before anyone had any idea of what an enzyme's active site might look like. And even today, there are targets, and whole classes of targets, for which we can't get enough structural information to help us out much.
But when you can get it, structure can be a wonderful thing. X-ray crystallography of proteins, and protein-ligand complexes has revealed so much useful information that it's hard to know where to start. It's not the magic wand - you can't look at an empty binding site and just design something right at your desk that'll be a potent ligand right off the bat. And you can't look at a series of ligand-bound structures and say which one is the most potent, not in most situations, anyway. But you still learn things from X-ray structures that you could never have known otherwise.
It's not the only game in town, either. NMR structures are very useful, although the X-ray ones can be easier to get, especially in these days of automated synchrotron beamlines and powerful number-crunching. But what if your protein doesn't crystallize? And what if there are things happening in solution that you'd never pick up on from the crystallized form? You're not going to watch your protein rearrange into a new ligand-bound conformation with X-ray crystallography, that's for sure. No, even though NMR structures can be a pain to get, and have to be carefully interpreted, they'll also show you things you'd never have seen.
And there are more exotic methods. Earlier this summer, there was a startling report of a structure of the HIV surface proteins gp120 and gp41 obtained through cryogenic electron microscopy. This is a very important and very challenging field to work in. What you've got there is a membrane-bound protein-protein interaction, which is just the sort of thing that the other major structure-determination techniques can't handle well. At the same time, though, the number of important proteins involved in this sort of thing is almost beyond listing. Cryo-EM, since it observes the native proteins in their natural environment, without tags or stains, has a lot of potential, but it's been extremely hard to get the sort of resolution with it that's needed on such targets.
Joseph Sodroski's group at Harvard, longtime workers in this area, published their 6-angstrom-resolution structure of the protein complex in PNAS. But according to this new article in Science, the work has been an absolute lightning rod ever since it appeared. Many other structural biologists think that the paper is so flawed that it never should have seen print. No, I'm not exaggerating:
Several respected HIV/AIDS researchers are wowed by the work. But others—structural biologists in particular—assert that the paper is too good to be true and is more likely fantasy than fantastic. "That paper is complete rubbish," charges Richard Henderson, an electron microscopy pioneer at the MRC Laboratory of Molecular Biology in Cambridge, U.K. "It has no redeeming features whatsoever."
. . .Most of the structural biologists and HIV/AIDS researchers Science spoke with, including several reviewers, did not want to speak on the record because of their close relations with Sodroski or fear that they'd be seen as competitors griping—and some indeed are competitors. Two main criticisms emerged. Structural biologists are convinced that Sodroski's group, for technical reasons, could not have obtained a 6-Å resolution structure with the type of microscope they used. The second concern is even more disturbing: They solved the structure of a phantom molecule, not the trimer.
Cryo-EM is an art form. You have to freeze your samples in an aqueous system, but without making ice. The crystals of normal ice formation will do unsightly things to biological samples, on both the macro and micro levels, so you have to form "vitreous ice", a glassy amorphous form of frozen water, which is odd enough that until the 1980s many people considered it impossible. Once you've got your protein particles in this matrix, though, you can't just blast away at full power with your electron beam, because that will also tear things up. You have to take a huge number of runs at lower power, and analyze them through statistical techniques. The Sodroski HIV structure, for example, is the product of 670,000 single-particle images.
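As an aside, the reason for those enormous image counts is that averaging N noisy images improves the signal-to-noise ratio only as the square root of N (assuming independent noise, which is itself an idealization). A one-liner makes the point:

```python
# Signal-to-noise gain from averaging N independent noisy images scales
# as sqrt(N) - the standard statistical argument, idealized.
import math

n_images = 670_000  # the single-particle image count quoted above
gain = math.sqrt(n_images)
print(f"SNR gain vs. a single image: ~{gain:.0f}x")  # ~819x
```

So even two-thirds of a million frames only buys you a few hundredfold over one (unusably noisy) exposure, which is why every step of the averaging has to be done carefully.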
But its critics say that it's also the product of wishful thinking:
The essential problem, they contend, is that Sodroski and Mao "aligned" their trimers to lower-resolution images published before, aiming to refine what was known. This is a popular cryo-EM technique but requires convincing evidence that the particles are there in the first place and rigorous tests to ensure that any improvements are real and not the result of simply finding a spurious agreement with random noise. "They should have done lots of controls that they didn't do," (Sriram) Subramaniam asserts. In an oft-cited experiment that aligns 1000 computer-generated images of white noise to a picture of Albert Einstein sticking out his tongue, the resulting image still clearly shows the famous physicist. "You get a beautiful picture of Albert Einstein out of nothing," Henderson says. "That's exactly what Sodroski and Mao have done. They've taken a previously published structure and put atoms in and gone down into a hole." Sodroski and Mao declined to address specific criticisms about their studies.
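That Einstein-from-noise effect is easy to reproduce in miniature: align pure random noise to a template, average, and the template comes right back out. Here's a toy one-dimensional version (pure Python, and nothing to do with the actual study's data or methods):

```python
# Toy "Einstein from noise" demo: align pure random noise to a template
# by picking, for each noise vector, the cyclic shift that best matches
# the template. The average of the aligned noise then strongly resembles
# the template, even though no signal was ever present.
import math
import random

random.seed(0)
n = 64
template = [math.sin(4 * math.pi * i / n) for i in range(n)]  # the "picture"

def roll(x, s):
    """Cyclic shift of a list by s positions."""
    return x[-s:] + x[:-s] if s else x[:]

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

trials = 1000
total = [0.0] * n
for _ in range(trials):
    noise = [random.gauss(0, 1) for _ in range(n)]  # pure noise, no signal
    # choose the shift with maximum dot product against the template
    best = max(range(n),
               key=lambda s: sum(a * b for a, b in zip(roll(noise, s), template)))
    total = [t + v for t, v in zip(total, roll(noise, best))]

avg = [t / trials for t in total]
print(f"Correlation of averaged noise with template: {corr(avg, template):.2f}")
```

The correlation comes out near 1.0, which is exactly the trap: the alignment step manufactures agreement, and that's why the critics are demanding control experiments.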
Well, they decline to answer them in response to a news item in Science. They've indicated a willingness to take on all comers in the peer-reviewed literature, but otherwise, in print, they're doing the we-stand-by-our-results-no-comment thing. Sodroski himself, with his level of experience in the field, seems ready to defend this paper vigorously, but there seem to be plenty of others willing to attack. We'll have to see how this plays out in the coming months - I'll update as things develop.
Category: Analytical Chemistry | Biological News | In Silico | Infectious Diseases
August 8, 2013
Fragment-based screening comes up here fairly often (and if you're interested in the field, you should also have Practical Fragments on your reading list). One of the complaints both inside and outside the fragment world is that there are a lot of primary hits that fall into flat/aromatic chemical space (I know that those two don't overlap perfectly, but you know the sort of things I mean). The early fragment libraries were heavy in that sort of chemical matter, and the sort of collections you can buy still tend to be.
So people have talked about bringing in natural-product-like structures, and diversity-oriented-synthesis structures and other chemistries that make more three-dimensional systems. The commercial suppliers have been catching up with this trend, too, although some definitions of "three-dimensional" may not match yours. (Does a biphenyl derivative count, or is that what you're trying to get away from?)
The UK-based 3D Fragment Consortium has a paper out now in Drug Discovery Today that brings together a lot of references to work in this field. Even if you don't do fragment-based work, I think you'll find it interesting, because many of the same issues apply to larger molecules as well. How much return do you get for putting chiral centers into your molecules, on average? What about molecules with lots of saturated atoms that are still rather squashed and shapeless, versus ones full of aromatic carbons that carve out 3D space surprisingly well? Do different collections of these various molecular types really have differences in screening hit rates, and do these vary by the target class you're screening against? How much are properties (solubility, in particular) shifting these numbers around? And so on.
The consortium's site is worth checking out as well for more on their activities. One interesting bit of information is that the teams ended up crossing off over 90% of the commercially available fragments due to flat structures, which sounds about right. And that takes them where you'd expect it to:
We have concluded that bespoke synthesis, rather than expansion through acquisition of currently available commercial fragment-sized compounds is the most appropriate way to develop the library to attain the desired profile. . .The need to synthesise novel molecules that expand biologically relevant chemical space demonstrates the significant role that academic synthetic chemistry can have in facilitating target evaluation and generating the most appropriate start points for drug discovery programs. Several groups are devising new and innovative methodologies (i.e. methyl activation, cascade reactions and enzymatic functionalisation) and techniques (e.g. flow and photochemistry) that can be harnessed to facilitate expansion of drug discovery-relevant chemical space.
And as long as they stay away from the frequent hitters/PAINS, they should end up with a good collection. I look forward to future publications from the group to see how things work out!
Category: Analytical Chemistry | Chemical News | Drug Assays | In Silico
August 7, 2013
A reader sends this new literature citation along, from Organometallics. He directed my attention to the Supplementary Information file, page 12. And what do we find there?
. . .Solvent was then removed to leave a yellow residue in the vial, the remaining clear, yellow solution was concentrated to a volume of about 1ml, and diethyl ether was added in a dropwise manner to the stirred solution to precipitate a yellow solid. The vial was centrifuged so the supernatant solvent could be decanted off by Pasteur pipette. The yellow solid was washed twice more with ether and the dried completely under high vacuum to give 99mg (93% yield) of product.
Emma, please insert NMR data here! where are they? and for this compound, just make up an elemental analysis...
And don't forget to proofread the manuscript, either, while you're at it. Oops.
Update: I see that Chembark is on this one, and has gone as far as contacting the corresponding author, whose day has gotten quite a bit longer. . .
Category: Analytical Chemistry | The Scientific Literature
July 16, 2013
Organic chemists have been taking NMR spectra for quite a while now. Routine use came on in the 1960s, and higher-field instruments went from exotic big-ticket items in the 1970s to ordinary equipment in the 1980s. But NMR can tell you more about your sample than you wanted to know (good analytical techniques are annoying that way). So what to do when you have those little peaks showing up where no peaks should be?
The correct answer is "Live with 'em or clean up your sample", but wouldn't it be so much easier and faster to just clean up the spectrum? After all, that's all that most people are ever going to see - right? This little line of thought has occurred to countless chemists over the years. Back In The Day, the technology needed to remove solvent peaks, evidence of isomers, and other pesky impurities was little more than a bottle of white-out and a pen (to redraw the lovely flat baseline once the extra peaks were daubed away). Making a photocopy of the altered spectrum gave you publication-ready purity in one easy step.
NMR spectra are probably the most-doctored of the bunch, but LC/MS and HPLC traces are very capable of showing you peaks you didn't want to see, either. These days there are all sorts of digital means to accomplish this deception, although I've no doubt that the white-out bottle is still deployed. In case anyone had any doubt about that, last month Amos Smith, well-known synthetic organic chemist and editor of Organic Letters, had this to say in a special editorial comment in the journal:
Recently, with the addition of a Data Analyst to our staff, Organic Letters has begun checking the submitted Supporting Information more closely. As a result of this increased scrutiny, we have discovered several instances where reported spectra had been edited to remove evidence of impurities.
Such acts of data manipulation are unacceptable. Even if the experimental yields and conclusions of a study are not affected, ANY manipulation of research data casts doubts on the overall integrity and validity of the work reported.
That it does. He went on to serve notice on authors that the journal will be checking, and will be enforcing and penalizing. And you can tell that Smith and the Org Lett staff have followed up on some of these already, because they've already had a chance to hear the default excuse:
In some of the cases that we have investigated further, the Corresponding Author asserted that a student had edited the spectra without the Corresponding Author’s knowledge. This is not an acceptable excuse! The Corresponding Author (who is typically also the research supervisor of the work performed) is ultimately responsible for warranting the integrity of the content of the submitted manuscript. . .
As the editorial goes on to say, and quite rightly, if a student did indeed alter the spectrum before showing it to the boss, it's very likely because the boss was running a group whose unspoken rule was that only perfection was acceptable. And that's an invitation to fraud, large and small. I'm glad to see statements like Smith's - the only ways to keep down this sort of data manipulation are to make the rewards for it small, increase the chances of it being found out, and make the consequences for it real.
As for those, the editorial speaks only of "significant penalties". But I have some ideas for those that might help people think twice about the data clean-up process. How about a special correction in the journal, showing the altered spectra, with red circles around the parts that had been flattened out? And a copy of the same to the relevant granting agencies and department heads? That might help get the message out, you think?
As an aside, I wanted to mention that I have seen someone stand right up and take responsibility for extra peaks in an NMR. Sort of. I saw a person once presenting what was supposed to be the final product's spectrum, only there were several other singlet peaks scattered around. "What are those?" came the inevitable question. "Water" was the answer. "Umm. . .how many water peaks, exactly?" "Oh, this one is water in solution. And this one is water complexed with the compound. And this one is water adsorbed to the inside of the NMR tube. And this one is water adsorbed to the outside of the. . ." It took a little while for order to be restored at that point. . .
Category: Analytical Chemistry | The Dark Side | The Scientific Literature
June 25, 2013
Once in a while, you see people who've gone to the trouble of synthesizing a natural product, only to find that its structure had been incorrectly assigned. (Back in the days when structure elucidation was much harder, R. B. Woodward had this on his list of reasons to do total synthesis, although it wasn't number one).
Now there might be a computational method that could flag incorrect structures earlier. This paper describes a carbon-13-NMR-based neural-network program, from a training set of 200 natural products, that seems to do a good job of flagging inconsistencies. It won't tell you that the assigned structure is right (there's probably a list of plausible fits for any given NMR), but it will speak up when something appears to be wrong.
And that's the mode I see this being used in, actually. I suspect that some groups will be motivated to go after the misassigned compounds synthetically, if they can come up with a believable alternative, in order to revise the structure. I'm not sure what happens if you put one of those South Pacific marine toxins into it, the ones that practically need a centerfold to publish their structures in a journal, but this looks like it could be a useful tool.
Category: Analytical Chemistry | Natural Products
June 17, 2013
That's my take-away from this paper, which takes a deep look at a reconstituted beta-adrenergic receptor via fluorine NMR. There are at least four distinct states (two inactive ones, the active one, and an intermediate), and the relationships between them are different with every type of ligand that comes in. Even the ones that look similar turn out to have very different thermodynamics on their way to the active state. If you're into receptor signaling, you'll want to read this one closely - and if you're not, or not up for it, just take away the idea that the landscape is not a simple one. As you'd probably already guessed.
Note: this is a multi-institution list of authors, but it did catch my eye that David Shaw of Wall Street's D. E. Shaw does make an appearance. Good to see him keeping his hand in!
Category: Analytical Chemistry | Biological News | In Silico
June 13, 2013
Single-molecule techniques are really the way to go if you're trying to understand many types of biomolecules. But they're really difficult to realize in practice (a complaint that should be kept in context, given that many of these experiments would have sounded like science fiction not all that long ago). Here's an example of just that sort of thing: watching DNA polymerase actually, well, polymerizing DNA, one base at a time.
The authors, a mixed chemistry/physics team at UC Irvine, managed to attach the business end (the Klenow fragment) of DNA Polymerase I to a carbon nanotube (a mutated Cys residue and a maleimide on the nanotube did the trick). This gives you the chance to use the carbon nanotube as a field effect transistor, with changes in the conformation of the attached protein changing the observed current. It's stuff like this, I should add, that brings home to me the fact that it really is 2013, the relative scarcity of flying cars notwithstanding.
The authors had previously used this method to study attached lysozyme molecules (PDF, free author reprint access). That second link is a good example of the sort of careful brush-clearing work that has to be done with a new system like this: how much does altering that single amino acid change the structure and function of the enzyme you're studying? How do you pick which one to mutate? Does being up against the side of a carbon nanotube change things, and how much? It's potentially a real advantage that this technique doesn't require a big fluorescent label stuck to anything, but you have to make sure that attaching your test molecule to a carbon nanotube isn't even worse.
It turns out, reasonably enough, that picking the site of attachment is very important. You want something that'll respond conformationally to the actions of the enzyme, moving charged residues around close to the nanotube, but (at the same time) it can't be so crucial and wide-ranging that the activity of the system gets killed off by having these things so close, either. In the DNA polymerase study, the enzyme was about 33% less active than wild type.
And the authors do see current variations that correlate with what should be opening and closing of the enzyme as it adds nucleotides to the growing chain. Comparing the length of the generated DNA with the FET current, it appears that the enzyme incorporates a new base at least 99.8% of the time it tries to, and the mean time for this to happen is about 0.3 milliseconds. Interestingly, A-T pair formation takes a consistently longer time than C-G does, with the rate-limiting step occurring during the open conformation of the enzyme in each case.
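Just to put those single-molecule numbers in context, here's a quick back-of-the-envelope sketch (my own arithmetic, not a calculation from the paper) of what a ~0.3 ms mean incorporation time and a ≥99.8% success rate imply:

```python
# Rough numbers as quoted in the post; the arithmetic below is
# an illustrative sketch, not taken from the paper itself.
mean_time_s = 0.3e-3     # mean time per incorporation attempt, in seconds
success_rate = 0.998     # at least 99.8% of attempts add a base

bases_per_second = success_rate / mean_time_s       # ~3300 bases/s
failures_per_thousand = (1 - success_rate) * 1000   # at most ~2 per 1000

print(f"~{bases_per_second:.0f} bases incorporated per second")
print(f"<= {failures_per_thousand:.0f} failed attempts per 1000 tries")
```

A few thousand bases per second is in line with what's known about the Klenow fragment's bulk kinetics, which is reassuring for a single-molecule readout.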
I look forward to more applications of this idea. There's a lot about enzymes that we don't know, and these sorts of experiments are the only way we're going to find out. At present, this technique looks to be a lot of work, but you can see it firming up before your eyes. It would be quite interesting to pick an enzyme that has several classes of inhibitor and watch what happens on this scale.
It's too bad that Arthur Kornberg, the discoverer of DNA Pol I, didn't quite live to see such an interrogation of the enzyme; he would have enjoyed it very much, I think. As an aside, that last link, with its quotes from the reviewers of the original manuscript, will cheer up anyone who's recently had what they thought was a good paper rejected by some journal. Kornberg's two papers only barely made it into JBC, but one year after a referee said "It is very doubtful that the authors are entitled to speak of the enzymatic synthesis of DNA", Kornberg was awarded the Nobel for just that.
Category: Analytical Chemistry | Biological News | The Scientific Literature
May 31, 2013
It's molecular imaging week! See Arr Oh and others have sent along this paper from Science, a really wonderful example of atomic-level work. (For those without journal access, Wired and PhysOrg have good summaries).
As that image shows, what this team has done is take a starting (poly) phenylacetylene compound and let it cyclize to a variety of products. And they can distinguish the resulting frameworks by direct imaging with an atomic force microscope (using a carbon monoxide molecule as the tip, as in this work), in what is surely the most dramatic example yet of this technique's application to small-molecule structure determination. (The first use I know of, from 2010, is here). The two main products are shown, but they pick up several others, including exotica like stable diradicals (compound 10 in the paper).
There are some important things to keep in mind here. For one, the only way to get a decent structure by this technique is if your molecules can lie flat. These are all sitting on the face of a silver crystal, but if a structure starts poking up, the contrast in the AFM data can be very hard to interpret. The authors of this study had this happen with their compound 9, which curls up from the surface and whose structure is unclear. Another thing to note is that the product distribution is surely altered by the AFM conditions: a molecule in solution will probably find different things to do with itself than one stuck face-on to a metal surface.
But these considerations aside, I find this to be a remarkable piece of work. I hope that some enterprising nanotechnologists will eventually make some sort of array version of the AFM, with multiple tips splayed out from each other, with each CO molecule feeding to a different channel. Such an AFM "hand" might be able to deconvolute more three-dimensional structures (and perhaps sense chirality directly?) Easy for me to propose - I don't have to get it to work!
Category: Analytical Chemistry | Chemical News
May 22, 2013
A conversation the other day about 2-D NMR brought this thought to mind. What do you think are the most underused analytical methods in organic chemistry? Maybe I should qualify that, to the most underused (but potentially useful) ones.
I know, for example, that hardly anyone takes IR spectra any more. I've taken maybe one or two in the last ten years, and that was to confirm the presence of things like alkynes or azides, which show up immediately and oddly in the infrared. Otherwise, IR has just been overtaken by other methods for many of its applications in organic chemistry, and it's no surprise that it's fallen off so much since its glory days. But I think that carbon-13 NMR is probably underused, as are a lot of 2D NMR techniques. Any other nominations?
Category: Analytical Chemistry | Life in the Drug Labs
April 23, 2013
Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.
Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:
Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.
And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for them - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).
One of the big mistakes that people make is in thinking that "technology" is a single category of transferrable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.
And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.
Note: it's been pointed out in the comments that the cost-per-base of DNA sequencing has been dropping at an even faster rate than Moore's Law. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .
Category: Analytical Chemistry | Biological News | Drug Industry History
April 9, 2013
You know, mass spectrometry has been gradually taking over the world. Well, maybe not your world, but mine (and that of a lot of biopharma/biophysical researchers). There are just so many things that you can do with modern instrumentation that the assays and techniques just keep on coming.
This paper from a recent Angewandte Chemie is a good example. They're looking at post-translational modifications of proteins, which has always been a big field, and shows no signs of getting any smaller. The specific example here is SIRT1, an old friend to readers of this site, and the MALDI-based assay reported is a nice alternative to the fluorescence-based assays in that area, which have (notoriously) been shown to cause artifacts. The mass spec can directly detect deacetylation of a 16-mer histone H4 peptide - no labels needed.
The authors then screened a library of about 5500 natural product compounds (5 compounds per well in 384-well plates). As they showed, though, the hit rates observed would support higher pool numbers, and they successfully tested mixtures of up to 30 compounds at a time. Several structures were found to be micromolar inhibitors of the deacetylation reaction. None of these look very interesting or important per se, although some of them may find use as tool compounds. But the levels of detection and the throughput make me think that this might be a very useful technique for screening a fragment library.
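As a quick illustration of why pooling matters for throughput (my own back-of-the-envelope sketch, not from the paper), here's how the well and plate counts fall as the pool size goes up for a library of that size:

```python
import math

def wells_and_plates(n_compounds, pool_size, wells_per_plate=384):
    """Wells and 384-well plates needed to screen a library in pools."""
    wells = math.ceil(n_compounds / pool_size)
    plates = math.ceil(wells / wells_per_plate)
    return wells, plates

# ~5,500 natural products, at the pool sizes mentioned in the post
for pool in (1, 5, 30):
    wells, plates = wells_and_plates(5500, pool)
    print(f"pool of {pool:2d}: {wells:4d} wells, {plates:2d} plate(s)")
```

Going from singles to pools of 30 takes the screen from fifteen plates down to one, which is the whole argument for pushing the pool size as high as the hit rate allows.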
Interestingly, they were also able to run the assay in the other direction, looking at acetylation of the histone protein, and discovered a new inhibitor of that process as well. These results prompted the authors to speculate that their assay conditions would be useful for a whole range of protein-modifying targets, and they may well be right.
So if this is such a good idea, why hasn't it been done before? The answer is that it has, especially if you go beyond the "open literature" and into the patents. Here, for example, is a 2009 application from Sirtris (who else?) on deacetylation/acetylation mass spec assays. And here's a paper (PDF) from 2009 (also in Angewandte) that used shorter peptides (6-mers) to profile enzymes of this type as well. There are many other assays of this sort that have been reported, or worked out inside various biopharma companies for their own uses. But this latest paper serves to show people (or remind them) that you can do such things on realistic substrates, with good reproducibility and throughput, and without having to think for a moment about coupled assays, scintillation plates, fluorescence windows, tagged proteins, and all the other typical details. Other things being equal, the more label-free your assay conditions, the better off you are. And other things are getting closer to equal all the time.
Category: Analytical Chemistry | Drug Assays
March 28, 2013
There's an absolutely startling new paper out from Makoto Fujita and co-workers at the University of Tokyo. I've written a number of times here about X-ray crystallography, which can be the most powerful tool available for solving the structures of both large and small molecules - if you can get a crystal, and if that crystal is good enough. Advances in X-ray source brightness, in detectors, and in sheer computational power have all advanced the field far beyond what Sir Lawrence Bragg could have imagined. But you still need a crystal.
Maybe not any more, you don't. This latest paper demonstrates that if you soak a bit of crystalline porous "molecular sponge" in a solution of some small molecule, you can get the X-ray structure of the whole complex, small molecules and all. If you're not a chemist you might not feel the full effect of that statement, but so far, every chemist I've tried it out on has reacted with raised eyebrows, disbelief, and sometimes a four-letter exclamation for good measure. The idea that you can turn around and get a solid X-ray structure of a compound after having merely soaked it into a tiny piece of crystalline stuff is going to take some getting used to, but I think we'll manage.
The crystalline stuff in question turns out to be two complexes with tris(4-pyridyl)triazine and either cobalt isothiocyanate or zinc iodide. These form large cage-like structures in the solid state, with rather different forms, but each of them seems to be able to pick up small molecules and hold them in a repeating, defined orientation. Shown is a lattice of santonin molecules in the molecular cage, to give you the idea.
Just as impressive is the scale that this technique works on. They demonstrate that by solving the structure of a marine natural product, miyakosyne A, using a 5-microgram sample. I might add that its structure certainly does not look like something that is likely to crystallize easily on its own, and indeed, no crystal is known. By measuring the amount of absorbed material in other examples and extrapolating down to their X-ray sample size, the authors estimate that they can get a structure on as little as 80 nanograms of actual compound. Holy crap.
Not content with this, the paper goes on to show how this method can be applied to give a completely new form of analysis: LC/SCD. Yes, that means what it says - they show that you can run an HPLC separation on a mixture, dip bits of the molecular sponge in the fractions, and get (if you are so inclined) X-ray structures of everything that comes off your column. Now, this is not going to be a walk-up technique any time soon. You still need a fine source of X-rays, plenty of computational resources, and so on. But just the idea that this is possible makes me feel as if I'm reading science fiction. If this is as robust as it looks, the entire field of natural product structure determination has just ended.
Here's a comment in the same issue of Nature from Pierre Stallforth and Jon Clardy, whose opinions on X-ray crystallography are taken seriously by anyone who knows anything about the field. This new work is described as "breathtakingly simple", and furthermore, that "One can even imagine that, in the near future, researchers will not bother trying to crystallize new molecules". Indeed one can.
I would guess that there are many more refinements to be made in what sorts of host frameworks are used - different ones are likely to be effective for different classes of compounds. A number of very interesting extensions to this idea are occurring to me right now, and I'm sure that'll be true for a lot of the people who will read it. But for now, what's in this paper is plenty. Nobel prizes have been given for less. Sir Lawrence Bragg, were he with us, would stand up and lead the applause himself.
Update: as those of you reading up on this have discovered by now, the literature on metal-organic frameworks (MOFs) is large and growing. But I wanted to highlight this recent report of one with pores large enough for actual proteins to enter. Will they?
And here's more on the story from Nature News.
Category: Analytical Chemistry
March 7, 2013
Every so often I've mentioned some of the work being done with atomic force microscopy (AFM), and how it might apply to medicinal chemistry. It's been used to confirm a natural product structural assignment, and then there are images like these. Now comes a report of probing a binding site with the technique. The experimental setup is shown at left. The group (a mixed team from Linz, Vienna, and Berlin) reconstituted functional uncoupling protein 1 (UCP1) in a lipid bilayer on a mica surface. Then they ran two different kinds of AFM tips across them - one with an ATP molecule attached, and another with an anti-UCP1 antibody, and with different tether links on them as well.
What they found was that ATP seems to be able to bind to either side of the protein (some of the UCPs in the bilayer were upside down). There also appears to be only one nucleotide binding site per UCP (in accordance with the sequence). That site is about 1.27 nm down into the central pore, which could well be a particular residue (R182) that is thought to protrude into the pore space. Interestingly, although ATP can bind while coming in from either direction, it has to go in deeper from one side than the other (which shows up in the measurements with different tether lengths). And that leads to the hypothesis that the deeper-binding mode sets off conformational changes in the protein that the shallow-binding mode doesn't - which could explain how the protein is able to function while its cytosolic side is being exposed to high concentrations of ATP.
For some reason, these sorts of direct physical measurements weird me out more than spectroscopic studies. Shining light or X-rays into something (or putting it into a magnetic field) just seems more removed. But a single molecule on an AFM tip seems, when a person's hand is on the dial, to somehow be the equivalent of a long, thin stick that we're using to poke the atomic-level structure. What can I say; a vivid imagination is no particular handicap in this business!
Category: Analytical Chemistry | Biological News
February 4, 2013
Two different research teams have reported a completely different way to run NMR experiments, one that looks like it could take the resolution down to cellular (or even large protein) levels. These two papers in Science have the details (and there's an overall commentary here, and more at Nature News).
This is not, as you've probably guessed, just a matter of shrinking down the probe and its detector coil. Our usual method of running NMR spectra doesn't scale down that far; there are severe signal/noise problems, among other things. This new method uses crystal defects just under the surface of diamond crystals - if a nitrogen atom gets in there instead of a carbon, you're left with a negatively charged center with a very useful spin state. It's capable of extraordinarily sensitive detection of magnetic fields; you have a single-atom magnetometer.
And that's been used to detect NMR signals in volumes of a few cubic nanometers. For comparison, erythrocytes (among the smallest of human cells) have a volume of around 100 cubic micrometers, while a 50 kDa protein has a minimal radius of 2.4 nm, giving it a volume of 58 cubic nanometers at the absolute minimum. This is all being done at room temperature, I might add. If this technique can be made more robust, we are potentially looking at MRI imaging of individual proteins, and surely at a detailed intracellular level, which is a bizarre thought. And there's room for improvement:
By implementing different advanced noise suppression techniques, Mamin et al. and Staudacher et al. have succeeded in using near-surface NVs to detect small volumes of proton spins outside of the diamond crystal. Both authors conclude that their observed signals are consistent with a detection volume on the order of (5 nm)³ or less. This sensitivity is comparable to that of the cryogenic MRFM technique and should be adequate for detecting large individual protein molecules. Both groups also project much smaller detection volumes in the future by using NVs closer to the diamond surface. Staudacher et al. expect to improve sensitivity by using the NV to spin-polarize the nuclei. Mamin et al. project that sensitivity may eventually approach the level of single protons, provided that the NV coherence time can be kept long enough.
I love this sort of thing, and I don't mind admitting it. Imagine detecting a ligand binding event by NMR on an individual protein molecule, or following the distribution of a fluorinated drug candidate inside a single cell. I can't wait to see it in action.
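Incidentally, the volume comparison earlier in the post is easy to check with a quick sphere calculation (my own arithmetic, just for illustration):

```python
import math

def sphere_volume_nm3(radius_nm):
    """Volume of a sphere, in cubic nanometers, from its radius in nm."""
    return (4.0 / 3.0) * math.pi * radius_nm ** 3

protein_nm3 = sphere_volume_nm3(2.4)   # minimal-radius 50 kDa protein
erythrocyte_nm3 = 100 * (1000 ** 3)    # 100 cubic micrometers, in nm^3

print(f"50 kDa protein: ~{protein_nm3:.0f} nm^3")   # ~58 nm^3
print(f"erythrocyte / protein volume ratio: ~{erythrocyte_nm3 / protein_nm3:.1e}")
```

So even the smallest human cells are roughly a billion-fold larger than the detection volumes these diamond-defect magnetometers are reaching, which is what makes the single-protein prospect plausible.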
Category: Analytical Chemistry
December 3, 2012
Here's another next-generation X-ray crystal paper, this time using a free electron laser X-ray source. That's powerful enough to cause very fast and significant radiation damage to any crystals you put in its way, so the team used a flow system, with a stream of small crystals of T. brucei cathepsin B enzyme being exposed in random orientations to very short pulses of extremely intense X-rays. (Here's an earlier paper where the same team used this technique to obtain a structure of the Photosystem I complex). Note that this was done at room temperature, instead of cryogenically. The other key feature is that the crystals were actually those formed inside Sf9 insect cells via baculovirus overexpression, not purified protein that was then crystallized in vitro.
Nearly 4 million of these snapshots were obtained, with almost 300,000 of them showing diffraction. 60% of these were used to refine the structure, which came out at 2.1 Angstroms resolution and clearly showed many useful features of the enzyme. (Like others in its class, it starts out inhibited by a propeptide, which is later cleaved - that's one of the things that makes it a challenge to get an X-ray structure by traditional means).
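For a sense of the attrition involved in this kind of serial crystallography, here's the arithmetic on those quoted numbers (my own sketch, just spelling out the fractions in the post):

```python
# Approximate figures as quoted in the post
snapshots = 4_000_000      # ~4 million detector frames collected
diffracting = 300_000      # frames that showed diffraction
used_fraction = 0.60       # fraction of diffracting frames used in refinement

hit_rate = diffracting / snapshots
patterns_used = round(diffracting * used_fraction)

print(f"diffraction hit rate: {hit_rate:.1%}")            # 7.5%
print(f"patterns used in refinement: {patterns_used:,}")  # 180,000
```

A 7.5% hit rate sounds low until you remember that the flow system delivers crystals continuously and the free electron laser destroys each one anyway - the whole approach is built around collecting in bulk and throwing most of it away.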
I'm always happy to see bizarre new techniques used to generate X-ray structures. Although I'm well aware of their limitations, such structures are still tremendous opportunities to learn about protein functions and how our small molecules interact with them. I wrote about the instrument used in these papers here, before it came on line, and it's good to see data coming out of it.
Category: Analytical Chemistry | Chemical News
November 28, 2012
Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .
Category: Analytical Chemistry | Chemical News | Drug Development
November 8, 2012
We're getting closer to real-time X-ray structures of protein function, and I think I speak for a lot of chemists and biologists when I say that this has been a longstanding dream. X-ray structures, when they work well, can give you atomic-level structural data, but they've been limited to static time scales. In the old, old days, structures of small molecules were a lot of work, and a structure of a protein took years of hard labor and was obvious Nobel Prize material. As time went on, brighter X-ray sources and much better detectors sped things up (since a lot of the X-rays deflected from a large compound are of very low intensity), and computing power came along to crunch through the piles of data thus generated. These days, X-ray structures are generated for systems of huge complexity and importance. Working at that level is no stroll through the garden, but more tractable protein structures are generated almost routinely (although growing good protein crystals is still something of a dark art, and is accomplished through what can accurately be called enlightened brute force).
But even with synchrotron X-ray sources blasting your crystals, you're still getting a static picture. And proteins are not static objects; the whole point of them is how they move (and for enzymes, how they get other molecules to move in their active sites). I've heard Barry Sharpless quoted to the effect that understanding an enzyme by studying its X-ray structures is like trying to get to know a person by visiting their corpse. I haven't heard him say that (although it sounds like him!), but whoever said it was correct.
Comes now this paper in PNAS, a multinational effort with the latest on the attempts to change that situation. The team is looking at photoactive yellow protein (PYP), a blue-light receptor protein from a purple sulfur bacterium. Those guys vigorously swim away from blue light, which they find harmful, and this seems to be the receptor that alerts them to its presence. And the inner workings of the protein are known, to some extent. There's a p-coumaric acid in there, bound to a Cys residue, and when blue light hits it, the double bond switches from trans to cis. The resulting conformational change is the signaling event.
But while knowing things at that level is fine (and took no small amount of work), there are still a lot of questions left unanswered. The actual isomerization is a single-photon event and happens in a picosecond or two. But the protein changes that happen after that, well, those are a mess. A lot of work has gone into trying to unravel what moves where, and when, and how that translates into a cellular signal. And although this is a mere purple sulfur bacterium (What's so mere? They've been on this planet a lot longer than we have), these questions are exactly the ones that get asked about protein conformational signaling all through living systems. The rods and cones in your eyes are doing something very similar as you read this blog post, as are the neurotransmitter receptors in your optic nerves, and so on.
This technique, variations of which have been coming on for some years now, uses multiple wavelengths of X-rays simultaneously, and scans them across large protein crystals. Adjusting the timing of the X-ray pulse compared to the light pulse that sets off the protein motion gives you time-resolved spectra - that is, if you have extremely good equipment, world-class technique, and vast amounts of patience. (For one thing, this has to be done over and over again from many different angles).
And here's what's happening: first off, the cis structure is quite weird. The carbonyl is 90 degrees out of the plane, making (among other things) a very transient hydrogen bond with a backbone nitrogen. Several dihedral angles have to be distorted to accommodate this, and it's a testament to the weirdness of protein active sites that it exists at all. It then twangs back to a planar conformation, but at the cost of breaking another hydrogen bond back at the phenolate end of things. That leaves another kind of strain in the system, which is relieved by a shift to yet another intermediate structure through a dihedral rotation, and that one in turn goes through a truly messy transition to a blue-shifted intermediate. That involves four hydrogen bonds and a 180-degree rotation in a dihedral angle, and seems to be the weak link in the whole process - about half the transitions fail and flop back to the ground state at that point. That also lets a crucial water molecule into the mix, which sets up the transition to the actual signaling state of the protein.
If you want more details, the paper is open-access, and includes movie files of these transitions and much more detail on what's going on. What we're seeing is light energy being converted (and channeled) into structural strain energy. I find this sort of thing fascinating, and I hope that the technique can be extended in the way the authors describe:
The time-resolved methodology developed for this study of PYP is, in principle, applicable to any other crystallizable protein whose function can be directly or indirectly triggered with a pulse of light. Indeed, it may prove possible to extend this capability to the study of enzymes, and literally watch an enzyme as it functions in real time with near-atomic spatial resolution. By capturing the structure and temporal evolution of key reaction intermediates, picosecond time-resolved Laue crystallography can provide an unprecedented view into the relations between protein structure, dynamics, and function. Such detailed information is crucial to properly assess the validity of theoretical and computational approaches in biophysics. By combining incisive experiments and theory, we move closer to resolving reaction pathways that are at the heart of biological functions.
Speed the day. That's the sort of thing we chemists need to really understand what's going on at the molecular level, and to start making our own enzymes to do things that Nature never dreamed of.
Category: Analytical Chemistry | Biological News | Chemical Biology | Chemical News
August 7, 2012
Courtesy of a reader in the UK, here's an ad from GlaxoSmithKline that I don't think has been seen much on this side of the Atlantic. I hadn't realized that they were involved in the drug testing for the London games; it's interesting that their public relations folks feel that it's worth highlighting. They're almost certainly right - I think one of the major objections people have when they hear of a case of athletic doping is a violation of the spirit of fair play.
But one can certainly see the hands of the advertising people at work. The naphthyl rings for the double-O of "blood" are a nice touch, but the rest of the "chemistry" is complete nonsense. Update: it's such complete nonsense that they have the double bonds in the naphthyl banging into each other, which I hadn't even noticed at first. Is it still a "Texas Carbon" when it's from London? In fact, it's so far off that it took me a minute of looking at the image to realize that the reason things were written so oddly was that the words were supposed to be more parts of a chemical formula. It's that wrong - the chemical equivalent of one of those meaningless Oriental language tattoos.
But as in the case of the tattoos, it probably gets its message across to people who've never been exposed to any of the actual symbols and syntax. I'd be interested to know if this typography immediately says "Chemistry!" to people who don't know any. I don't have many good opportunities to test that, though - everyone around me during the day knows the lingo!
Category: Analytical Chemistry | General Scientific News
June 27, 2012
A lot of natural product structures have been misassigned over the years. In the old days, it was a wonder when you were able to assign a complex one at all. Structure determination, pre-NMR, could be an intellectual challenge at the highest level, something like trying to reconstruct a position on a chess board in the dark, based on acrostic clues in a language you don't speak. The advent of modern spectroscopy turned on the lights, which is definitely a good thing, but many people who'd made their careers under the old system missed the thrill of the old hunt when it was gone.
But even now, it's possible to get structures wrong - even with high-field 2-D NMR, even with X-ray crystallography. Natural products can be startlingly weird by the standards of human chemistry, and I still have a lot of sympathy for anyone who's figuring them out. My sympathy goes only so far, though.
Specifically, this case. I have to agree with the BRSM Blog, which says: "I have to say that I think I could have done a better job myself. Drunk." Think that's harsh? Check out the structures. The proposed structure had two naphthalenes, with two methoxys and four phenols. But the real natural product, as it turns out, has one methoxy and one phenol. And no naphthyls. And four flipping bromine atoms. Why the vengeful spirit of R. B. Woodward hasn't appeared, shooting lightning bolts and breaking Scotch bottles over people's heads, I just can't figure.
Category: Analytical Chemistry | Chemical News | Natural Products
February 2, 2012
Fluorine NMR is underused in chemistry. Well, then again, maybe it's not, but it's one of those things that just seems like it should have more uses than it does. (Here's a recent book on the subject). Fluorine is a great NMR nucleus - all the F in the world is the same isotope, unless you're right next to a PET scanning facility - and different compounds show up over a very wide range of chemical shifts. You've got that going for you, coupling information, NOE, basically all your friends from proton NMR.
There's a pretty recent paper showing a good use of all these qualities (blogged about here at Practical Fragments as well). A group at Amgen reports on their work using fluorine NMR as a fragment-screening tool. They can take mixtures of 10 or 12 compounds at a time (because of all those different chemical shifts) and run the spectra with and without a target protein in the vial. If a fragment binds, its F peak broadens out (you can even get binding constants if you run at a few different concentrations). A simple overlay of the two spectra tells you immediately if you have hits. You don't need to have any special form of the protein, and you don't even need to run in deuterated solvents, since you're just ignoring protons altogether.
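To sketch how those binding constants might fall out of the line broadening, here's a toy calculation under a simple two-state fast-exchange model, where the observed linewidth is just the population-weighted average of the free and bound linewidths. Every number in it is made up for illustration - this is not Amgen's data or their exact analysis.

```python
# Toy Kd estimate from ligand-observed 19F line broadening, assuming
# two-state fast exchange. All linewidths and concentrations below are
# invented for illustration.

def fraction_bound(lw_obs, lw_free, lw_bound):
    """Fraction of ligand in the bound state under fast exchange."""
    return (lw_obs - lw_free) / (lw_bound - lw_free)

def kd_from_linewidth(lw_obs, lw_free, lw_bound, ligand_total, protein_total):
    """Kd = [P_free][L_free] / [PL] for a 1:1 complex (concentrations in uM)."""
    fb = fraction_bound(lw_obs, lw_free, lw_bound)
    pl = fb * ligand_total              # complex concentration
    l_free = ligand_total - pl
    p_free = protein_total - pl
    return p_free * l_free / pl

# Hypothetical numbers: 50 uM fragment, 10 uM protein, free linewidth
# 15 Hz, fully-bound linewidth 215 Hz, observed linewidth 35 Hz.
kd = kd_from_linewidth(lw_obs=35.0, lw_free=15.0, lw_bound=215.0,
                       ligand_total=50.0, protein_total=10.0)
print(f"estimated Kd = {kd:.0f} uM")
```

Running it at a few different ligand concentrations and checking that the fitted Kd agrees is the real-world sanity check; a single point like this is just the idea in miniature.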
Interestingly, when they go on to try other assay techniques as follow-up, they find that the fluorines themselves aren't always a key part of the binding. Sometimes switching to the non-fluorinated version of the fragment gives you a better compound; sometimes it doesn't. The binding constants you get from the NMR, though, do compare very well to the ones from other assays.
The part I found most interesting was the intra-ligand NOE example. (That's also something that's been done in proton NMR, although it's not easy). They show a case where 19F ligands do get close enough to show the effect, and that a linked version of the two fragments does, as you'd hope, make a much more potent compound. That's the sort of thing that fragment people are always wanting to know - what fits next door to my hit? Can they be linked together? Fragment linking has its ups and downs, going back to the Abbott SAR-by-NMR days. That was a technique that never really panned out, as far as can be seen, but this is at least an experimentally easy way to give it a shot. (Of course, the chances of the fluorines on your ligands actually being pointed at each other is probably small, so that does cancel things out a bit).
Overall, it's a fun paper to read - well, allowing for my geeky interests, it is - and perhaps it'll lead a few more people to think of things that could be done with fluorine NMR in general. It's just sitting there, waiting to be used. . .
Category: Analytical Chemistry | Drug Assays
December 14, 2011
Here's a very nice poster-style presentation of proton NMR and spectral interpretation, courtesy of Jon Chui. I wish I'd had something like it when I was learning the topic, and it's a very useful way to picture it even for those of us who've been taking spectra for years. Recommended.
Category: Analytical Chemistry | Chemical News
November 16, 2011
It's messy inside a cell. The closer we look, the more seems to be going on. And now there's a closer look than ever at the state of proteins inside a common human cell line, and it does nothing but increase your appreciation for the whole process.
The authors have run one of these experiments that (in the days before automated mass spec techniques and huge computational power) would have been written off as a proposal from an unbalanced mind. They took cultured human U2OS cells, lysed them to release their contents, and digested those with trypsin. This gave, naturally, an extremely complex mass of smaller peptides, but these, the lot of them, were fractionated out and run through the mass spec machines, with use of ion-trapping techniques and mass-label spiking to get quantification. The whole process is reminiscent of solving a huge jigsaw puzzle by first running it through a food processor. The techniques for dealing with such massive piles of mass spec/protein sequence data, though, have improved to the point where this sort of experiment can now be carried out, although that's not to say that it isn't still a ferocious amount of work.
What did they find? These cells are expressing on the order of at least ten thousand different proteins (well above the numbers found in previous attempts at such quantification). Even with that, the authors have surely undercounted membrane-bound proteins, which weren't as available to their experimental technique, but they believe that they've gotten a pretty good read of the soluble parts. And these proteins turn out to be expressed over a huge dynamic range, from a few dozen copies (or less) per cell up to tens of millions of copies.
As you'd figure, those copy numbers represent very different sorts of proteins. It appears, broadly, that signaling and regulatory functions are carried out by a host of low-expression proteins, while the basic machinery of the cell is made of hugely well-populated classes. Transcription, translation, metabolism, and transport are where most of the effort seems to be going - in fact, the most abundant proteins are there to deal with the synthesis and processing of proteins. There's a lot of overhead, in other words - it's like a rocket, in which a good part of the fuel has to be there in order to lift the fuel.
So that means that most of our favored drug targets are actually of quite low abundance - kinases, proteases, hydrolases of all sorts, receptors (most likely), and so on. We like to aim for regulatory choke points and bottlenecks, and these are just not common proteins - they don't need to be. In general (and this also makes sense) the proteins that have a large number of homologs and family members tend to show low copy numbers per variant. Ribosomal machinery, on the other hand - boy, is there a lot of ribosomal stuff. But unless it's bacterial ribosomes, that's not exactly a productive drug target, is it?
It's hard to picture what it's like inside a cell, and these numbers just make it look even stranger. What's strangest of all, perhaps, is that we can get small-molecule drugs to work under these conditions. . .
Category: Analytical Chemistry | Biological News
January 14, 2011
Everyone in this industry wants to have good, predictive biomarkers for human diseases. We've wanted that for a very long time, though, and in most cases, we're still waiting. [For those outside the field, a biomarker is some sort of easy-to-run test for a factor that correlates with the course of the real disease. Viral titer for an infection or cholesterol levels for atherosclerosis are two examples. The hope is to find a simple blood test that will give you advance news of how a slow-progressing disease is responding to treatment]. Sometimes the problem is that we have markers, but that no one can quite agree on how relevant they are (and for which patients), and other times we have nothing to work with at all.
A patient's antibodies might, in theory, be a good place to look for markers in many disease states, but that's some haystack to go rooting around in. Any given person is estimated, very roughly, to produce maybe ten billion different antibodies. And in many cases, we have no idea of what ones to look for since we don't really know what abnormal molecules they've been raised to recognize. (It's a chicken-and-egg problem: if we knew what those antigens were, we'd probably just look for them directly with reagents of our own).
So if you don't have a good starting point, what to do? One approach has been to go straight into tissue samples from patients and look for unusual molecules, in the belief that these might well be associated with the disease. (You can then do just as above to try to use them as a biomarker - look for the molecules themselves, if they're easy to assay, or look for circulating antibodies that bind to them). This direct route has only become feasible in recent years, with advanced mass spec and data handling techniques, but it's still a pretty formidable challenge. (Here's a review of the field).
A new paper in Cell takes another approach. The authors figured that antigen molecules would probably look like rather weirdly modified peptides, so they generated a library of several thousand weirdo "peptoids". (These are basically poly-glycines with anomalous N-substituents). They put these together as a microarray and used them as probes against serum from animal models of disease.
Rather surprisingly, the idea seems to have worked. In a rodent model of multiple sclerosis (the EAE, or experimental autoimmune encephalitis model), they found several peptoids that pulled down antibodies from the model animals and not from the controls. A time course showed that these antibodies came on at just the speed expected for an immune response in the animal model. As a control, another set of mice were immunized with a different (non-disease-causing) protein, and a different set of peptoids pulled down those resulting antibodies, with little or no cross-reactivity.
Finally, the authors turned to a real-world case: Alzheimer's disease. They tried out their array on serum from six Alzheimer's patients, versus six age-matched controls, and six Parkinson's patients as another control, and found three peptoids that seem to have about a 3-fold window for antibodies in the AD group. Further experimentation (passing serum repeatedly over these peptoids before assaying) showed that two of them seem to react with the same antibody, while one of them has a completely different partner. These experiments also showed that they are indeed pulling down the same antibodies in each of the patients, which is an important thing to make sure of.
Using those three peptoids by themselves, they tried a further 16 AD patient samples, 16 negative controls, and 6 samples from patients with lupus, all blinded, and did pretty well: the lupus patients were clearly distinguished as weak binders, the AD patients all showed strong binding, and 14 out of the 16 control patients showed weak binding. Two of the controls, though, showed raised levels of antibody detection, up to the lowest of the AD patients.
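The raw arithmetic on that blinded panel works out as follows - treating the two raised controls as false positives, which (as the next paragraph suggests) may not even be fair to the test:

```python
# Sensitivity/specificity for the blinded peptoid panel described above:
# 16/16 AD samples showed strong binding, 2/16 controls showed raised
# binding, and 0/6 lupus samples were flagged.

ad_total, ad_flagged = 16, 16
control_total, control_flagged = 16, 2
lupus_total, lupus_flagged = 6, 0

sensitivity = ad_flagged / ad_total                              # true-positive rate
specificity = (control_total - control_flagged) / control_total  # true-negative rate

print(f"sensitivity: {sensitivity:.0%}")   # all AD patients caught
print(f"specificity: {specificity:.1%}")   # 2 of 16 controls flagged
```

100% sensitivity and 87.5% specificity on a 38-sample panel - respectable for a first pass, though far too small a set to mean much yet.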
So while this isn't good enough for a diagnostic yet, for a blind shot into the wild blue immunological yonder, it's pretty impressive. Although. . .there's always the possibility that this is already good enough, and that the test picked up presymptomatic Alzheimer's in those two control patients. I suppose we're going to have to wait to find that out. As you'd imagine, the authors are extending these studies to wider patient populations, trying to make the assay easier to run, and trying to find out what native antigens these antibodies might be recognizing. I wish them luck, and I hope that it turns out that the technique can be applied to other diseases as well. This should keep a lot of people usefully occupied for quite some time!
Category: Analytical Chemistry | Biological News | The Central Nervous System
January 11, 2011
How's the XMRV / chronic fatigue syndrome connection holding up? Not real well. Science has a roundup of the latest news in the area, and none of it looks encouraging. There are four studies that have come out in the journal Retrovirology that strongly suggest that earlier positive test results for the virus in CFS samples are just artifacts.
For one thing, when you look closely, it turns out that the sequences from cell-cultured XMRV samples are quite a bit more diverse than the ones taken from widely separated patients at different times. And that's just not right for an infectious agent; it's the opposite of what you should see. A number of supposedly XMRV-specific primers that have been used in such assays also appear to amplify other murine viral sequences as well, and samples that show positive for XMRV also appear to have some mouse DNA in them. Finally, there's reason to believe that some common sources of PCR reagents may have murine viral contaminants that blow up this particular assay.
Taken together, these latest results really have to make you cautious in assigning any role at all to XMRV based on the published data. You can't be sure that any of the numbers are what they're supposed to be, and the most parsimonious explanation is that the whole thing has been a mistake. To illustrate the state of things, you may remember an effort to have several labs (on both sides of the issue) test the same set of samples. Well, according to Science. . .
Some had hoped that a project in which several U.S. labs are testing for XMRV in the same samples would clear up the picture. But so far this effort has been inconclusive. Four CFS patients' blood initially tested positive for XMRV at WPI and the U.S. Centers for Disease Control and Prevention but not at an NCI lab. When all three labs tested new samples from the same patients, none found XMRV—for reasons that aren't yet clear, says Coffin. The group now plans to test blood from several dozen CFS patients and controls.
No, this isn't looking good at all. It's pretty typical, though, of how things are out at the frontiers in this business. There are always more variables than you think, and more reasons to be wrong than you've counted. A theory doesn't hold up until everyone who wants to has had a chance to take some big piñata-shattering swings at it, with weapons of their choice. So, to people outside of research: you're not seeing evidence of bad faith, conspiracy, or stupidity here. You're seeing exactly how science gets done. It isn't pretty, but it gets results in the end. Circumspice.
Category: Analytical Chemistry | Infectious Diseases
January 4, 2011
This story on a new diagnostic method in oncology is getting a lot of attention in the press. It's a collaboration between J&J, a small company they've bought called Veridex, and several oncology centers to see if very sensitive monitoring of circulating tumor cells could be a more useful biomarker.
The press coverage has some hype in it - for one thing, all the stuff about detecting one single cancer cell in the whole body isn't too helpful. The cells have to be circulating in the blood, and they have to display the markers you're looking for, to start with. But I can't deny that this is an interesting and potentially exciting field. There's some evidence to suggest that circulating tumor cells could be a strongly predictive marker in several kinds of cancer.
These studies are looking at the sorts of endpoints that clinicians (and patients, and the FDA) all respect: overall survival, and progression-free survival. As discussed around here before, it's widely felt in oncology that these are where the field should really be spending its time, rather than on tumor size and so on. (You'd think that tumor size or number of detectable tumors would correlate with survival, but in many cases it's a strikingly poor predictor - which is a shame, since those are easier and faster numbers to get). A blood test, on the other hand, that strongly correlates with survival would be a real advance.
The value would not just be in telling (some) patients that they're showing better chances for survival, although I'm sure that'll be greatly appreciated. It's the patients whose numbers come back worse that may well be helped out the most, because that indicates that the current therapy isn't doing the job, and that it's time to switch to something else (assuming that there is something else, of course). The more quickly and confidently you can make that call, the better.
And from a drug development perspective, the uses of such assays in clinical trials are immediately obvious. Additionally, I'd think that these would be a real help to rolling-enrollment Bayesian trial designs, since you could assign patients to (and move them between) the different study groups with more confidence.
The Veridex/J&J assay (called CellSearch) uses an ingenious magnetic immunochemical approach. Blood samples are treated with antibody-coated iron nanoparticles that recognize a common adhesion protein. The cells that get bound are separated magnetically on a diagnostic chip for further immunostaining and imaging. There are other techniques out there as well - here's an article from Technology Review on a competing one that's said to be more sensitive, and here's a San Diego company trying to enter the market with an assay that's supposed to be broader-based. The key for all of these things will be bringing the costs down (and the speed of production up, in some cases). These are tests that ideally would be run early and often, so the cheaper and faster the assay can be made, the better.
Now, of course, we just need some more therapies that work, so that when people find out that their current regimen isn't working, then they have something else to try. If these circulating-cell assays help us sort things out faster in the clinic, maybe we'll be able to make better use of our time and money to that end.
Category: Analytical Chemistry | Cancer | Clinical Trials
December 8, 2010
Since the posts here on the possible arsenic-using bacteria have generated so many comments, I'd like to try to bring things together. If you think that the NASA results need shoring up - and a lot of people do, including me - please leave a comment here about what data or new experiments you'd want to see. I'll assemble these into a new post and try to get some attention for it.
The expertise among the readership here is largely in chemistry, so it would make sense to have suggestions from that angle - I assume that microbiologists are putting together their own lists elsewhere! I know that several readers have already put forward some ideas in the comment threads from the earlier posts - I'll go back and harvest those, but feel free to revise and extend your remarks for this one.
So, the questions on the table are: do you find the Science paper convincing? And if not, what would it take to make it so?
Category: Analytical Chemistry | General Scientific News
October 26, 2010
Earlier this year, I wrote here about using calorimetry in drug discovery. Years ago, people would have given you the raised eyebrow if you'd suggested that, but it's gradually becoming more popular, especially among people doing fragment-based drug discovery. After all, the binding energy that we depend on for our drug candidates is a thermodynamic property, and you can detect the heat being given off when the molecules bind well. Calorimetry also lets you break that binding energy down into its enthalpic (delta-H) and entropic (T delta-S) components, which is hard to do by other means.
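For reference, the bookkeeping goes like this: the overall binding free energy comes from the affinity via ΔG = −RT ln Ka, and calorimetry splits that into ΔG = ΔH − TΔS. Here's a quick sketch with made-up numbers (the 10 nM Kd and the −7 kcal/mol enthalpy are illustrative, not from any particular compound):

```python
# ITC bookkeeping: free energy from affinity, then split into enthalpic
# and entropic pieces. Kd and dH below are invented for illustration.
import math

R = 1.987e-3      # gas constant, kcal/(mol*K)
T = 298.15        # temperature, K

kd = 1e-8                          # a 10 nM binder (illustrative)
ka = 1.0 / kd                      # association constant
dG = -R * T * math.log(ka)         # binding free energy, ~ -10.9 kcal/mol

dH = -7.0                          # enthalpy from the calorimeter (illustrative)
minus_TdS = dG - dH                # the entropic term, -T*dS

print(f"dG    = {dG:.1f} kcal/mol")
print(f"-T*dS = {minus_TdS:.1f} kcal/mol")
```

The instrument measures the heat (ΔH) directly and the affinity from the titration curve, so the entropic term falls out by subtraction - which is exactly why it's hard to get that decomposition any other way.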
And there's where the arguing starts. As I mentioned back in March, one idea that's been floating around is that better drug molecules tend to have more of an enthalpic contribution to their binding. Very roughly speaking, enthalpic interactions are often what med-chemists call "positive" ones like forming a new hydrogen bond or pi-stack, whereas entropic interactions are often just due to pushing water molecules off the protein with some greasy part of your molecule. (Note: there are several tricky double-back-around exceptions to both of those mental models. Thermodynamics is a resourceful field!) But in that way, it makes sense that more robust compounds with better properties might well be more enthalpically-driven in their binding.
But we do not live in a world bounded by what makes intuitive sense. Some people think that the examples given in the literature for this effect are the only decent examples that anyone has. At the fragment conference I attended the other week, though, a speaker from Astex (a company that's certainly run a lot of fragment optimization projects) said that they're basically not seeing it. In their hands, some lead series are enthalpy-driven as they get better, some are entropy-driven, and some switch gears as the SAR evolves. Another speaker said that they, on the other hand, do tend to go with the enthalpy-driven compounds, but I'm not sure if that's just because they don't have as much data as the Astex people do.
So as far as I'm concerned, the whole concept that I talked about in March is still in the "interesting but unproven" category. We're all looking for new ways to pick better starting compounds or optimize leads, but I'm still not sure if this is going to do the trick. . .
Category: Analytical Chemistry | Drug Assays | Life in the Drug Labs
October 22, 2010
Well, the latest for 1960, anyway. That's the Bruker KIS-1 NMR machine there, folks, operating at 25 MHz, and ready to dim the lights in the whole building when you switch on that electromagnet. Allow about 12 hours of acquisition time to get a decent spectrum.
For those of you outside the field, a 300 MHz NMR machine is now considered an average workhorse instrument, and should give you a spectrum (with resolution that would have made someone back then faint with joy) in a minute or so of acquisition time. We can do things with modern machines that they wouldn't have even dreamed of back in 1960, and people are still thinking up new tricks. All hail NMR!
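A rough scaling argument shows why that 12-hour figure is plausible: per-scan NMR sensitivity grows roughly as the field strength (i.e. frequency) to the 3/2 power, and averaging n scans buys a further factor of √n. The exponent is a textbook approximation, and this ignores fifty years of probe and electronics improvements, so take it as illustrative only.

```python
# Rough SNR scaling between a 25 MHz and a 300 MHz spectrometer.
# Assumes per-scan SNR ~ frequency^(3/2) and SNR ~ sqrt(n_scans),
# neglecting all hardware improvements (a deliberate simplification).

freq_old = 25.0    # MHz, the 1960 Bruker KIS-1
freq_new = 300.0   # MHz, a modern workhorse

snr_per_scan_ratio = (freq_new / freq_old) ** 1.5   # ~42x per scan
scans_to_match = snr_per_scan_ratio ** 2            # scans at 25 MHz to
                                                    # equal ONE modern scan

print(f"per-scan SNR advantage: {snr_per_scan_ratio:.0f}x")
print(f"scans at 25 MHz to match one at 300 MHz: {scans_to_match:.0f}")
```

Well over a thousand scans just to break even with a single modern acquisition - no wonder the old spectra took all night.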
Category: Analytical Chemistry
October 5, 2010
Here's an interesting example of a way that synthetic chemistry is creeping into the provinces of molecular biology. There have been a lot of interesting ideas over the years about polymers made to recognize other molecules. These appear in the literature as "molecularly imprinted polymers", among other names, and have found some uses, although it's still something of a black art. A group at Cal-Irvine has produced something that might move the field forward significantly, though.
In 2008, they reported that they'd made polymer particles that recognized the bee-sting protein melittin. Several combinations of monomers were looked at, and the best seemed to be a crosslinked copolymer with both acrylic acid and an N-alkylacrylamide (giving you both polar and hydrophobic possibilities). But despite some good binding behavior, there are limits to what these polymers can do. They seem to be selective for melittin, but they can't pull it out of straight water, which is a pretty stringent test. (If you can compete with the hydrogen-bonding network of bulk water that's holding the hydrophilic parts of your target, as opposed to relying on just the hydrophobic interactions with the other parts, you've got something impressive).
Another problem, which is shared by all polymer-recognition ideas, is that the materials you produce aren't very well defined. You're polymerizing a load of monomers in the presence of your target molecule, and they can (and will) link up in all sorts of ways. So there are plenty of different binding sites on the particles that get produced, with all sorts of affinities. How do you sort things out?
Now the Irvine group has extended their idea, and found some clever ways around these problems. The first is to use good old affinity chromatography to clean up the mixed pile of polymer nanoparticles that you get at first. Immobilizing melittin onto agarose beads and running the nanoparticles over them washes out the ones with lousy affinity - they don't hold up on the column. (Still, they had to do this under fairly high-salt conditions, since trying this in plain water didn't allow much of anything to stick at all). Washing the column at this point with plain water releases a load of particles that do a noticeably better job of recognizing melittin in buffer solutions.
The key part is coming up, though. The polymer particles they've made show a temperature-dependent change in structure. At RT, they're collapsed polymer bundles, but in the cold, they tend to open up and swell with solvent. As it happens, that process makes them lose their melittin-recognizing abilities. Incubating the bound nanoparticles in ice-cold water seems to only release the ones that were using their specific melittin-binding sites (as opposed to more nonspecific interactions with the agarose and the like). The particles eluted in the cold turned out to be the best of all: they show single-digit nanomolar affinity even in water! They're only a few per cent of the total, but they're the elite.
Now several questions arise: how general is this technique? That is, is melittin an outlier as a peptide, with structural features that make it easy to recognize? If it's general, then how small can a recognition target be? After all, enzymes and receptors can do well with ridiculously small molecules: can we approach that? It could be that it can't be done with such a simple polymer system - but if more complex ones can also be run through such temperature-transition purification cycles, then all sorts of things might be realized. More questions: What if you do the initial polymerization in weird solvents or mixtures? Can you make receptor-blocking "caps" out of these things if you use overexpressed membranes as the templates? If you can get the particles to the right size, what would happen to them in vivo? There are a lot of possibilities. . .
Category: Analytical Chemistry | Chemical Biology | Chemical News | Drug Assays
September 28, 2010
You don't see an awful lot of chemistry publications from Vietnam. So in a way, I'm reluctant to call attention to this one, in the way that I'm about to. But it's in the preprint section of Bioorganic and Medicinal Chemistry Letters, and some of my far-flung correspondents have already picked up on it. So it's a bit too late to let it pass, I suppose.
The authors isolate a number of natural products from Wisteria (yep, the flowering woody vine one), and most of them are perfectly fine, if unremarkable. But their compound 1 (wisterone) is something else again.
Man, is that thing strained. Nothing with that carbon skeleton has ever been reported before (I just checked), outside of things that you can draw as part of the walls of fullerenes. I have a lot of trouble believing that this compound exists as shown - and if it does, then it deserves a lot more publicity than being tossed into a list inside a BOMCL paper - even though that journal is now getting a reputation for. . .interesting structural assignments.
This thing could get you into Angewandte Chemie or JACS, no problem. But the authors don't make much of it, just calling it a new compound, and presenting mass spec and NMR evidence for it. The 13C spectrum is perfectly reasonable for some sort of para-substituted aryl ring, but this compound would not give a perfectly reasonable spectrum, I would think. Surely all that strain would show up in some funny chemical shifts? Another oddity must be a misprint - they have the carbon shift of the carbonyl as 190.8, which is OK, I suppose, but they assign the methylenes as 190.8, which can't be right. (The protons come at 4.48).
No, I really think something is wrong here. I don't have a structure to propose, off the top of my head (not without resolving that weirdo methylene carbon shift), but I don't think it's this. Anyone?
Update: just noticed that this is said to be a crystalline compound, melting point of 226-228. I find it hard to imagine any structure like this taking that much heat, but. . .it's a crystal! Get an X-ray structure. No one's going to believe it without one, and BOMCL should never have let this paper through without someone asking for at least that. . .
Category: Analytical Chemistry | Chemical News
September 23, 2010
I agree with many of the commenters around here that one of the most interesting and productive research frontiers in organic chemistry is where it runs into molecular biology. There are so many extraordinary tools that have been left lying around for us by billions of years of evolution; not picking them up and using them would be crazy.
Naturally enough, the first uses have been direct biological applications - mutating genes and their associated proteins (and then splicing them into living systems), techniques for purification, detection, and amplification of biomolecules. That's what these tools do, anyway, so applying them like this isn't much of a shift (which is one reason why so many of these have been able to work so well). But there's no reason not to push things further and find our own uses for the machinery.
Chemists have been working on that for quite a while. We look at enzymes and realize that these are the catalysts that we really want: fast, efficient, selective, working at room temperature under benign conditions. If you want molecular-level nanotechnology (not quite down to atomic!), then enzymes are it. The ways that they manipulate their substrates are the stuff of synthetic organic daydreams: hold down the damn molecule so it stays in one spot, activate that one functional group because you know right where it is and make it do what you want.
All sorts of synthetic enzyme attempts have been made over the years, with varying degrees of success. None of them have really approached the biological ideals, though. And in the "if you can't beat 'em, join 'em" category, a lot of work has gone into modifying existing enzymes to change their substrate preferences, product distributions, robustness, and turnover. This isn't easy. We know the broad features that make enzymes so powerful - or we think we do - but the real details of how they work, the whole story, often isn't easy to grasp. Right, that oxyanion hole is important: but just exactly how does it change the energy profile of the reaction? How much of the rate enhancement is due to entropic factors, and how much to enthalpic ones? Is lowering the energy of the transition state the key, or is it also a subtle raising of the energy of the starting material? What energetic prices are paid (and earned back) by the conformational changes the protein goes through during the catalytic cycle? There's a lot going on in there, and each enzyme avails itself of these effects differently. If it weren't such a versatile toolbox, the tools themselves wouldn't come out being so darn versatile.
There's a very interesting paper that's recently come out on this sort of thing, to which I'll devote a post by itself. But there are other biological frontiers besides enzymes. The machinery to manipulate DNA is exquisite stuff, for example. For quite a while, it wasn't clear how we organic chemists could hijack it for our own uses - after all, we don't spend a heck of a lot of time making DNA. But over the years, the technique of adding DNA segments onto small molecules and thus getting access to tools like PCR has been refined. There are a number of applications here, and I'd like to highlight some of those as well.
Then you have things like aptamers and other recognition technologies. These are, at heart, ways to try to recapitulate the selective binding that antibodies are capable of. All sorts of synthetic-antibody schemes have been proposed - from manipulating the native immune processes themselves, to making huge random libraries of biomolecules and zeroing in on the potent ones (aptamers) to completely synthetic polymer creations. There's a lot happening in this field, too, and the applications to analytical chemistry and purification technology are clear. This stuff starts to merge with the synthetic enzyme field after a point, too, and as we understand more about enzyme mechanisms that process looks to continue.
So those are three big areas where molecular biology and synthetic chemistry are starting to merge. There are others - I haven't even touched here on in vivo reactions and activity-based proteomics, for example, which is great stuff. I want to highlight these things in some upcoming posts, both because the research itself is fascinating, and because it helps to show that our field is nowhere near played out. There's a lot to know; there's a lot to do.
Category: Analytical Chemistry | Biological News | Chemical News | General Scientific News | Life As We (Don't) Know It
August 4, 2010
Readers will remember the extraordinary pictures of individual pentacene molecules last fall. Well, the same IBM team, working with a group at Aberdeen, has struck again.
This time they've imaged a much more complex organic molecule, cephalandole A. As that link details, the structure of this natural product has recently been revised - it's one of those structural-isomer problems that NMR won't easily solve for you. Here's a single molecule of it, imaged by the same sort of carbon-monoxide-tipped atomic force microscope probe used in the earlier work:
Now, it's not like you can just look at that and draw the structure, although it is vaguely alarming to see the bonding framework begin to emerge. If you calculate the electron densities around the structure, though, it turns out that the recently revised one is an excellent fit to what the AFM tip picks up, while the other structural possibilities lead to different expected contours.
It's quite possible that, as this technique develops, it could become a real structure-determination tool. These are early days, and it's already being applied to a perfectly reasonable organic molecule. Of course, the people applying it are the world's experts in the technique, using the best machine available (and probably spending a pretty considerable amount of time on the problem), but that's how NMR was at the start, and mass spec too. Both of those are still evolving after decades, and I fully expect this technology to do the same.
Category: Analytical Chemistry | Chemical News
May 20, 2010
This post from 2006 on the science behind Floyd Landis's suspicious steroid blood tests set my blog record for comments - the debate went on and on about Landis, about the lab that reported the results, about how the samples were handled, etc.
Well, Landis has now admitted using performance-enhancing drugs for most of his career. Widely, expensively, and thoroughly did he use them. The blood test was correct. Carbon isotopes don't lie.
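For the curious, the test in question is carbon isotope-ratio mass spectrometry: synthetic testosterone, made from plant-derived precursors, carries a slightly different 13C/12C ratio than the endogenous hormone, and that difference is reported as a delta-13C value in parts per thousand against the VPDB standard. A minimal sketch of that bookkeeping (the sample ratio below is a made-up number, not anything from Landis's actual results):

```python
# Sketch of the delta-13C notation used in isotope-ratio MS doping tests.
# R_VPDB is the 13C/12C ratio of the reference standard; the sample ratio
# below is a placeholder for illustration, not a real measurement.

R_VPDB = 0.0112372  # 13C/12C of the Vienna Pee Dee Belemnite standard

def delta13c_permil(r_sample):
    """Per-mil deviation of a sample's 13C/12C ratio from the standard."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

print(f"{delta13c_permil(0.0109):+.1f} per mil")  # → -30.0 per mil
```

Small shifts in that per-mil value between a subject's testosterone and their other steroids are what give the game away.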
Category: Analytical Chemistry | Current Events
April 27, 2010
I've said several times that I think that mass spectrometry is taking over the analytical world, and there's more evidence of that in Angewandte Chemie. A group at Justus Liebig University in Giessen has built what has to be the finest imaging mass spec I've ever seen. It's a MALDI-type machine, which means that a small laser beam does the work of zapping ions off the surface of the sample. But this one has better spatial resolution than anything reported so far, and they've hooked it up to a very nice mass spec system on the back end. The combination looks to me like something that could totally change the way people do histology.
For the non-specialist readers in the audience, mass spec is a tremendous workhorse of analytical chemistry. Basically, you use any of a whole range of techniques (lasers, beams of ions, electric charges, etc.) to blast individual molecules (or their broken parts!) down through a chamber and determine how heavy each one is. Because molecular weights are so precise, this lets you identify a lot of molecules by both their whole weights - their "molecular ions" - and by their various fragments. Imagine some sort of crazy disassembler machine that rips things - household electronic gear, for example - up into pieces and weighs every chunk, occasionally letting a whole untouched unit through. You'd see the readouts and say "Ah-hah! Big one! That was a plasma TV, nothing else is up in that weight range. . .let's see, that mix of parts coming off it means that it must have been a Phillips model so-and-so; they always break up like that, and this one has the heavier speakers on it." But mass spec isn't so wasteful, fortunately: it doesn't take much sample, since there are such gigantic numbers of molecules in anything large enough to see or weigh.
Take a look at this image. That's a section of a mouse pituitary gland - on the right is a standard toluidine-blue stain, and on the left is the same tissue slice as imaged (before staining) by the mass spec. The green and blue colors are two different mass peaks (826.5723 and 848.5566, respectively), which correspond to different types of phospholipid from the cell membranes. (For more on such profiling, see here). The red corresponds to a mass peak for the hormone vasopressin. Note that the difference in phospholipid peaks completely shows the difference between the two lobes of the gland (and also shows an unnamed zone of tissue around the posterior lobe, which you can barely pick up in the stained preparation). The vasopressin is right where it's supposed to be, in the center of the posterior lobe.
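Part of what makes peaks like that 826.5723 so useful is mass accuracy: with an instrument good to a few parts per million, a measured mass matches only a handful of candidate species. A sketch of that matching step, using placeholder candidate masses rather than real lipid assignments from the paper:

```python
# Hedged sketch of matching an observed m/z against a candidate table within
# a ppm tolerance. The candidate masses are invented for illustration.

def ppm_error(observed, theoretical):
    """Mass error in parts per million."""
    return (observed - theoretical) / theoretical * 1e6

def match_peak(observed, candidates, tol_ppm=3.0):
    """Return (name, ppm_error) pairs for candidates within tol_ppm."""
    hits = []
    for name, mass in candidates.items():
        err = ppm_error(observed, mass)
        if abs(err) <= tol_ppm:
            hits.append((name, err))
    return hits

candidates = {"lipid_A": 826.5716, "lipid_B": 826.6101}  # hypothetical
print(match_peak(826.5723, candidates))  # only lipid_A survives the cut
```

At 3 ppm, a 0.04 Da difference at this mass range is a mile wide, which is why high-resolution instruments make this sort of imaging workable at all.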
One of the most interesting things about this technique is that you don't have to know any biomarkers up front. The mass spec blasts away at each pixel's worth of data in the tissue sample and collects whatever pile of varied molecular-weight fragments that it can collect. Then the operator is free to choose ions that show useful contrasts and patterns (I can imagine software algorithms that would do the job for you - pick two parts of an image and have the machine search for whatever differentiates them). For instance, it's not at all clear (yet) why those two different phospholipid ions do such a good job at differentiating out the pituitary lobes - what particular phospholipids they correspond to, why the different tissues have this different profile, and so on. But they do, clearly, and you can use that to your advantage.
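That imagined pick-two-regions-and-compare algorithm could be as simple as ranking every m/z channel by how cleanly it separates the two sets of pixels. A toy sketch of the idea (nothing here is from the paper's actual processing pipeline):

```python
# Toy sketch: rank m/z channels by a crude separation score - the difference
# of the two regions' mean intensities over the pooled standard deviation.
# Purely illustrative; real software would be far more sophisticated.
from statistics import mean, pstdev

def contrast_ranking(region_a, region_b):
    """region_a/region_b: lists of per-pixel spectra, each {mz: intensity}.
    Returns m/z channels sorted by how well they separate the regions."""
    channels = set().union(*region_a, *region_b)
    scores = {}
    for mz in channels:
        a = [px.get(mz, 0.0) for px in region_a]
        b = [px.get(mz, 0.0) for px in region_b]
        spread = pstdev(a + b) or 1e-9  # guard against flat channels
        scores[mz] = abs(mean(a) - mean(b)) / spread
    return sorted(channels, key=scores.get, reverse=True)

# Toy data: channel 826.57 differs sharply between regions; 760.5 does not.
a = [{826.57: 100.0, 760.5: 50.0}, {826.57: 95.0, 760.5: 55.0}]
b = [{826.57: 5.0, 760.5: 52.0}, {826.57: 8.0, 760.5: 49.0}]
print(contrast_ranking(a, b)[0])  # → 826.57
```

The point is that the operator never has to know in advance what those discriminating ions are - the data contain the answer.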
As this technique catches on, I expect to see large databases of mass-based "contrast settings" develop as histologists find particularly useful readouts. (Another nice feature is that one can go back to previously collected data and re-process for whatever interesting things are discovered later on). And each of these suggests a line of research all its own, to understand why the contrast exists in the first place.
The second image shows ductal carcinoma in situ. On the left is an optical image, and about all you can say is that the darker tissue is the carcinoma. The right-hand image is colored by green (mass of 529.3998) and red (mass of 896.6006), which correspond to healthy and cancerous tissue, respectively (and again, we don't know why, yet). But look closely and you can see that some of the dark tissue in the optical image doesn't actually appear to be cancer - and some of the dark spots in the lighter tissue are indeed small red cells of trouble. We may be able to use this technology to diagnose cancer subtypes more accurately than ever before - the next step will be to try this on a number of samples from different patients to see how much these markers vary. I also wonder if it's possible to go back to stored tissue samples and try to correlate mass-based markers with the known clinical outcomes and sensitivities to various therapies.
I'd also be interested in knowing if this technique is sensitive enough to find small-molecule drugs after dosing. Could we end up doing pharmacokinetic measurements on a histology-slide scale? Ex vivo, could we possibly see uptake of our compounds once they're applied to a layer of cells in tissue culture? Oh, mass spec imaging has always been a favorite of mine, and seeing this level of resolution just brings on dozens of potential ideas. I've always had a fondness for label-free detection techniques, and for methods that don't require you to know too much about the system before being able to collect useful data. We'll be hearing a lot more about this, for sure.
Update: I should note that drug imaging has certainly been accomplished through mass spec, although it's often been quite the pain in the rear. It's clearly a technology that's coming on, though.
Category: Analytical Chemistry | Biological News | Cancer | Drug Assays
March 1, 2010
I've been involved in a mailing list discussion that I wanted to open up to a wider audience in drug discovery, so here goes. We spend our time (well, a lot of it, when we're not filling out forms) trying to get compounds to bind well to our targets. And that binding is, of course, all about energy: the lower the overall energy of the system when your compound binds, relative to the starting state, the tighter the binding.
That energy change can be broken down (as can all chemical free-energy changes) into an enthalpic part and an entropic part (that latter one depends on temperature, but we'll assume that everything's being done at a constant T and ignore that part). Roughly speaking, the enthalpic component is where you see effects of hydrogen bonds, pi-pi stacking, and other such "productive" interactions, and the entropic part is where you're pushing water molecules and side chains around - hydrophobic interactions and such.
That's a gross oversimplification, but it's a place to start. It's important to remember that these things are all tangled together in most cases. If you come in with a drug molecule and displace a water molecule that was well-attached to your binding pocket, you've broken some hydrogen bonds - for which you'll pay in enthalpy. But you may well have formed some, too, to your molecule - so you'll get some enthalpy term back. And by taking a bound water and setting it free, you'll pick up some good entropy change, too. But not all waters are so tightly bound - there are a few cases where they're actually at a lower entropy state in a protein pocket than they are out in solution, so displacing one of those actually hurts you in entropy. Hmm.
And as I mentioned here, you have the motion of your drug molecule to consider. If it goes from freely rotating to stuck when it binds (as it may well), then you're paying entropy costs. (That's one reason why tying down your structure into a ring can help so dramatically, when it helps at all). And don't forget the motion of the protein overall - if it's been flopping around until it folds over and clenches down on your molecule, there's another entropy penalty for you, which you'd better be able to make up in enthalpy. And so on.
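Two small equations underlie all of this bookkeeping: the binding free energy from a dissociation constant, dG = RT ln Kd, and the split dG = dH - T dS. Here's that arithmetic for a hypothetical 1 nM binder (the enthalpy value is invented for illustration, the sort of number an ITC run might hand you):

```python
# The paragraph's bookkeeping as arithmetic: dG = RT*ln(Kd), and whatever
# dG doesn't come from dH must be the -T*dS term. Kd and dH are made-up
# illustrative values, not data from any real compound.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K

def dg_from_kd(kd_molar):
    """Binding free energy in kcal/mol; more negative = tighter binding."""
    return R * T * math.log(kd_molar)

dg = dg_from_kd(1e-9)    # a 1 nM binder: about -12.3 kcal/mol
dh = -8.0                # assumed calorimetric enthalpy, kcal/mol
minus_tds = dg - dh      # the leftover is the entropic (-T*dS) term
print(f"dG = {dg:.1f}, dH = {dh:.1f}, -TdS = {minus_tds:.1f} kcal/mol")
```

In Freire's terms, this hypothetical compound gets about two-thirds of its binding from enthalpy - the sort of ratio the calorimetry-first crowd would want to track as a series is optimized.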
There's been a proposal, spread most vigorously by Ernesto Freire of Johns Hopkins, that drug researchers should use calorimetry to pick compounds that have the biggest fraction of their binding from enthalpic interactions. (That used to be a terrible pain to do, but recent instruments have made it much more feasible). His contention is that the "best in class" drugs in long-lived therapeutic categories tend to move in that direction, and that we can use this earlier in our decision-making process. People doing fragment-based drug discovery are also urged to start with enthalpically-biased fragments, so that the drug candidate that grows out from them will have a better chance of ending up in the same category.
One possible reason for all this is that drugs that get most of their binding from sheer greasiness, fleeing the water to dive into a protein's sheltering cave, might not be so picky about which cave they pick. There's a persistent belief, which I think is correct, that very hydrophobic compounds tend to have tox problems, because they're often just not selective enough about where they bind. And then they tend to get metabolized and chewed up more, too, which adds to the problem.
And all that's fine. . .except for one thing: is anyone actually doing this? That's the question that came up recently, and (so far), for what it's worth, no one's willing to speak up and say that they are. Perhaps all this is a new enough consideration that all the work is still under wraps. But it will be interesting to see if it holds up or not. We need all the help we can get in drug discovery, so if this is real, then it's welcome. But we also don't need to run more assays that only confuse things, either, so it would be worth knowing if drug-candidate calorimetry falls into that roomy category, too. Opinions?
Category: Analytical Chemistry | Drug Assays
January 27, 2010
Now here's a weird one. The San Diego diagnostics company Sequenom came up with a non-invasive test for Down's Syndrome,
and sold it to another outfit, Xenomics, for development. Update: I've got this transfer backwards - Xenomics licensed some of its nucleic acid technology to Sequenom, and has now regretted it. But late last year, things unraveled spectacularly. In April, Sequenom announced that there were problems with the test and announced that it had launched an internal investigation. In September came the unwelcome news that the data backing up their product were (quoting here) "inadequately substantiated". And they meant it, too, as the CEO and six other higher-ups all left the company under a cloud of confusion, recrimination, and very bad acronyms (like SEC and FBI). Last week it settled a dozen shareholder lawsuits over the whole affair.
But as that story at Bnet makes clear, the terms of the settlement were rather alarming, with Sequenom promising to do things like. . .make sure that everyone involved knew which studies were blinded and which weren't. And requiring bar-codes on the tissue sample vials. And not giving everyone access to the storage room where they were all kept. And. . .well, you get the idea. It's like seeing a sign at the burger place that says "Healthy Choice - Now With 30 Per Cent Less Aardvark Meat! And Try Our New No-Salmonella Menu!"
It can always get worse, though. Now Xenomics is suing, claiming that not only were the data weak and the controls insufficient, but that there never was a test in the first place. The complaint (available as a PDF at that link) is pretty zippy stuff by legal standards, featuring phrases such as "Defendant maintained the charade that it had. . ."
Way before all this lunacy, some people were skeptical about the company's prospects even if things went well. But hey, let's not dwell on the negatives here. If you'd like "Three Reasons to Buy Sequenom Today", this guy has them. I think I'll let this opportunity slip past, personally.
Category: Analytical Chemistry | Business and Markets | The Dark Side
January 26, 2010
Yesterday's post touched on something that all experienced drug discovery people have been through: the compound that works - until a new batch is made. Then it doesn't work so well. What to do?
You have a fork in the road here: one route is labeled "Blame the Assay" and the other one is "Blame the Compound". Neither can be ruled out at first, but the second alternative is easier to check out, thanks to modern analytical chemistry. A clean (or at least identical) LC/MS, a good NMR, even (gasp!) elemental analysis - all these can reassure you that the compound itself hasn't changed.
But sometimes it has. In my experience, the biggest mistake is to not fully characterize the original batch, particularly if it's a purchased compound, or if it comes from the dusty recesses of the archive. You really, really want to do an analytical check on these things. Labels can be mistaken, purity can be overestimated, compounds can decompose. I've seen all of these derail things. I believe I've mentioned a putative phosphatase inhibitor I worked on once, presented to me as a fine lead right out of the screening files. We resynthesized a batch of it, which promptly made the assay collapse. Despite having been told that the original compound had checked out just fine, I sent some out for elemental analysis, and marked some of the lesser-used boxes on the form while I was at it. This showed that the archive compound was, in fact, about a 1:1 zinc complex, for reasons that were lost in the mists of time, and that this (as you can imagine) did have a bit of an effect on the primary enzyme assay.
And I've seen plenty of things that have fallen apart on storage, and several commercial compounds that were clean as could be, but whose identity had no relation to what was on their labels (or their invoices for payment, dang it all). Always check, and always do that first. But what if you have, and the second lot doesn't work, and it appears to match the first in every way?
Personally, I say run the assay again, with whatever controls you can think of. I think at that point the chances of something odd happening there are greater than the chemical alternative, which is the dreaded Infinitely Active Impurity. Several times over the years, people have tried to convince me that even though some compound may look 99% clean, all the activity is actually down there in the trace contaminants, and that if we just find it, we'll have something that'll be so potent that it'll make our heads spin. A successful conclusion to one of these snipe hunts is theoretically possible. But I have never witnessed one.
I'm willing to credit the flip side argument, the Infinitely Nasty Impurity, a bit more. It's easier to imagine something that would vigorously mess up an assay, although even then you generally need more than a trace. An equimolar amount of zinc will do. But an incredibly active compound, one that does just what you want, but in quantities so small that you've missed seeing it? Unlikely. Look for it, sure, but don't expect to find anything - and have 'em re-run that assay while you're looking.
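It's worth putting the Infinitely Active Impurity into numbers. If the activity you measure really belongs to a trace contaminant, the contaminant's true potency has to scale with its tiny mole fraction, which is why the hypothesis so rarely survives contact with reality:

```python
# The Infinitely Active Impurity, in numbers. If a sample's apparent IC50
# is really due to a contaminant at some small mole fraction, the
# contaminant's own IC50 must be that much lower. Values are illustrative.

def true_ic50_of_impurity(apparent_ic50_nm, impurity_fraction):
    """If only the impurity is active, its concentration at the apparent
    IC50 is apparent_ic50 * fraction - which is then its real IC50 (nM)."""
    return apparent_ic50_nm * impurity_fraction

# A "99% clean" sample reading 500 nM would need a 5 nM impurity;
# at 0.1% contamination, a 500 pM one.
print(true_ic50_of_impurity(500.0, 0.01))   # → 5.0
print(true_ic50_of_impurity(500.0, 0.001))  # → 0.5
```

A 100-fold or 1000-fold potency jump hiding in the baseline isn't impossible, but it's exactly the extraordinary claim that ought to demand extraordinary evidence - like re-running the assay first.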
Update: I meant to mention this, but a comment brings it up as well. One thing that may not show up so easily is a difference in the physical form of the compound, depending on how it's produced. This will mainly show up if you're (for example) dosing a suspension of powdered drug substance in an animal. A solution assay should cancel these things out (in vitro or in vivo), but you need to make sure that everything's really in solution. . .
Category: Analytical Chemistry | Drug Assays | Life in the Drug Labs
January 11, 2010
There was a natural products paper (abstract) that I missed last fall which has finally come out in Bioorganic and Medicinal Chemistry Letters. Let's have a show of hands: how many chemists out there think that this structure is the correct one?
Right. Going back through SciFinder, I don't find any anti-Bredt cyclobutene structures of this sort in the modern era - only speculations about whether or not they could even exist. I hope, for their sake, that the authors have assigned this one correctly, and it certainly would be neat and interesting if they have. But doubts afflict me.
Note - the most recent entry on the (inactive?) med-chem blog "One in Ten Thousand" was a raised eyebrow about this exact paper. Fear not, there's no curse - I'll continue posting. . .
Category: Analytical Chemistry | Chemical News
January 5, 2010
I missed this paper when it came out back in October: "Reactome Array: Forging a Link Between Metabolome and Genome". I'd like to imagine that it was the ome-heavy title itself that drove me away, but I have to admit that I would have looked it over had I noticed it.
And I probably should have, because the paper has been under steady fire since it came out. It describes a method to metabolically profile a variety of cells through the use of a novel nanoparticle assay. The authors claim to have immobilized 1675 different biomolecules (representing common metabolites and intermediates) in such a way that enzymes recognizing any of them will set off a fluorescent dye signal. It's an ingenious and tricky method - in fact, so tricky that doubts set in quickly about the feasibility of doing it on 1675 widely varying molecular species.
And the chemistry shown in the paper's main scheme looks wonky, too, which is what I wish I'd noticed. Take a look - does it make sense to describe a positively charged nitrogen as a "weakly amine region", whatever that is? Have you ever seen a quaternary aminal quite like that one before? Does that cleavage look as if it would work? What happens to the indane component, anyway? Says the Science magazine blog:
In private chats and online postings, chemists began expressing skepticism about the reactome array as soon as the article describing it was published, noting several significant errors in the initial figure depicting its creation. Some also questioned how a relatively unknown group could have synthesized so many complex compounds. The dismay grew when supplementary online material providing further information on the synthesized compounds wasn’t available as soon as promised. “We failed to put it in on time. The data is quite voluminous,” says co-corresponding author Peter Golyshin of Bangor University in Wales, a microbiologist whose team provided bacterial samples analyzed by Ferrer’s lab.
Science is also coming under fire. “It was stunning no reviewer caught [the errors],” says Kiessling. Ferrer says the paper’s peer reviewers did not raise major questions about the chemical synthesis methods described; the journal’s executive editor, Monica Bradford, acknowledged that none of the paper’s primary reviewers was a synthetic organic chemist. “We do not have evidence of fraud or fabrication. We do have concerns about the inconsistencies and have asked the authors' institutions to try to sort all of this out by examining the original data and lab notes,” she says.
The magazine published an "expression of concern" before the Christmas break, saying that in response to questions the authors had provided synthetic details that "differ substantially" from the ones in the original manuscript. An investigation is underway, and I'll be very interested to see what comes of it.
Category: Analytical Chemistry | Biological News | Drug Assays | The Scientific Literature
December 22, 2009
Courtesy of Pharmalot (and my mail!), I note this alarming story from London. GE Healthcare makes a medical NMR contrast agent, a gadolinium complex marketed under the name of Omniscan. (They picked it up when they bought Amersham a few years ago). Henrik Thomsen, a Danish physician, had noted what may be an association between its use and a serious kidney condition, nephrogenic systemic fibrosis, and he gave a short presentation on his findings two years ago at a conference in Oxford.
For which GE is suing him. For libel. Update: the documents of the case can be found here. They claim that his conference presentation was defamatory, and continue to insist on damages even though regulatory authorities in both the UK and in the rest of Europe have reviewed the evidence and issued warnings about Omniscan's use in patients with kidney trouble. Over here in the US, the FDA had issued general advisories about contrast agents, but an advisory panel recently recommended that Omniscan (and other chemically related gadolinium complexes) be singled out for special warnings. From what I can see, Thomsen should win his case - I hope he does, and I hope that he gets compensatory damages from GE for wasting his time when he could have been helping patients.
And this isn't the only case going on there right now. Author Simon Singh is being sued by the British Chiropractic Association for describing, in a published article, chiropractic claims of being able to treat things like asthma as "bogus". Good for him! But he's still in court, and the end is not in sight.
This whole business is partly a function of the way that GE and the chiropractors have chosen to conduct business, but largely one of England's libel laws. The way things are set up over there, the person who brings suit starts out with a decided edge, and over the years plenty of people have taken advantage of the tilted field. There's yet another movement underway to change the laws, but I can recall others that apparently have come to little. Let's hope this one succeeds, because I honestly can't think of a worse venue to settle a scientific dispute than a libel suit (especially one being tried in London).
So, General Electric: is it now your company policy to sue people over scientific presentations that you don't like? Anyone care to go on record with that one?
+ TrackBacks (0) | Category: Analytical Chemistry | Current Events | The Dark Side | Toxicology
December 21, 2009
Well, this isn't good: an ex-researcher at the University of Alabama-Birmingham has been accused of faking several X-ray structures of useful proteins - dengue virus protease, Taq polymerase, complement proteins from immunology, etc. There have been questions surrounding H. M. Krishna Murthy's work for at least a couple of years now (here's the reply to that one). The university, after an investigation, has decided that 11 of the published structures seem to have been falsified in some way and has asked that the papers be retracted and the structures removed from the Protein Data Bank.
The first controversy with these structures was, I think, the one deposited in the PDB as 2hr0. Here's a good roundup of what's wrong with it, for those of you into X-ray crystallography. And as that post makes clear, there were also signs that some other structures from this source had been suspiciously cleaned up a bit.
So how do you go about faking an X-ray, anyway? Here's some detail - basically, you could take something that's structurally related (from a protein standpoint) but crystallographically distinct, and use that as a starting point. As that post says, add some water and some noise, and "bingo". The official statement from UAB's investigation gives you the likely recipes for all eleven faked-up structures.
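To make that concrete, here's a toy numerical sketch of why fabricated data can give itself away. This is not any real crystallographic workflow - the "structure factors" below are just random numbers - but it shows the basic tell: genuinely measured data disagrees with the model by a healthy margin (R-factors around 0.2 are normal), while "observed" data generated from the calculated values plus a whisper of noise agrees suspiciously well.

```python
import random

random.seed(42)

def r_factor(f_obs, f_calc):
    """Crystallographic R-factor: sum|Fobs - Fcalc| / sum|Fobs|."""
    num = sum(abs(o - c) for o, c in zip(f_obs, f_calc))
    den = sum(abs(o) for o in f_obs)
    return num / den

# Pretend these are calculated structure-factor amplitudes from a model.
f_calc = [random.uniform(10, 100) for _ in range(1000)]

# Real experimental data carries substantial measurement + model error.
f_real = [f * (1 + random.gauss(0, 0.25)) for f in f_calc]

# Fabricated "observed" data: the calculated values plus a whisper of noise.
f_fake = [f * (1 + random.gauss(0, 0.02)) for f in f_calc]

print(f"R against real data:  {r_factor(f_real, f_calc):.3f}")  # ~0.2
print(f"R against faked data: {r_factor(f_fake, f_calc):.3f}")  # ~0.02
```

An R-factor that low against a deposited model is exactly the sort of too-good-to-be-true agreement that set off the alarm bells here.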
As for Dr. Murthy, he left UAB earlier this year, according to this article, and the university says that they have no current contact information for him. If these accusations are true, he's spent nearly ten years generating spurious analytical data. What, then, do you do with that skill set?
+ TrackBacks (0) | Category: Analytical Chemistry | The Dark Side
December 9, 2009
Back in September, talking about the insides of cells, I said:
"There's not a lot of bulk water sloshing around in there. It's all stuck to and sliding around with enzymes, structural proteins, carbohydrates, and the like. . ."
But is that right? I was reading this new paper in JACS, where a group at UNC is looking at the NMR of fluorine-labeled proteins inside E. coli bacteria. (It's pretty interesting, not least because they found that they can't reproduce some earlier work in the field, for reasons that seem to have them throwing their hands up in the air). But one reference caught my eye - this paper from PNAS last year, from researchers in Sweden.
That wasn't one that I'd read when it came out - the title may have caught my eye, but the text rapidly gets too physics-laden for me to follow very well. The UNC folks appear to have waded through it, though, and picked up some key insights which otherwise I'd have missed. The PNAS paper is a painstaking NMR analysis of the states of water molecules inside bacterial cells. They looked at both good ol' E. coli and at an extreme halophile species, figuring that that one might handle its water differently.
But in both cases, they found that about 85% of the water molecules had rotational states similar to bulk water. That surprises me (as you'd figure, given the views I expressed above). I guess my question is "how similar?", but the answer seems to be "as similar as we can detect, and that's pretty good". It looks like all the water molecules past the first layer on the proteins are more or less indistinguishable from plain water by their method. (No difference between the two types of bacteria, by the way). And given that the concentration of proteins, carbohydrates, salts, etc. inside a cell is rather different than bulk water, I have to say I'm at a loss. I wonder how different the rotational states of water are (as measured by NMR relaxation times) for samples that are, say, 1M in sodium chloride, guanidine, or phosphate?
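Actually, a back-of-the-envelope estimate makes that 85% figure look pretty reasonable. Here's a quick sketch in Python - every number in it is an assumed round figure (300 mg/mL total protein, 1.35 g/cm³ protein density, a 30 kDa average protein modeled as a sphere, and a single 3 Å hydration layer), not anything taken from the paper.

```python
import math

NA = 6.022e23            # Avogadro's number

# Assumed round numbers for a "typical" bacterial cytoplasm:
protein_conc = 0.30      # g of protein per mL of cytoplasm
protein_density = 1.35   # g/cm^3, typical for folded protein
avg_mass_da = 30_000.0   # assumed average protein mass, daltons
shell_nm = 0.3           # first hydration layer, roughly one water thick

protein_vol_frac = protein_conc / protein_density        # ~0.22
water_vol_frac = 1.0 - protein_vol_frac                  # crude: the rest is water

n_proteins = protein_conc / (avg_mass_da / NA)           # proteins per mL
v_protein = protein_vol_frac / n_proteins                # cm^3 per protein
r_cm = (3 * v_protein / (4 * math.pi)) ** (1 / 3)        # sphere radius

shell_vol = 4 * math.pi * r_cm**2 * (shell_nm * 1e-7)    # cm^3 of shell per protein
bound_water_frac = (shell_vol * n_proteins) / water_vol_frac
bulk_like = 1.0 - bound_water_frac

print(f"protein radius: {r_cm * 1e7:.1f} nm")            # ~2.1 nm
print(f"water in first shell: {bound_water_frac:.0%}")   # ~12%
print(f"bulk-like water: {bulk_like:.0%}")               # ~88%
```

So if only the first layer on each protein is perturbed, something close to nine-tenths of the cell's water being spectroscopically "normal" is about what you'd expect - it's the second-and-further-layer water that my earlier mental picture had wrong.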
The other thing that struck me was the Swedish group's estimate of protein dynamics. They found that roughly half of the proteins in these cells were rotationally immobile, presumably bound up in membranes or in multi-protein assemblies. It's been clear for a long time that there has to be a lot of structural order in the way proteins are arranged inside a living cell, but that might be even more orderly than I'd been picturing. At any rate, I may have to adjust my thinking about what those environments look like. . .
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News
November 30, 2009
Now here's an oddity: medicinal chemists are used to seeing the two enantiomers (mirror image compounds, for those outside the field) showing different activity. After all, proteins are chiral, and can recognize such things - in fact, it's a bit worrisome when the enantiomers don't show different profiles against a protein target.
There are a few cases known where the two enantiomers both show some kind of activity, but via different binding modes. But I've never seen a case like this, where this happens at the same time in the same binding pocket. The authors were studying inhibitors of a biosynthetic enzyme from Burkholderia, and seeing the usual sorts of things in their crystal structures - that is, only one enantiomer of a racemic mixture showing up in the enzyme. But suddenly one of their analogs showed both enantiomers simultaneously, each binding to different parts of the active site.
Interestingly, when they obtained crystal structures of the two pure enantiomers, the R compound looks pretty much exactly as it does in the two-at-once structure, but the S compound flips around to another orientation, one that it couldn't have adopted in the presence of the R enantiomer. The S compound is tighter-binding in general, and calorimetry experiments showed a complicated profile as the concentration of the two compounds was changed. So this does appear to be a real effect, and not just some weirdo artifact of the crystallization conditions.
The authors point out that many other proteins have binding sites that are large enough to permit this sort of craziness (P450 enzymes are a likely candidate, and I'd add PPAR binding sites to the list, too). We still do an awful lot of in vitro testing using racemic mixtures, and this makes a person wonder how many times this behavior has been seen before and not understood. . .
+ TrackBacks (0) | Category: Analytical Chemistry | Chemical News | Drug Assays
October 7, 2009
This was another Biology-for-Chemistry year for the Nobel Committee. Venkatraman Ramakrishnan (Cambridge), Thomas Steitz (Yale) and Ada Yonath (Weizmann Inst.) have won for X-ray crystallographic studies of the ribosome.
Ribosomes are indeed significant, to put it lightly. For those outside the field, these are the complex machines that ratchet along a strand of messenger RNA, reading off its three-letter codons, matching these with the appropriate transfer RNA that's bringing in an amino acid, then attaching that amino acid to the growing protein chain that emerges from the other side. This is where the cell biology rubber hits the road, where the process moves from nucleic acids (DNA going to RNA) and into the world of proteins, the fundamental working units of a day-to-day living cell.
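For the programmers in the audience, the codon-reading logic described above can be caricatured in a few lines of Python. This is a toy, obviously - only a handful of the 64 codons are filled in - but it captures the three-letters-at-a-time ratcheting that the real machine does.

```python
# A tiny slice of the standard genetic code (codon -> one-letter amino acid).
# Real ribosomes read 61 sense codons plus 3 stops; this is just a sketch.
GENETIC_CODE = {
    "AUG": "M",                             # methionine, the usual start codon
    "UUU": "F", "UUC": "F",                 # phenylalanine
    "GGU": "G", "GGC": "G",                 # glycine
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(mrna: str) -> str:
    """Ratchet along the mRNA three letters at a time, as the ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = GENETIC_CODE.get(mrna[i:i + 3])
        if aa is None:   # stop codon (or, in this sketch, any codon not listed)
            break
        protein.append(aa)
    return "".join(protein)

print(translate("AUGUUUGGCUAA"))  # -> MFG
```

Of course, the hard part isn't the lookup table - it's doing that lookup accurately, thousands of times in a row, at speed, with codons and tRNAs that look maddeningly like one another.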
The ribosome has a lot of work to do, and it does it spectacularly quickly and well. It's been obvious for decades that there was a lot of finely balanced stuff going on there. Some of the three-letter codons (and some of the tRNAs) look very much like some of the others, so the accuracy of the whole process is very impressive. If more proofs were needed, it turned out that several antibiotics worked by disrupting the process in bacteria, which showed that a relatively small molecule could throw a wrench into this much larger machinery.
Ribosomes are made out of smaller subunits. A huge amount of work in the earlier days of molecular biology showed that the smaller subunit (known as 30S for how it spun down in a centrifuge tube) seemed to be involved in reading the mRNA, and the larger subunit (50S) was where the protein synthesis was taking place. Most of this work was done on bacterial ribosomes, which are relatively easy to get ahold of. They work in the same fashion as those in higher organisms, but have enough key differences to make them of interest by themselves (see below).
During the 1980s and early 1990s, Yonath and her collaborators turned out the first X-ray structures of any of the ribosomal subunits. Fuzzy and primitive by today's standards, those first data sets got better year by year, thanks in part to techniques that her group worked out first. (The use of CCD detectors for X-ray crystallography, a technology that was behind part of Tuesday's Nobel in Physics, was another big help, as was the development of much brighter and more focused X-ray sources). Later in the 1990s, Steitz and Ramakrishnan both led teams that produced much higher-resolution structures of various ribosomal subunits, and solved what's known as the "phase problem" for these. That's a key to really reconstructing the structure of a complex molecule from X-ray data, and it is very much nontrivial as you start heading into territory like this. (If you want more on the phase problem, here's a thorough and comprehensive teaching site on X-ray crystallography from Cambridge itself).
By the early 2000s, all three groups were turning out ever-sharper X-ray structures of different ribosomal subunits from various organisms. The illustration above, courtesy of the Nobel folks, shows the 50S subunit at 9-angstrom (1998), 5-angstrom (1999) and 2.4-angstrom (2000) resolution, and shows you how quickly this field was advancing. Ramakrishnan's group teased out many of the fine details of codon recognition, and showed how some antibiotics known to cause the ribosome to start bungling the process were able to to work. It turned out that the opening and closing behavior of the 30S piece was a key for this whole process, with error-inducing antibiotics causing it to go out of synch. And here's a place where the differences between bacterial ribosomes and eukaryotic ones really show up. The same antibiotics can't quite bind to mammalian ribosomes, fortunately. Having the protein synthesis machinery jerkily crank out garbled products is just what you'd wish for the bacteria that are infecting you, but isn't something that you'd want happening in your own cells.
At the same time, Steitz's group was turning out better and better structures of the 50S subunit, and helping to explain how it worked. One surprise was that there was a highly ordered set of water molecules and hydrogen bonds involved - in fact, protein synthesis seems to be driven (energetically) almost entirely by changes in entropy, rather than enthalpy. Both his group and Ramakrishnan's have been actively turning out structures of the ribosome subunits in complex with various proteins that are known to be key parts of the process, and those mechanisms of action are still being unraveled as we speak.
The Nobel citation makes reference to the implications of all this for drug design. I'm of two minds on that. It's certainly true that many important antibiotics work at the ribosomal level, and understanding how they do that has been a major advance. But we're not quite to the point where we can design new drugs to slide right in there and do what we want. I personally don't think we're really at that stage with most drug targets of any type, and trying to do it against structures with a lot of nucleic acid character is particularly hard. The computational methods for those are at an earlier stage than the ones we have for proteins.
One other note: every time a Nobel is awarded, the thoughts go to the people who worked in the same area, but missed out on the citation. The three-recipients-max stipulation makes this a perpetual problem. This is outside my area of specialization, but if I had to list some people that just missed out here, I'd have to cite Harry Noller of UC-Santa Cruz and Marina Rodnina of Göttingen. Update: add Peter Moore of Yale as well. All of them work in this exact same area, and have made many real contributions to it - and I'm sure that there are others who could go on this list as well.
One last note: five Chemistry awards out of the last seven, by my count, have gone to fundamental discoveries in cell or protein biology. That's probably a reasonable reflection of the real world, but it does rather cut down on the number of chemists who can expect to have their accomplishments recognized. The arguing about this issue is not expected to cease any time soon.
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News | Current Events | Infectious Diseases
March 6, 2009
There are a huge number of techniques in the protein world that rely on tying down some binding partner onto some kind of solid support. When you’re talking about immobilizing proteins, that’s one thing – they’re large beasts, and presumably there’s some tether that can be bonded to them to string off to a solid bead or chip. It’s certainly not always easy, but generally can be done, often after some experimentation with the length of the linker, its composition, and the chemistry used to attach it.
But there are also plenty of ideas out there that call for doing the same sort of thing to small molecules. The first thing that comes to mind is affinity chromatography – take some small molecule that you know binds to a given protein or class of proteins well, attach it to some solid resin or the like, and then pour a bunch of mixed proteins over it. In theory, the binding partner will stick to its ligand as it finds it, everything else will wash off, and now you’ve got pure protein (or a pure group of related proteins) isolated and ready to be analyzed. Well, maybe after you find a way to get them off the solid support as well.
That illustrates one experimental consideration with these ideas. You want the association between the binding partners to be strong enough to be useful, but (in many cases) not so incredibly strong that it can never be broken up again. There are a lot of biomolecule purification methods that rely on just these sorts of interactions, but those often use some well-worked-out binding pair that you introduce into the proteins artificially. Doing it on native proteins, with small molecules that you just dreamed up, is quite another thing.
But that would be very useful indeed, if you could get it work reliably. There are techniques available like surface plasmon resonance, which can tell with great sensitivity if something is sticking close to a solid surface. At least one whole company (Graffinity) has been trying to make a living by (among other things) attaching screening libraries of small molecules to SPR chips, and flowing proteins of interest over them to look for structural lead ideas.
And Stuart Schreiber and his collaborators at the Broad Institute have been working on the immobilized-small-molecule idea as well, trying different methods of attaching compound libraries to various solid supports. They’re looking for molecules that disrupt some very tough (but very interesting) biological processes, and have reported some successes in protein-protein interactions, a notoriously tempting (and notoriously hard) area for small-molecule drug discovery.
The big problem that people tend to have with all these ideas – and I’m one of those people, in the end – is that it’s hard to see how you can rope small molecules to a solid support without changing their character. After all, we don’t have anything smaller than atoms to make the ropes out of. It’s one thing to do this to a protein – that’ll look like a tangle of yarn with a small length of it stretching out to the side. But on the small molecule scale, it’s a bit like putting a hamster on a collar and leash designed for a Doberman. Mr. Hamster is not going to be able to enjoy his former freedom of movement, and a blindfolded person might, on picking him up, have difficulty recognizing his essential hamsterhood.
There's also the problem of how you attach that leash and collar, even if you decide that you can put up with it once it's on. Making an array of peptides on a solid support is all well and good - peptides have convenient handles at both ends, and there are a lot of well-worked-out reactions to attach things to them. But small molecules come in all sorts of shapes, sizes, and combinations of functional groups (at least, they'd better if you're hoping to see some screening hits with them). Trying to attach such a heterogeneous lot of stuff through a defined chemical ligation is challenging, and I think that the challenge is too often met by making the compound set less diverse. And after seeing how much my molecules can be affected by adding just one methyl group in the right (or wrong) place, I’m not so sure that I understand the best way to attach them to beads.
So I’m going to keep reading the tethered-small-molecule-library literature, and keep an eye on its progress. But I worry that I’m just reading about the successes, and not hearing as much about the dead ends. (That’s how the rest of the literature tends to work, anyway). For those who want to catch up with this area, here's a Royal Society review from Angela Koehler and co-workers at the Broad that'll get you up to speed. It's a high-risk, high-reward research area, for sure, so I'll always have some sympathy for it.
+ TrackBacks (0) | Category: Analytical Chemistry | Drug Assays | General Scientific News
January 22, 2009
Now here’s a news item that I’m pretty sure you haven’t heard about unless you work in or near a laboratory. We’re in the middle of an extreme shortage of acetonitrile, a common solvent. This has been going on since back in the fall, but instead of gradually getting better, it’s been gradually getting worse: major suppliers are sending out letters like this one (PDF).
What’s the stuff good for? Well, it’s used on a manufacturing scale in some processes, so they’re in trouble for sure. Acetonitrile is a good solvent, since it’s fairly powerful at dissolving things but still reasonably low-boiling. (That’s the nitrile functional group for you; there’s nothing else quite like it). It’s no DMSO, but then again, DMSO’s boiling point is a lot higher (189 °C, versus 82 °C for acetonitrile), and compared to acetonitrile it pours like pancake syrup. Nobody does industrial-scale chemistry in DMSO if they can possibly help it.
Those properties mean that acetonitrile/water mixtures are ubiquitous in analytical and prep-sized chromatography systems. This is surely its most widespread use, and is causing the most widespread consternation as the shortage becomes more acute. Many people are switching to methanol/water, which usually works, but can be a bit jumpier. But that’s not always an option. Labs working under regulatory-agency controls (GLP / GMP) have a very hard time changing analytical methods without triggering a blizzard of paperwork and major delays. In many companies, it’s those people who are first in line for whatever acetonitrile may turn up.
So why are we going dry on the stuff? There seem to be several reasons, one of which, interestingly, is the summer Olympics. The industrial production that the Chinese government shut down to improve Beijing’s air quality seems to have included a disproportionate amount of the country’s acetonitrile production (for example). A US facility on the Gulf Coast was shut down during Hurricane Ike as well. But on top of these acute reasons, there's a secular one: yep, the global economic slowdown. A lot of acetonitrile comes as a byproduct of acrylonitrile production, which is used in a lot of industrial resins and plastics. Those go into making car parts, electronic housings, all sorts of things that are piling up in inventory and thus not being turned out at the rates of a year ago.
So taken together, there’s not much acetonitrile to be had out there. We’ve seen some glitches like this in the past, naturally, since chemical production can depend on a limited number of plants and on raw material prices. When I was an undergraduate, I remember professors complaining about the price of silver reagents during the attempted Hunt brothers corner of that market, for example. But this one will definitely be near the top of the list, and it could be months before the Great Acetonitrile Drought lifts. If you’ve been saving some in your basement, it’s time to break it out.
+ TrackBacks (0) | Category: Analytical Chemistry | Chemical News
October 9, 2008
I’ve spoken before about the acetylene-azide “click” reaction popularized by Barry Sharpless and his co-workers out at Scripps. This has been taken up by the chemical biology field in a big way, and all sorts of ingenious applications are starting to emerge. The tight, specific ligation reaction that forms the triazole lets you modify biomolecules with minimal disruption (by hanging an azide or acetylene from them, both rather small groups), and tag them later on in a very controlled way.
Adrian Salic and co-worker Cindy Yao have just reported an impressive example. They’ve been looking at ethynyluracil (EU), the acetylene-modified form of uracil, the ubiquitous RNA base. If you feed this to living organisms, they take it up just as if it were uracil, and incorporate it into their RNA. (It’s uracil-like enough to not be taken up into DNA, as they’ve shown by control experiments). Exposing cells or tissue samples later on to a fluorescent-tagged azide (and the copper catalyst needed for quick triazole formation) lets you light up all the RNA in sight. You can choose the timing, the tissue, and your other parameters as you wish.
For example, Salic and Yao have exposed cultured cells to EU for varying lengths of time, and watched the time course of transcription. Even ten minutes of EU exposure is enough to see the nuclei start to light up, and a half hour clearly shows plenty of incorporation into RNA, with the cytoplasm starting to show as well. (The signal increases strongly over the first three hours or so, and then more slowly).
Isolating the RNA and looking at it with LC/MS lets you calibrate your fluorescence assays, and also check to see just how much EU is getting taken up. Overall, after a 24-hour exposure to the acetylene uracil, it looks like about one out of every 35 uracils in the total RNA content has been replaced with the label. There’s a bit less in the RNA species produced by the RNAPol1 enzyme as compared to the others, interestingly.
There are some other tricks you can run with this system. If you expose the cells for 3 hours, then wash the EU out of the medium and let them continue growing under normal conditions, you can watch the labeled RNA disappear as it turns over. As it turns out, most of it drops out of the nucleus during the first hour, while the cytoplasmic RNA seems to have a longer lifetime. If you expose the cells to EU for 24 hours, though, the nuclear fluorescence is still visible – barely – after 24 hours of washout, but the cytoplasmic RNA fluorescence never really goes away at all. There seems to be some stable RNA species out there – what exactly that is, we don’t know yet.
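One common way to read washout curves like these is to fit them to a sum of exponentially decaying pools plus a stable remainder. Here's a toy model along those lines - every parameter in it is invented for illustration, not taken from the paper - just to show how a fast nuclear pool, a slower cytoplasmic pool, and a never-decaying fraction would add up.

```python
import math

def washout_signal(t_hours, nuclear=1.0, cyto=1.0, stable=0.3,
                   t_half_nuc=0.5, t_half_cyto=6.0):
    """Toy fluorescence model after the EU label is washed out.

    Three pools, all parameters invented for illustration: a fast-turnover
    nuclear pool, a slower cytoplasmic pool, and a stable fraction that
    never decays (the long-lived RNA species mentioned above).
    """
    k_nuc = math.log(2) / t_half_nuc    # per hour
    k_cyto = math.log(2) / t_half_cyto  # per hour
    return (nuclear * math.exp(-k_nuc * t_hours)
            + cyto * math.exp(-k_cyto * t_hours)
            + stable)

for t in (0, 1, 6, 24):
    print(f"t = {t:2d} h: signal = {washout_signal(t):.2f}")
```

The signature to look for is exactly what the experiment showed: a steep early drop (the nuclear pool), a slower slide (the cytoplasm), and then a plateau that never quite reaches zero - the size of that plateau tells you how much stable RNA you're carrying around.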
Finally, the authors tried this out on whole animals. Injecting a mouse with EU and harvesting organs five hours later gave some very interesting results. It worked wonderfully - whole tissue slices could be examined, as well as individual cells. Every organ they checked showed nuclear staining, at the very least. Some of the really transcriptionally active populations (hepatocytes, kidney tubules, and the crypt cells in the small intestine) were lit up very brightly indeed. Oddly, the most intense staining was in the spleen. What appear to be lymphocytes glowed powerfully, but other areas next to them were almost completely dark. The reason for this is unknown, and that’s very good news indeed.
That’s because when you come up with a new technique, you want it to tell you things that you didn’t know before. If it just does a better or more convenient job of telling you what you could have found out, that’s still OK, but it’s definitely second best. (And, naturally, if it just tells you what you already knew with the same amount of work, you’ve wasted your time). Clearly, this click-RNA method is telling us a lot of things that we don’t understand yet, and the variety of experiments that can be done with it has barely been sampled.
Closely related to this work is what’s going on in Carolyn Bertozzi’s lab in Berkeley. She’s gone a step further, getting rid of the copper catalyst for the triazole-forming reaction by ingeniously making strained, reactive acetylenes. They’ll spontaneously react if they see a nearby azide, but they’re still inert enough to be compatible with biomolecules. In a recent Science paper, her group reports feeding azide-substituted galactosamine to developing zebrafish. That amino sugar is well known to be used in the synthesis of glycoproteins, and the zebrafish embryos seemed to have no problem accepting the azide variant as a building block.
And they were able to run these same sorts of experiments – exposing the embryos to different concentrations of azido sugar, for different times, with different washout periods before labeling – all of which gave a wealth of information about the development of mucin-type glycans. Using differently labeled fluorescent acetylene reagents, they could stain different populations of glycan, and watch time courses and developmental trafficking – that’s the source of the spectacular images shown.
Losing the copper step is convenient, and also opens up possibilities for doing these reactions inside living cells (which is definitely something that Bertozzi’s lab is working on). The number of experiments you can imagine is staggering – here, I’ll do one off the top of my head to give you the idea. Azide-containing amino acids can be incorporated at specific places in bacterial proteins – here’s one where they replaced a phenylalanine in urate oxidase with para-azidophenylalanine. Can that be done in larger, more tractable cells? If so, why not try that on some proteins of interest – there are thousands of possibilities – then micro-inject one of the Bertozzi acetylene fluorescence reagents? Watching that diffuse through the cell, lighting things up as it found azide to react with would surely be of interest – wouldn’t it?
I’m writing about this the day after the green fluorescent protein Nobel for a reason, of course. This is a similar approach, but taken down to the size of individual molecules – you can’t label uracil with GFP and expect it to be taken up into RNA, that’s for sure. Advances in labeling and detection are one of the main things driving biology these days, and this will just accelerate things. (It’s also killing off a lot of traditional radioactive isotope labeling work, too, not that anyone’s going to miss it). For the foreseeable future, we’re going to be bombarded with more information than we know what to do with. It’ll be great – enjoy it!
+ TrackBacks (0) | Category: Analytical Chemistry | Biological News
September 23, 2008
Over the years, when some puzzling feature of a drug candidate’s binding to a target came up, I’ve often said “Well, we’re not going to know what’s happening until some lunatic builds a femtosecond X-ray laser”. Various lunatics are now pitching in to build some. I’m going to have to revise my lines.
The reason I’d say such a mouthful is that we already, of course, get a lot of structural information from X-ray beams. Shining them through crystals of various substances can, after a good deal of number-crunching in the background, give you a three-dimensional picture of how the unit molecules have packed together. Proteins can be crystallized, too, although it can be something of a black art, and they can be either crystallized with or soaked with our small molecules, giving us a picture of how they’re actually binding.
There are, as mentioned earlier around here, plenty of ways for this process to go wrong. For starters, a lot of things – many of them especially interesting – just don’t crystallize. And the crystals themselves may or may not be showing you a structure that’s relevant to the question you’re trying to answer – that’s particularly true in the case of those ligand-bound protein structures. And the whole process is only good for static pictures of things that aren’t moving around. It used to take many days to collect enough data for a good crystal structure. That moved down to hours as X-ray sources got brighter and detectors got better, and now X-ray synchrotrons will blast away at your crystals and give you enough reflections inside of twenty minutes. And that’s great, but molecules move around a trillion times faster than that, so we’re necessarily seeing an average of where they hang out the most.
Enter the femtosecond X-ray laser. A laser will put out the cleanest X-ray beam that anyone’s ever seen, a completely coherent one at an exact (and short) wavelength which should give wonderful reflection data. The only ways we know how to do that are on large scale, too, so it’s going to be a relatively bright source as well. The data should come so quickly, in fact, that several things which are now impossible are within reach: X-ray structures of single molecules, for one. X-rays of things that aren’t in a crystalline state at all, for another. And femtosecond-scale sequential X-ray structures – in effect, well-resolved high-speed movies of molecular motions.
Now that will be something to see. Getting all that to work is going to be quite a job, not least because X-ray bursts of this sort will probably destroy the sample that they're analyzing. But there are two free-electron X-ray lasers under construction – one set to complete next year at Stanford’s SLAC facility and a larger one that will be built in Hamburg. “Large” is the word here. The smaller SLAC instrument is already two kilometers long. According to an article in Nature, though, a Japanese group has proposed some ways to make future instruments smaller and more efficient – all the way down to, um, the size of a couple of football fields. But there’s another completely different technology coming along (laser-plasma wakefield instruments) that could produce far shorter X-rays in one hundredth the space, which is more like it.
I don’t think we’re going to see a benchtop-sized X-ray laser any time soon, especially since these things are going to need to be large just to get up to the brightness that will be needed. But I’m very interested to see what even the first generation machine at Stanford will be able to do. There are a lot of mysteries in the way that molecules move and interact, and we may finally be about to get a look at some of them.
Category: Analytical Chemistry
September 4, 2008
X-ray crystallography is wonderful stuff – I think you’ll get chemists to generally agree on that. There’s no other technique that can provide such certainty about the structure of a compound – and for medicinal chemists, it has the invaluable ability to show you a snapshot of your drug candidate bound to its protein target. Of course, not all proteins can be crystallized, and not all of them can be crystallized with drug ligands in them. But an X-ray structure is usually considered the last word, when you can get one – and thanks to automation, computing power, and to brighter X-ray sources, we get more of them than ever.
But there are a surprising number of ways that X-ray data can mislead you. For an excellent treatment of these, complete with plenty of references to the recent literature, see a paper coming out in Drug Discovery Today from researchers at AstraZeneca (Andy Davis and Stephen St.-Gallay) and Uppsala University (Gerard Kleywegt). These folks all know their computational and structural biology, and they’re also willing to tell you how much they don’t know.
For starters, a small (but significant) number of protein structures derived from X-ray data are just plain wrong. Medicinal chemists should always look first at the resolution of an X-ray structure, since the tighter the data, the better the chance there is of things being as they seem. The authors make the important point that there’s some subjective judgment involved on the part of a crystallographer interpreting raw electron-density maps, and the poorer the resolution, the more judgment calls there are to be made:
Nevertheless, most chemists who undertake structure-based design treat a protein crystal structure reverently as if it was determined at very high resolution, regardless of the resolution at which the structure was actually determined (admittedly, crystallographers themselves are not immune to this practice either). Also, the fact that the crystallographer is bound to have made certain assumptions, to have had certain biases and perhaps even to have made mistakes is usually ignored. Assumptions, biases, ambiguities and mistakes may manifest themselves (even in high-resolution structures) at the level of individual atoms, of residues (e.g. sidechain conformations) and beyond.
Then there’s the problem of interpreting how your drug candidate interacts with the protein. The ability to get an X-ray structure doesn’t always correlate well with the binding potency of a given compound, so it’s not like you can necessarily count on a lot of clear signals about why the compound is binding. Hydrogen bonds may be perfectly obvious, or they can be rather hard to interpret. Binding through (or through displacement of) water molecules is extremely important, too, and that can be hard to get a handle on as well.
And not least, there’s the assumption that your structure is going to do you good once you’ve got it nailed down:
It is usually tacitly assumed that the conditions under which the complex was crystallised are relevant, that the observed protein conformation is relevant for interaction with the ligand (i.e. no flexibility in the active-site residues) and that the structure actually contributes insights that will lead to the design of better compounds. While these assumptions seem perfectly reasonable at first sight, they are not all necessarily true. . .
That’s a key point, because that’s the sort of error that can really lead you into trouble. After all, everything looks good, and you can start to think that you really understand the system – that is, until none of your wonderful X-ray-based analogs work out the way you thought they would. The authors make the point that when your X-ray data and your structure-activity data seem to diverge, it’s often a sign that you don’t understand some key points about the thermodynamics of binding. (An X-ray is a static picture, and says nothing about what energetic tradeoffs were made along the way). Instead of an irritating disconnect or distraction, it should be looked at as a chance to find out what’s really going on. . .
Category: Analytical Chemistry | Drug Assays | In Silico
March 27, 2008
There’s an excellent paper in the most recent issue of Chemistry and Biology that illustrates some of what fragment-based drug discovery is all about. The authors (the van Aalten group at Dundee) are looking at a known inhibitor of the enzyme chitinase, a natural product called argifin. It’s an odd-looking thing – five amino acids bonded together into a ring, with one of them (an arginine) further functionalized with a urea into a sort of side-chain tail. It’s about a 27 nM inhibitor of the enzyme.
(For the non-chemists, that number is a binding affinity, a measure of what concentration of the compound is needed to shut down the enzyme. The lower, the better, other things being equal. Most drugs are down in the nanomolar range – below that are the ultra-potent picomolar and femtomolar ranges, where few compounds venture. And above that, once you get up to 1000 nanomolar, is micromolar, and then 1000 micromolar is one millimolar. By traditional med-chem standards, single-digit nanomolar = good, double-digit nanomolar = not bad, triple-digit nanomolar or low micromolar = starting point to make something better, high micromolar = ignore, and millimolar = can do better with stuff off the bottom of your shoe.)
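For those who like their rules of thumb executable, that informal scale can be put into a few lines of code. This is just a sketch of the folklore above – the exact cutoffs (especially where "low" micromolar shades into "high") are my own guesses at the boundaries, not any official standard:

```python
def classify_potency(ki_molar):
    """Rough med-chem verdict on a binding constant (given in molar units),
    using the informal thresholds from the paragraph above. The 10 uM line
    between 'low' and 'high' micromolar is an assumed boundary."""
    ki_nm = ki_molar * 1e9  # convert molar to nanomolar
    if ki_nm < 10:
        return "single-digit nanomolar: good"
    elif ki_nm < 100:
        return "double-digit nanomolar: not bad"
    elif ki_nm < 10_000:
        return "triple-digit nM / low micromolar: starting point"
    elif ki_nm < 1_000_000:
        return "high micromolar: ignore"
    else:
        return "millimolar: shoe-bottom territory"

print(classify_potency(27e-9))  # argifin, at 27 nM, lands in "not bad"
```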
What the authors did was break this argifin beast up, piece by piece, measuring what that did to the chitinase affinity. And each time they were able to get an X-ray structure of the truncated versions, which turned out to be a key part of the story. Taking one amino acid out of the ring (and thus breaking it open) lowered the binding by about 200-fold – but you wouldn’t have guessed that from the X-ray structure. It looks to be fitting into the enzyme in almost exactly the same way as the parent.
And that brings up a good point about X-ray crystal structures. You can’t really tell how well something binds by looking at one. For one thing, it can be hard to see how favorable the various visible interactions might actually be. And for another, you don’t get any information at all about what the compound had to pay, energetically, to get there.
In the broken argifin case, a lot of the affinity loss can probably be put down to entropy: the molecule now has a lot more freedom of movement, which has to be overcome in order to bind in the right spot. The cyclic natural product, on the other hand, was already pretty much there. This fits in with the classic med-chem trick of tying back side chains and cyclizing structures. Often you’ll kill activity completely by doing that (because you narrowed down on the wrong shape for the final molecule), but when you hit, you hit big.
The structure was chopped down further. Losing another amino acid only hurt the activity a bit more, and losing still another one gave a dipeptide that was still only about three times less potent than the first cut-down compound. Slicing that down to a monopeptide, basically just a well-decorated arginine, sent the activity down another sixfold or so – but by now we’re up to about 80 micromolar, which most medicinal chemists would regard as the amount of activity you could get by testing the lint in your pocket.
But they went further, making just the little dimethylguanylurea that’s hanging off the far end. That thing is around 500 micromolar, a level of potency that would normally get you laughed at. But wait. . .they have the X-ray structures all along the way, and what becomes clear is that this guanylurea piece is binding to the same site on the protein, in the same manner, all the way down. So if you’re wondering if you can get an X-ray structure of some 500 micromolar dust bunny, the answer is that you sure can, if it has a defined binding site.
And the value of these various derivatives almost completely inverts if you look at them from a binding efficiency standpoint. (One common way to measure that is to take the minus log of the binding constant and divide by the molecular weight in kilodaltons). That’s a “bang for the buck” index, a test of how much affinity you’re getting for the weight of your molecule. As it turns out, argifin – 27 nanomolar though it be – isn’t that efficient a binder, because it weighs a hefty 676. The binding efficiency index comes out to just under 12, which is nothing to get revved up about. The truncated analogs, for the most part, aren’t much better, ranging from 9 to 15.
But that guanylurea piece is another story. It doesn’t bind very tightly, but it bats way above its scrawny size, with a BEI of nearly 28. That’s much more impressive. If the whole argifin molecule bound that efficiently, it would be down in the ten-to-the-minus nineteenth range, and I don’t even know the name of that order of magnitude. If you wanted to make a more reasonably sized molecule, and you should, a compound of MW 400 would be down around ten picomolar with a binding efficiency like that. There’s plenty of room to do better than argifin.
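Since the binding efficiency index is just arithmetic, here's a quick sketch of the calculation – the function name is mine, but the formula is the minus-log-of-the-binding-constant over molecular-weight-in-kilodaltons one described above:

```python
import math

def bei(ki_molar, mw_daltons):
    """Binding efficiency index: minus log10 of the binding constant
    (in molar units) divided by molecular weight in kilodaltons."""
    return -math.log10(ki_molar) / (mw_daltons / 1000.0)

# argifin: 27 nM, but a hefty MW of 676
print(round(bei(27e-9, 676), 1))  # 11.2 - potent, but not an efficient binder

# and if the whole molecule bound with the guanylurea fragment's BEI of ~28:
ki_if_efficient = 10 ** (-(28 * 0.676))
print(f"{ki_if_efficient:.0e} M")  # down in the 10^-19 range, as noted above
```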
So the thing to do, clearly, is to start from the guanylurea and build out, checking the binding efficiency along the way to make sure that you’re getting the most out of your additions. And that is exactly the point of fragment-based drug discovery. You can do it this way, cutting down a larger molecule to find what parts of it are worth the most, or you can screen to find small fragments which, though not very potent in the absolute sense, bind very efficiently. Either way, you take that small, efficient piece as your anchor and work from there. And either way, some sort of structural read on your compounds (X-ray or NMR) is very useful. That’ll give you confidence that your important binding piece really is acting the same way as you go forward, and give you some clues about where to build out in the next round of analogs.
This particular story may be about as good an illustration as one could possibly find - here's hoping that there are more that can work out this way. Congratulations to van Aalten and his co-workers at Dundee and Bath for one of the best papers I've read in quite a while.
Category: Analytical Chemistry | Drug Assays | In Silico
January 22, 2008
There’s been a big trend the last few years in the industry to try to build our molecules up from much smaller pieces than usual. “Fragment-based” drug discovery is the subject of many conferences and review articles these days, and I’d guess that most decent-sized companies have some sort of fragment effort going on. (Recent reviews on the topic, for those who want them).
Many different approaches come under that heading, though. Generally, the theme is to screen a collection of small molecules, half the size or less of what you’d consider a reasonable molecular weight for a final compound, and look for something that binds. At those sizes, you’re not going to find the high affinities that you usually look for, though. We usually want our clinical candidates to be down in the single-digit nanomolar range for binding constants, and our screening hits to be as far under one micromolar as we can get. In the fragment world, though, from what I can see, people regard micromolar compounds as pretty hot stuff, and are just glad not to be up in the millimolar range. (For people outside the field, it’s worth noting that a nanomolar compound binds about a million times better than a millimolar one).
Not all the traditional methods of screening molecules will pick up weak binders like that. (Some assays are actually designed not to read out at those levels, but only to tell you about the really hot compounds). For the others, you’d think you could just run things like you usually do, just by loading up on the test compounds, but that’s problematic. For one thing, you’ll start to chew up a lot of compound supplies at that rate. Another problem is that not everything stays in solution when you try to run an assay at those concentrations. And if you try to compensate by using more DMSO or whatever to dissolve your compounds, you can kill your protein targets with the stuff when it goes in. Proteins are happy in water (well, not pure distilled water, but water with lots of buffer and salts and junk like the inside of a cell has). They can take some DMSO, but enough of it will make even the sturdiest of them unhappy. (More literature on fragment screening).
And once you’ve got your weak-binding low-molecular weight stuff, what then? First, you have to overcome the feeling, natural among experienced chemists, that you’re working on stuff you should be throwing away. Traditional medicinal chemistry – analog this part, add to that part, keep plugging away – may not be the appropriate thing to do for these leads. There are just too many possibilities – you could easily spend years wandering around. So many companies depend on structural information about the protein target and the fragments themselves to tell them where these little guys are binding and where the best places to build from might be. That can come from NMR studies or X-ray crystal determinations, most commonly.
Another hope, for some time now, has been that if you could discover two fragments that bound to different sites, but not that far from each other, that you could then stitch them together to make a far better compound. (See here for more on this idea). That’s been very hard to realize in practice, though. Finding suitable pairs of compounds is not easy, for starters. And getting them linked, as far as I can see, can be a real nightmare. A lot of the linking groups you can try will alter the binding of the fragments themselves – so instead of going from two weak compounds to one strong one, you go from two weak ones to something that’s worse than ever. Rather than linking two things up, a lot of fragment work seems to involve building out from a single piece.
But that brings up another problem, exemplified by this paper. These folks took a known beta-lactamase inhibitor, a fine nanomolar compound, and broke it up into plausible-looking fragments, to see if it could have been discovered that way. But what they found, each time they checked the individual pieces, was that each of them bound in a completely different way than it did when it was part of the finished molecule. The binding mode was emergent, not additive, and it seems clear that most (all?) of the current fragment approaches would have been unable to arrive at the final structure. The authors admit that this may be a special case, but there’s no reason to assume that it’s all that special.
So fragment approaches, although they seem to be working out in some cases, are probably always going to miss things. But hey, we miss plenty of things with the traditional methods, too. Overall, I’m for trying out all kinds of odd things, because we need all the help we can get. Good luck to the fragment folks.
Category: Analytical Chemistry | Drug Assays
October 25, 2006
There's an interesting analytical chemistry paper in the preprint section of PNAS (open access if you want to read it) that may reopen an old controversy. It's from a large multinational team (Mexico, Spain, France, NASA-Ames) investigating the GC-mass spec instrumentation that was flown to Mars on the Viking landers in 1976. That's a key instrument in the life-on-Mars debate, so an attack on it is significant. First, though, some background - it's a tangled story.
The Viking landers each had three biology experiments to look for possible signs of Martian life, whose results were famously difficult to interpret. They produced both excitement and confusion at the time (scroll down in that NASA history page) and they've been fuel for arguments ever since.
There was the pyrolytic release experiment, which incubated Martian soil with 14C-labeled carbon monoxide and carbon dioxide. After several days, the sample was purged, then heated to 650C and analyzed for the release of any labeled carbon compounds that might have been formed by living organisms. A control sample was heated before incubation, to kill off any such life forms. Seven out of the nine runs of this experiment seemed to produce positive results - that is, volatile labeled carbon was produced after pyrolysis.
The gas-exchange experiment used the same sort of apparatus, exposing the soil to either water vapor or nutrient solution under a mixed atmosphere of gases. The headspace was analyzed for changes in the concentrations of the various components, which could be due to biological uptake or release. This one showed a strong release of oxygen and carbon dioxide from the samples once moisture was added, but the amount decreased over time, leading to theories that this was the product of an inorganic reaction rather than a signature of life.
The labeled release experiment put Martian soil into a dilute nutrient broth, with several small organic compounds which were all labeled with 14C. After incubation, the headspace of the experimental cell was analyzed for any released labeled gases and again, a control experiment was done with pre-heated soil. This one produced exciting data, with release of labeled gas in the experimental samples well over those in the controls. One odd result, though, was that the subsequent injection(s) of nutrient solution did not produce a further spike of released gas. The final curves ended up looking neither like what you'd have expected from a classic bacterial positive, nor from a simple chemical reaction. This ambiguity has meant that the LR results have been re-analyzed and re-fought ever since the 1970s, with the experiment's designer, Gilbert Levin, leading the effort to rescue the data as a case for Martian life.
But then there were the GC-MS data, from an experiment considered to be the backstop test in case the biology experiments were difficult to interpret. Since they certainly were that, from beginning to end, this experiment became for many people the most important one on the landers. (It already had been for the people - a not insignificant group - who thought from the start that the biology tests were unlikely to provide a conclusive answer). This one heated soil samples directly and looked for volatile organics. Heating to 200C showed little or nothing in the way of carbon compounds, and very little water besides. By contrast, another sample taken up to 500 degrees released a comparative flood of water, but still showed no evidence of organic molecules.
And that, for most observers, was that. No organic molecules, no life. Explanations after the GC-MS results mainly turned to what sorts of inorganic chemistry might have given the behavior seen in the three other experiments. Martian soil was thus hypothesized to be a sterile mixture of interesting chemicals (iron peroxides? carbon suboxide polymers?) that had fooled the biology test packages, but couldn't fool the GC-MS.
There's always been an underground, though, that has held that the results were indeed the result of life. Gilbert Levin has never given up. In 1981, he pointed out that tests of a Viking-style GC-MS instrument had shown that it was insensitive to organics in a particular Antarctic soil sample, but that this same soil nonetheless gave a positive result in the LR experiment. And he really put his opinions out in the store window in 1997, with a paper that flatly concluded that the 1976 LR experiments had indeed detected Martian life.
In the last few years, others have joined the battle. Steven Benner at Florida, whose work I wrote about here, published a PNAS paper in 2000 which maintained that organic molecules on Mars would likely be retained as higher molecular weight carboxylates, which would not have been volatile enough for the Viking GC/MS instrument to detect. And now this latest group has weighed in.
They've also analyzed various Antarctic and temperate desert samples, and found that all of them contain organic matter that cannot be detected by thermal GC-MS analysis. And the ones that contain iron, including the NASA reference simulated Mars soil (a weathered basalt sample from near Mauna Kea), tend to oxidize their organics quickly under heating. The conclusion is that while much of the water and carbon dioxide produced in the Viking experiment from heating the Martian soil was surely inorganic, some of it could have been from the oxidation of organic material. The paper concludes that the Viking GC-MS results are. . .inconclusive, and should not be taken as evidence either way for the presence of organic molecules or life. The question, they feel, is still completely open.
The good news is that future missions are relying on other technologies. In addition to good ol' thermal volatilization/GC-MS, there are also plans for solvent extractions, laser desorption mass spec, short-path sublimation, and other nifty ideas. If these various US and European missions get off the ground (and on the Martian ground), we're going to have some very interesting data to look at. And argue about.
Category: Analytical Chemistry | Life As We (Don't) Know It
August 27, 2006
After my article on the role of carbon isotope testing in the Floyd Landis case, a question has come up several times in the comments and in my e-mail: since it's well-known that Landis was taking cortisone for his hip, could this have skewed the isotope ratios in his testosterone?
I doubt it very much, and here's why: first off, around 95% of the circulating testosterone in the male body is produced in the testes. For Landis's isotope ratios to be off a significant amount through something involving his own metabolic pathways, this is the only place that's worth looking. Testosterone and the other steroids are produced from cholesterol. The testes and other steroidogenic tissues have a stockpile of cholesteryl esters ready to be used for steroid synthesis, so it's going to be an uphill fight to alter things by any route, given that reserve.
Now it's time to dive into some biochemistry for the next few paragraphs - follow along if you like, or jump down to near the end if you don't want to see a lot of structures. OK, in steroid synthesis the first thing that happens is the chewing off of a side chain on the D ring to form pregnenolone, which is then turned into progesterone. That's the starting material for both testosterone and cortisol/cortisone. (Note that those last two are interconverted in the body by the 11-HSD enzymes).
Going down these different pathways, testosterone and cortisol end up with rather different structures. Cortisol's more complex. If you flip back and forth between those links in the previous paragraph, you'll see that the A and B rings are the same in both, but the C ring of cortisol has an extra hydroxyl group at C11, and it also has some oxidized side chain left at C17, which has been completely chopped off in testosterone. The question is, can you get from cortisol back to something that could be used to make testosterone?
I can believe the side-chain transformation much easier than the C-11 deoxygenation. Here's the metabolic fate of cortisol. Note that all these metabolites still have an oxidized C-11 - if anything is going to be recycled into testosterone, that C-11 is going to have to be reduced back down. And if there's a metabolic pathway that does that to any degree, I can't seem to find out anything about it. If it's a feasible pathway at all, it must be very minor indeed. If any steroid experts can shed light on this, I'd be glad to hear the details. (There's also the question of how long such intermediates would be available, versus their half-life before further metabolism and excretion, but that's a whole other issue).
No, if Landis's carbon isotope ratios are off significantly - and we haven't seen the official numbers yet - then it's hard for me to see how the cortisone injections could have much to do with it. We'll be stuck, in that case, with either conspiracy theories or with the conclusion that Landis used testosterone, and if it comes to that, I know which one I'm most likely to believe.
Category: Analytical Chemistry | Current Events
August 1, 2006
The New York Times broke the story today that the testosterone found in Tour de France champion Floyd Landis's blood was not from a natural source. Just how do they know that, and how reliable is the test?
The first thing an anti-doping lab looks for in such a case is the ratio of testosterone to the isomeric epitestosterone - too high an imbalance is physiologically unlikely and arouses suspicion. Landis already is in trouble from that reading, but the subject of the Times scoop is the isotopic ratio of the testosterone itself. And that one is going to be hard to get away from, if it's true.
Update: people are asking me why athletes don't just take extra epitestosterone to even things out. That they do - that's the most basic form of masking, and if Landis's ratio was as far off as is being reported, it's one of the odd features about this case. But the isotope test will spot either one, if it's not the kind your body produces itself - read on.
Steroids, by weight, are mostly carbon atoms. Most of the carbon in the world is the C-12 isotope, six protons and six neutrons, but around one per cent of it has an extra neutron to make it C-13. Those are the only stable isotopes of carbon. You can find tiny bits of radioactive C-14, though, and you can also get C-11 if you have access to a particle accelerator. Work fast, though, because it's hot as a pistol.
So, testosterone has 19 carbon atoms, and if on average every one out of a hundred carbon atoms is a C-13, you can calculate the spread of molecular weights you could expect, and their relative abundance. Roughly one molecule out of every seventy would have two C-13 atoms in there somewhere, one out of every thousand or so would have three, and so on. A good mass spectrometer will lay this data out for you like a deck of cards.
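If you want to run those numbers yourself, the isotopologue spread is just a binomial calculation. A sketch, using the round 1% abundance from the paragraph above (the true natural C-13 abundance is closer to 1.1%):

```python
from math import comb

def isotopologue_fraction(n_carbons, k_heavy, p13=0.01):
    """Binomial probability that exactly k_heavy of a molecule's n_carbons
    are C-13, at an assumed 1% per-atom abundance."""
    return comb(n_carbons, k_heavy) * p13**k_heavy * (1 - p13)**(n_carbons - k_heavy)

# testosterone has 19 carbons
for k in range(4):
    print(k, f"{isotopologue_fraction(19, k):.4f}")
# roughly 83% all-C-12, 16% with one C-13, 1.4% with two, 0.08% with three
```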
But here's the kicker: those isotopic forms of the elements behave a bit differently in chemical reactions. The heavier ones do the same things as their lighter cousins, but if they're involved in or near key bond-breaking or bond-making steps, they do them more slowly. It's like having a heavier ball attached to the other end of a spring. This is called a kinetic isotope effect, and chemists have found all sorts of weird and ingenious ways to expoit it. But it's been showing up for a lot longer than we've been around.
The enzymatic reactions that plants and bacteria use when they take up or form carbon dioxide have been slowly and relentlessly messing with the isotope ratios of carbon for hundreds of millions of years. And since decayed plants are food for other plants, and the living plants are food for animals, which are food for other animals and fertilizer for still more plants. . .over all this time, biological systems have become enriched in the lighter, faster-reacting C-12 isotope, while the rest of the nonliving world has become a bit heavier in C-13. You can sample the air next to a bunch of plants and watch as they switch from daytime photosynthesis to nighttime respiration, just based on the carbon isotope ratios. Ridiculously tiny variations in these things can now be observed, which have led to all sorts of unlikely applications, from determining where particular batches of cocaine came from to figuring out the dietary preferences of extinct herbivores.
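For the curious, those "ridiculously tiny variations" are conventionally reported in delta notation: the per-mil deviation of a sample's 13C/12C ratio from the VPDB reference standard. A minimal sketch of the bookkeeping – the sample ratio below is made up purely for illustration:

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample):
    """Per-mil (parts-per-thousand) deviation of a sample's 13C/12C
    ratio from the VPDB standard."""
    return (r_sample / R_VPDB - 1) * 1000

# a hypothetical 13C-depleted biological sample:
print(round(delta13C(0.01093), 1))  # about -27 per mil, a typical C3-plant value
```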
So, if your body is just naturally cranking out the testosterone, it’s going to have a particular isotopic signature. But if you’re taking the synthetic stuff, which has been partly worked on with abiotic forms of carbon derived from a different source (see below), the fingerprints will show. (Update: yes, this means that the difference between commercial testosterone and the body's own supply isn't as large as it would be otherwise, since the commercial synthesis generally starts from plant-derived steroid backbones. But it's still nothing that a good mass spec lab would miss). If the news reports are right, that's what Landis's blood samples have shown. And if they have, there seems only one unfortunate conclusion to be drawn.
Chem-Geek Supplemental Update: for the folks who have been wondering where exactly the isotopic difference comes in, here's the story: synthetic testosterone is made from phytosterol precursors, typically derived from wild yams or soy. Those are both C3 plants, which take up atmospheric carbon dioxide by a different route than C4 plants such as corn and sugarcane, leading to noticeably different isotope ratios. That's where all the isotope-driven studies of diet start from. The typical Western industrial-country diet is derived from a mixture of C3 and C4 stocks, so the appearance of testosterone with a C3-plant isotopic profile is diagnostic.
Category: Analytical Chemistry | Current Events
June 3, 2004
Another day spent rooting around in the archives, trying to appease the rapacious Taiwanese patent office. One more day should about do it, and not a moment too soon. I'm now unearthing NMR spectral data for compounds, and translating those to print is not enjoyable.
For those outside the field, an NMR spectrum of a typical organic molecule is a rather complex linear plot of multiple lines and peaks. After staring at it a while, it gets rendered into text as something like "1.63, t, 3H; 2.34, s, 3H; 3.1 - 3.39, m, 4H. . ." In plain text, that's "At 1.63 there's a triplet signal representing three protons, at 2.34 a singlet worth three more, and between 3.1 and 3.39 there's a messy multiplet that adds up to four protons' worth. . ."
If you really want to get into it, you list the coupling constants, the spacings between the individual peaks of those triplets and so on. No thanks. A typical spectrum will go on for a reasonable paragraph in this way, and the Taiwanese would like nothing better than several pages of this sort of thing, or so they maintain. What they'll get is as much as I can stand.
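Rendering the peak list is mechanical enough that it's exactly the sort of chore one would script rather than type. A toy formatter for the convention above – the tuple data structure is my own invention, purely for illustration:

```python
def format_peaks(peaks):
    """Render (shift, multiplicity, n_protons) tuples in the compact
    journal style described above. A shift can be a single float or a
    (low, high) pair for a multiplet range."""
    parts = []
    for shift, mult, nh in peaks:
        if isinstance(shift, tuple):
            shift_txt = f"{shift[0]} - {shift[1]}"
        else:
            shift_txt = f"{shift}"
        parts.append(f"{shift_txt}, {mult}, {nh}H")
    return "; ".join(parts)

# the example spectrum from the post
print(format_peaks([(1.63, "t", 3), (2.34, "s", 3), ((3.1, 3.39), "m", 4)]))
# -> 1.63, t, 3H; 2.34, s, 3H; 3.1 - 3.39, m, 4H
```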
I'll try to lead off next week with a discussion of today's news about everyone's pal, Eliot Spitzer, and his suit against GSK. It's a wide-ranging topic, and there wasn't enough time to wrestle it to the ground today.
Category: Analytical Chemistry | Patents and IP
February 1, 2002
I can't talk about rain forest drug discovery without mentioning the (pretty bad) 1992 Sean Connery movie Medicine Man. He plays an alleged biochemist who comes up with a Miracle Drug, more or less by finding it under a leaf.
Plenty of large and small stuff is misportrayed, but I did say it was a movie. (One of these days someone will have to make a list of jobs that movies actually get right.) The part of this one that drug discovery people particularly enjoyed, though, was when some crude extract is fed into an impressive device that immediately displays the structure of the active compound. "I want one of those!" was the universal reaction.
I believe that this was supposed to be the one active component of a complex mixture, the plot hinging on being able to find it and isolate it. Of course, Shaman's business model (see previous post) depended on being able to do this sort of thing, and you see where it got them.
I only wish we could find things out as suddenly and dramatically as they do in films like these. As is true in most areas of research, medicinal chemists spend a fair amount of time looking at printouts (or up at the ceiling tiles), wondering just what the heck happened in the last experiment. Determining chemical structures is easier than it's ever been (read: in most cases, it's possible to do it), but for natural products it still isn't trivial. My Connery-ometer remains on back-order.
Category: Analytical Chemistry