About this Author
College chemistry, 1983
The 2002 Model
After 10 years of blogging. . .
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
In the Pipeline:
Don't miss Derek Lowe's excellent commentary on drug discovery and the pharma industry in general at In the Pipeline
December 12, 2013
As longtime readers know, one of my spare-time occupations is amateur astronomy. I often get asked by friends and colleagues for telescope recommendations, so (just as I did last year), I'd like to provide some, along with some background on the whole topic.
The key thing to remember with a telescope is that other things being equal, aperture wins. More aperture means that you will be able to see more objects and more details. It's only fair to note that not all amateur astronomers agree with this, or about which kind of scope is best. As you'll see, larger apertures involve some compromises. And keep in mind that while a bigger scope can show you more, the best telescope is the one that you'll actually haul out and use. Overbuying has not been my problem, dang it all, but it has been known to happen. These days, eight-inch reflectors are a good solid entry point, but smaller ones will be cheaper (and perhaps worth it to see if this is something you really want to get into).
There, I've mentioned reflectors. Those are one of the three main kinds of scope to consider; the other two are refractors and folded-path designs. The refractors are the classic lens-in-the-front types. They can provide very nice views, especially of the planets and other brighter objects, and many planetary observers swear by them. But per inch of aperture, they're the most expensive, especially since for good views you have to spring for high-end optics to keep from having rainbow fringes around everything. I can't recommend a refractor for a first scope, for these reasons. A cheap one is not going to be a good one. That's especially true since a lot of the refractors you see for sale out there are nearly worthless - a casual buyer would be appalled at the price tag for a decent one. (Scroll down on that link to see what I mean). No large refractors have been built for astronomical research for nearly a hundred years.
That said, refractors have very, very devoted fans. If your vision is discerning enough, you'll enjoy the views through a really good one more than through any other kind of scope. But if you're just starting out, your vision is almost certainly not good enough yet (see below), so I continue to steer people away at first.
The next type, reflectors, are all variations on Isaac Newton's design: open tube at the top, mirror at the bottom, and an angled secondary mirror back near the top to reflect the light out to the eyepiece in the side. All modern large-aperture research telescopes are some variety of reflector. They provide the most aperture per dollar, especially with a simple Dobsonian mount (more on mounts below). One disadvantage compared to the other two types is that reflectors have to be aligned (collimated) when you first get them (and every so often afterwards) to make sure the mirrors are all working together. A badly collimated reflector will provide ugly views indeed, but it's at least easy to fix. It's also true that if the primary mirror is of poor quality, you're in trouble, but the average these days is actually quite good, and this really isn't much of a problem any more.
Finally, the folded-path (catadioptric) types (Schmidt-Cassegrain and Maksutov designs, mostly) are a hybrid. They have a mirror in the back, but also a thin corrector plate covering the front, which also has a small secondary mirror in the middle of it. The light path ends up coming out the back of the tube, through a hole in the primary mirror. Like refractors, these basically never have to be aligned. They're more expensive per aperture unit than reflectors, but a lot less than refractors. Their views are pretty good, although purists argue about how they compare to a reflector of equal size. (Refractor owners would probably win that argument, but they have to drop out at about the five or six-inch mark, when the other two telescope designs are just getting started). These designs are also compact (all that light folding), which makes them more portable and easier to mount.
And that brings up the next topic: what do you mount one of these fine optical tubes on, so you can actually look at things? An equatorial or a fork mount will let you follow the motion of the objects in the sky easily, especially with a motor drive - the Earth's rotation is always sweeping things out of your view, otherwise. A decent mount of this kind will definitely add to your costs, though. The "Dobsonian" mount is a favorite of reflector owners, since it's quite simple and allows you to put more of your money into the optics. You do have to manually grab the telescope tube and move it, though, which takes some practice (and often some home-brew messing around with the mount). Some people don't mind this, others are driven nuts by it. You can put a motorized platform under a Dobsonian (my own setup) to motor-drive it, which some consider the best of both worlds. This is, though, suitable only for visual observing; a platform is almost never good enough for real astrophotography (see below for more).
On the topic of motorized telescope mounts, I should say something about "Go-to" models. These are not only motorized to track objects, they will slew the scope around to find them from a database or by manual entry. I'm very much of two minds on these. For an experienced observer, an astrophotographer, or a researcher, they can be an indispensable tool to spend more time observing and less time hunting around. For a total beginner, they can ease a lot of frustration when first learning the sky. But at the same time, they also can keep you from learning the sky at all, and they can very definitely encourage hopping around too quickly from one object to another. If you do that, you can "see" all sorts of stuff in one evening, while at the same time hardly seeing anything at all.
That's because visual observing is all about training yourself to see things. One thing every new telescope owner should know is that Very Little Ever Looks Like the Photographs. Especially since the photos are long exposures on wildly sensitive CCD chips, usually through big instruments, and under excellent conditions. Through the eyepiece, I am very sad to report, nebulae are not tapestries of red, pink, green, and purple: they range from greenish grey to bluish grey. And although with practice you'll pick up really surprising and beautiful amounts of detail in deep-sky objects, at first, everything can look like a blob. Or a smear. Or not appear to be there at all, even when a practiced observer can see it right smack in the center of the eyepiece field. I really enjoy seeing these things with my own eyes, and trying to find out just how much detail I can pick out and how faint I can go, but it's not for everyone. This is one of the single biggest things that needs to be emphasized to anyone planning to buy a telescope. Even the planets need practice: you'd be surprised how small Saturn is in a budget eyepiece, although it's striking at almost any magnification. If conditions are bad, Mars and Jupiter can look like they're at the bottom of a pot of boiling water. And you need time and patience to see all the details there are to see on them.
Now, photography is another story. Astrophotography is an expensive word, although thanks to webcams and the like, getting into it is not quite as bad as it used to be. But for most purposes, you'll need one of those motorized mounts that'll track objects across the sky. That's very convenient for visual observing, too, naturally, but a really good one for long-exposure photography can cost more than the telescope itself! I'm not an astrophotographer myself, so I won't go into great detail, but if you want to try this part of the hobby, prepare to think about the telescope mount as much as you think about the optics. Imaging equipment ranges from simple webcams all the way up to wonderful stuff that easily costs as much as a new car, or perhaps a small house. And you'll also need to be prepared to learn a lot about digital post-processing. That's another thing that all those great astrophotos have in common: someone spent a lot of time working on them, after they spent a lot of time gathering the data in the first place.
So, what to buy? I've scattered some Amazon links in the above to representative scopes. In general, Meade and Celestron are the two brands you'll see the most, and if you stay away from their cheap refractors, you should be fine. And Orion also sells good stuff under their own brand (on Amazon and from their own site). (Again, I'd stay away from inexpensive refractors there, too). Other good sources are Astronomics and Anacortes.
There are a lot of excellent resources for specific opinions on different models, and on telescopes in general, at Scopereviews. Cloudy Nights is also a huge resource, full of message boards on every amateur astronomy topic you can think of (and classified ads for used equipment as well). Rod Mollise has a lot of good stuff, if you can handle his folksy dialect style. For the truly hard-core visual observer, Alvin Huey at Faint Fuzzies is a great source for downloadable observing guides (many of them free). I use them, although there are plenty of objects in them that are outside my range (I use an 11-inch Dobsonian reflector). He has observing guides for sale, too, but every single thing in every one of them is outside my observing range. Dang it all. And I can recommend the free software Cartes du Ciel (Sky Charts) for printing out charts of your own.
+ TrackBacks (0) | Category: Science Gifts
I'm sure that many readers here will be interested to hear that (according to some outfit called CareerBliss.com) the happiest company in America to work for is. . .Pfizer. Probably makes you happy just to hear about it, doesn't it?
+ TrackBacks (0) | Category: Business and Markets
I always enjoy this one: time to vote on Adam Feuerstein's "Worst Biotech CEO" awards for this year. There are, as is so often the case, some real stinkers on the list. Tough choices await.
+ TrackBacks (0) | Category: Business and Markets
Chemjobber has a good post on a set of papers from Pfizer's process chemists. They're preparing filibuvir, and a key step along the way is a Dieckmann cyclization. Well, no problem, say the folks who've never run one of these things - just hit the diester compound with some base, right?
But which base? The example in CJ's post is a good one to show how much variation you can get in these things. As it turned out, LiHMDS was the base of choice, much better than NaHMDS or KHMDS. Potassium t-butoxide was just awful. The hexamethyldisilazide was also much better than LDA, and those two are normally pretty close. But there were even finer distinctions to be made: it turned out that the reaction was (reproducibly) slightly better or slightly worse with LiHMDS from different suppliers. The difference came down to two processes used to prepare the reagent - via n-BuLi or via lithium metal - and the Pfizer team still isn't sure what the difference is that's making all the difference (see the link for more details).
That's pure, 100-proof process chemistry for you, chasing down these details. It's a good thing for people who don't do that kind of work at all, though, to read some of these papers, because it'll give you an appreciation of variables that otherwise you might not think of at all. When you get down to it, a lot of our reactions are balancing on some fairly wobbly tightropes strung across the energy-surface landscape, and it doesn't take much of a push to send them sliding off in different directions. Choice of cation, of Lewis acid, of solvent, of temperature, order of addition - these and other factors can be thermodynamic and kinetic game-changers. We really don't know too many details about what happens in our reaction flasks.
And a brief med-chem note, for context: filibuvir, into which all this work was put, was dropped from development earlier this year. Sometimes you have to do all the work just to get to the point where you can drop these things - that's the business.
+ TrackBacks (0) | Category: Chemical News | Infectious Diseases
December 11, 2013
One should be cheering the news that Great Britain will double funding for Alzheimer's and dementia research. But there's something odd about the way it's being presented, at least to my eyes. Here's a story from the Guardian that might illustrate what I mean:
The health secretary, Jeremy Hunt, said he hoped the dementia summit would have the same effect as the G8 summit in Gleneagles on HIV/Aids in 2005.
"Today should be an optimistic day," he told BBC Breakfast. "Tony Blair had the G8 summit in Gleneagles in 2005 on HIV/Aids and actually that did turn out in retrospect to be a turning point in the battle against Aids.
"I think if you bring the world's leaders together, health ministers from across the world, and we are all resolved that we really are going to do something about this as we face up to an ageing society."
If 2005 was some sort of widely-recognized turning point in HIV control, I must have missed it. I'll be glad to be corrected, but the last sentence in that quote makes me wonder, because it isn't a sentence. Try it out: the first part isn't connected with the second. He thinks that if you bring the world's leaders together, then. . .what will happen? "If" implies some sort of resolution in a sentence, and there isn't any. How about the second part? They're all resolved that they're really going to do something - fine, but isn't that the easiest part? The simplest part? I mean, coming out and saying that you'd like to "do something" about a problem that everyone would like to see solved is not that big a step, is it?
Well, doubling research funding is certainly doing something, there's no taking away from that. Much is made in the various press articles about Lilly's Alzheimer's scan, which Britain's National Health Service is going to make available to some patients. Now, Lilly has been talking bravely about Alzheimer's for some time now, and to be fair to them, they've been spending pretty bravely, too. No doubt their hope has been that their imaging agent would match up with some successful therapy they'd develop, but the "successful therapy" part has been the hard one.
But British Prime Minister David Cameron has also been talking about finding a cure by 2025. I hope we do - I may need it by then - but it's going to take a generous slug of luck for that to happen. I don't hold out much hope for anything currently in development as a cure, although I'd like to be wrong about that. And something that's not in development would barely make it through, on an optimistic timetable, by 2025. We certainly don't know enough about Alzheimer's to say that we're on track, so someone will have to get lucky. You wouldn't know that from the British newspapers, though. They've also been excited about the potential of Eli Lilly's solanezumab, which must make the UK the only area outside of Indianapolis where that state of mind obtains.
That's the part that worries me about the public statements in this area. Politicians (and CEOs) are prone to ringing declarations that make it sound as if all that's really needed is gumption and willpower - good faith will carry the day. But that just isn't true in research. It really isn't. Nerve and perseverance are necessary, and how, but they're nowhere near sufficient. To pretend otherwise is to engage in magical thinking, and the history of Big Proclamations in the biomedical field should be enough to prove that to anyone.
Back in 2003, we were supposedly going to eliminate death and suffering from cancer by 2015 (and Senator Arlen Specter asked if maybe we couldn't move the timetable up to 2010). On a lesser level, back in 2009, there were statements that a cure for the common cold was at hand. Sorry about that. The British press has a particular weakness for proclaimed Alzheimer's cures, not that the US press doesn't go for them, too.
No, saying it will not make it so. I don't know how to make it so, other than by spending a lot of money and a lot of time, and working really hard, and hoping for the best. But that's not the stuff of headlines.
+ TrackBacks (0) | Category: Alzheimer's Disease | Cancer
Here's a roundup of the top chem-blog posts of the year, as picked by Nature's Sceptical Chymist blog. I made the list, but a lot of other good stuff did, too - have a look. Edit - link fixed now - sorry!
+ TrackBacks (0) | Category: Chemical News
Nobel laureate Randy Schekman has stirred up a lot of controversy with his public declaration that he will send no more manuscripts to Nature, Science, Cell and such "luxury journals".
. . .The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession's interests, let alone those of humanity and society.
We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.
These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.
These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor". . .
The editorial staffs at these journals have been quick to point out that this is not necessarily a disinterested move, since Schekman is editor of eLife, a high-end open access journal. And no doubt some colleagues thought to themselves that it's much easier to abjure publication in the big journals after you've won your Nobel prize.
But there's a flip side to both those arguments. Schekman is indeed editor of eLife, but that also means that he's willing to put his time and effort where his mouth is when he says that the current top-tier journals are a problem. And he's also willing to use his Nobelist profile in the service of that idea - many of the people complaining about how Schekman is already famous would have produced no headlines at all had they announced that they would no longer publish in Nature. I seem to have taken that vow myself, at least so far in my career, without even realizing it.
Here's the response from Phillip Campbell at Nature Publishing Group. He says that he doesn't think it's helpful to mix the idea of open access with selectivity in publication, since open access is more of a business model. I see his point - there are selective open-access journals (like eLife), and there are nonselective ones, all the way down to the outright frauds. The same applies to the traditional journals, although the opportunities for fraud are not as lucrative. Campbell also decries the emphasis on impact factors, although the same objections can be made as with Schekman, that it's easy for the editor of a high-impact-factor journal to play them down.
But impact factors really are pernicious. There will always be more- and less-prestigious places to publish your research - any attempt to legislate otherwise is on a par with the creation of the New Soviet Man. But the advent of the impact factor gives everyone a lazy way to make the worst of it. It's especially tempting for people who don't even understand what someone else's research means, or might mean, but who have to judge them anyway. Why bother with the technical details? You have everything you need, to three idiotic decimal places, right there.
That actually sums up what I think about impact factors, although I've never put it quite so succinctly before: they're a moral hazard.
Update: more from Retraction Watch on the subject.
+ TrackBacks (0) | Category: The Scientific Literature
December 10, 2013
I've mentioned Theodore Gray's book The Elements before as a fine gift for anyone who's interested in science or chemistry. I have a copy at home, although I don't have the follow-up, the Elements Vault, which apparently also has some chemical samples in it (doubtless of some of the less offensive elements!)
Last year I ordered the companion Elements Jigsaw Puzzle, which I did with the kids during January and February, to produce a three-foot-wide periodic table with information and photographs of each element. I did not miss the opportunity to mention some of the ones that I'd worked with (and I'm soon to add a couple of new ones to that list - more later). Gray also has a deck of element cards and a calendar, for your decorating needs.
There are other good entries in this area. The Disappearing Spoon is an entertaining book on various odd properties of the elements (chemists will have said "Gallium!" by now for the spoon of the title). I haven't seen Periodic Tales, but it comes well recommended.
A slightly different note is struck by another book I've long recommended, Oliver Sacks' Uncle Tungsten, which is a memoir as well as a meditation on chemistry (and the love of chemistry). Another memoir, an episodic one, is of course the late Primo Levi's The Periodic Table. It's somber at times, but also amusing, and when I read in it the phrase "Chlorides are rabble", I knew I was in the presence of a good writer, a good chemist, and a good translator.
+ TrackBacks (0) | Category: Science Gifts
Here are some slides from Anthony Nicholls of OpenEye, from his recent presentation here in Cambridge on his problems with molecular dynamics calculations. Here's his cri du coeur (note: fixed a French typo from the original post there):
. . .as a technique MD has many attractive attributes that have nothing to do with its actual predictive capabilities (it makes great movies, it’s “Physics”, calculations take a long time, it takes skill to do right, “important” people develop it, etc). As I repeatedly mentioned in the talk, I would love MD to be a reliable tool - many of the things modelers try to do would become much easier. I just see little objective, scientific evidence for this as yet. In particular, it bothers me that MD is not held to the same standards of proof that many simpler, empirical approaches are - and this can’t be good for the field or MD.
I suspect he'd agree with the general principle that while most things that are worthwhile are hard, not everything that's hard is worthwhile. His slides are definitely fun to read, and worthwhile even if you don't give a hoot about molecular dynamics. The errors he's warning about apply to all fields of science. For example, he starts off with the definition of cognitive dissonance from Wikipedia, and proposes that a lot of the behavior you see in the molecular dynamics field fits the definitions of how people deal with this. He also maintains that the field seems to spend too much of its time justifying data retrospectively, and that this isn't a good sign.
I especially enjoyed his section on the "Tanimoto of Truth". That's comparing reality to experimental results. You have the cases where there should have been a result and the experiment showed it, and there shouldn't have been one, and the experiment reproduced that, too: great! But there are many more cases where only that first part applies, or gets published (heads I win, tails just didn't happen). And you have the inverse of that, where there was nothing, in reality, but your experiment told you that there was something. These false positives get stuck in the drawer, and no one hears about them at all. The next case, the false negatives, often end up in the "parameterize until publishable" category (as Nicholls puts it), or they get buried as well. The last category (should have been negative, experiment says they're negative) are considered so routine and boring that no one talks about them at all, although logically they're quite important.
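For the computationally inclined, that scoring scheme is easy to make concrete: treat reality and the experiment's verdicts as two binary vectors and take their Tanimoto (Jaccard) similarity, which, fittingly, ignores the true negatives entirely - the routine "nothing there, nothing seen" cases that no one talks about. Here's a quick Python sketch, with made-up data:

```python
def tanimoto_of_truth(reality, experiment):
    """Tanimoto similarity of two equal-length binary vectors:
    TP / (TP + FP + FN). True negatives don't enter the score at all,
    mirroring the point that the boring 'correctly nothing' cases
    get ignored."""
    if len(reality) != len(experiment):
        raise ValueError("vectors must be the same length")
    tp = sum(1 for r, e in zip(reality, experiment) if r and e)
    fp = sum(1 for r, e in zip(reality, experiment) if not r and e)
    fn = sum(1 for r, e in zip(reality, experiment) if r and not e)
    denom = tp + fp + fn
    return tp / denom if denom else 1.0

# Ten hypothetical systems: 1 = effect actually present / effect reported
reality    = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
experiment = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]  # one false negative, one false positive

print(round(tanimoto_of_truth(reality, experiment), 2))  # 4 TP / (4 + 1 + 1) -> 0.67
```

A field that only publishes the hits, on this scale, can look much better than it deserves to, since the false positives in the drawer never show up in the denominator.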
All this can impart a heavy, heavy publication bias: you only hear about the stuff that worked, even if some of the examples you hear about really didn't. And unless you do a lot of runs yourself, you don't usually have a chance to see how robust the system really is, because the data you'd need aren't available. The organic synthesis equivalent is when you read one of those papers whose reactions do, in fact, work on the compounds in Table 1, but hardly any others. And you have to pay close attention to Table 1 to realize that you know, there aren't any basic amines on that list (or esters, or amides, or what have you), are there?
The rest of the slides get into the details of molecular dynamic simulations, but he has some interesting comments on the paper I blogged about here, on modeling of allosteric muscarinic ligands. Nicholls says that "There are things to admire about this paper- chiefly that a prospective test seems to have been done, although not by the Shaw group." That caught my eye as well; it's quite unusual to see that, although it shouldn't be. But he goes on to say that ". . .if you are a little more skeptical it is easy to ask what has really been done here. In their (vast) supplementary material they admit that GLIDE docking results agree with mutagenesis as well (only, “not quite as well”, whatever that means- no quantification, of course). There’s no sense, with this data, of whether there are mutagenesis results NOT concordant with the simulations." And that gets back to his Tanimoto of Truth argument, which is a valid one.
He also points out that the predictions ended up being used to make one compound, which is not a very robust standard of proof. The reason, says Nicholls, is that molecular dynamics papers are held to a lower standard, and that's doing the field no good.
+ TrackBacks (0) | Category: In Silico | Who Discovers and Why
December 9, 2013
I've had the chance to use good old elemental bromine this morning, for the first time in several years. I can never see the stuff without thinking of this incident, a memorable part of the first synthetic scheme I ever tried that involved bromine. In the same way, every time I come across thiophenol - which isn't often, fortunately - I'm immediately taken back to this chemistry, which is a reaction I'll never forget either, despite numerous attempts to expunge it from my memory.
So here's a good question for a Monday: what reagents immediately recall something from your chemical past, and why? I'd assume that most working organic chemists have a few of these in their past. The common reagents all tend to blur together, but there will always be a few that have shown up only in one or two memorable instances. So what are yours?
+ TrackBacks (0) | Category: Life in the Drug Labs
Pick an empirical formula. Now, what's the most stable compound that fits it? Not an easy question, for sure, and it's the topic of this paper in Angewandte Chemie. Most chemists will immediately realize that the first problem is the sheer number of possibilities, and the second one is figuring out their energies. A nonscientist might think that this is the sort of thing that would have been worked out a long time ago, but that definitely isn't the case. Why think about these things?
What is this “Guinness” molecule isomer search good for? Some astrochemists think in such terms when they look for molecules in interstellar space. A rule with exceptions says that the most stable isomers have a higher abundance (Astrophys. J. 2009, 696, L133), although kinetic control undoubtedly has a say in this. Pyrolysis or biotechnology processes, for example, in anaerobic biomass-to-fuel conversions, may be classified on the energy scale of their products. The fate of organic aerosols upon excitation with highly energetic radiation appears to be strongly influenced by such sequences because of ion-catalyzed chain reactions (Phys. Chem. Chem. Phys. 2013, 15, 940). The magic of protein folding is tied to the most stable atomic arrangement, although one must keep in mind that this is a minimum-energy search with hardly any chemical-bond rearrangement. We should rather not think about what happens to our proteins in a global search for their minimum-energy structure, although the peptide bond is not so bad in globally minimizing interatomic energy. Regularity can help and ab initio crystal structure prediction for organic compounds is slowly coming into reach. Again, the integrity of the underlying molecule is usually preserved in such searches.
Things get even trickier when you don't restrict yourself to single compounds. It's pointed out that the low-energy form of the hexose empirical formula (C6H12O6) might well be a mixture of methane and carbon dioxide (which sounds like the inside of a comet to me). That brings up another reason this sort of thinking is useful: if you want to sequester carbon dioxide, what's the best way to do it? What molecular assemblies are most energetically favorable, and at what temperatures do they exist, and what level of complexity? At larger scales, we'll also need to think about such things in the making of supramolecular assemblies for nanotechnology.
The author, Martin Suhm of Göttingen, calls for a database of the lowest-energy species for each given formula as an invitation for people to break the records. I'd like to see someone give it a try. It would provide challenges for synthesis, spectroscopy and (especially) modeling and computational chemistry.
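The bookkeeping side of such a database, at least, is simple to sketch: canonicalize each empirical formula (Hill order is the usual convention - carbon first, then hydrogen, then everything else alphabetically) and keep only the lowest-energy entry seen so far, replacing it whenever someone breaks the record. The species names and energies below are placeholders for illustration, not real thermochemistry:

```python
from collections import Counter
import re

def hill_formula(formula):
    """Canonicalize an empirical formula into Hill order:
    C first, then H, then the remaining elements alphabetically."""
    counts = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(n) if n else 1
    order = [e for e in ("C", "H") if e in counts]
    order += sorted(e for e in counts if e not in ("C", "H"))
    return "".join(f"{e}{counts[e] if counts[e] > 1 else ''}" for e in order)

class EnergyRecords:
    """Keep only the lowest-energy species reported for each formula."""
    def __init__(self):
        self.records = {}

    def submit(self, formula, species, energy):
        """Record (species, energy) if it beats the current record holder.
        Returns True if a new record was set."""
        key = hill_formula(formula)
        best = self.records.get(key)
        if best is None or energy < best[1]:
            self.records[key] = (species, energy)
            return True
        return False

db = EnergyRecords()
db.submit("H12C6O6", "glucose", -100.0)                 # placeholder energy
db.submit("C6H12O6", "3 CH4 + 3 CO2 mixture", -120.0)   # placeholder; beats it
print(db.records["C6H12O6"][0])  # the methane/CO2 mixture now holds the record
```

The hard part, of course, is not the dictionary lookup but computing (or measuring) the energies in the first place.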
+ TrackBacks (0) | Category: Chemical News | In Silico
December 6, 2013
There have been many accusations over the years of people duplicating and fudging gels in biology papers. The Science-Fraud.org site made quite an impression with some of these, and there are others. But as in so many other fields, manual labor is giving way to software and automation.
Nature News has the story of an Italian company that has come up with an automated way of searching images in scientific papers for duplication. The first scalp has already been claimed, but how bad is the problem?
Now midway through the analysis, he estimates that around one-quarter of the thousands of papers featuring gels that he has analysed so far potentially breached widely accepted guidelines on reproducing gel images. And around 10% seem to include very obvious breaches, such as cutting and pasting of gel bands. Some journals were more affected than others, he says. Those with a high impact factor tended to be slightly less affected. He plans to publish his results.
I'll be happy to see the paper, and glad to see this sort of technique applied more broadly. I wonder if it can be adapted to published NMR spectra?
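The core trick behind this kind of duplicate screening can be sketched with a perceptual "average hash": boil each image region down to a small grid of brighter/darker-than-average bits, then flag regions whose bit patterns are identical or nearly so. This toy Python version works on grayscale pixel grids represented as lists of lists; the actual tool in the Nature News story is surely far more sophisticated (robust to rescaling, contrast changes, and so on), so treat this strictly as an illustration of the idea:

```python
def average_hash(pixels, size=4):
    """Downsample a grayscale grid to size x size by block-averaging,
    then emit one bit per cell: 1 if the cell is brighter than the mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return tuple(1 if c > mean else 0 for c in cells)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Three made-up "gel lanes": the second is a verbatim copy of the first,
# the third is genuinely different.
lane_a = [[10, 10, 200, 200], [10, 10, 200, 200],
          [30, 30, 220, 220], [30, 30, 220, 220]]
lane_b = [row[:] for row in lane_a]       # pasted duplicate
lane_c = [[200, 200, 10, 10]] * 4         # different band pattern

print(hamming(average_hash(lane_a), average_hash(lane_b)))  # 0 -> flag as duplicate
print(hamming(average_hash(lane_a), average_hash(lane_c)))  # large -> distinct
```

Applying something like this to published NMR spectra would presumably mean hashing the trace rather than pixel blocks, but the flag-anything-too-similar logic would be the same.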
Category: The Dark Side | The Scientific Literature
Well, to go along with that recent paper on confounding cell assays, here's a column by John LaMattina on the problem of confounding clinical results. For some years now, the regulatory and development trend has been away from surrogate markers and towards outcome studies. You'd think that lowering LDL would be helpful - is it? You'd think that combining two different mechanisms to lower blood pressure would be a good thing - is it? The only way to answer the questions is by looking at a large number of patients in as close to a real-world setting as possible.
And in many cases, we're finding out that some very reasonable-sounding ideas don't, in fact, work out in practice. These aren't just findings with new or experimental drugs, either - as LaMattina shows, we're finding out things about drugs that have been on the market for years. This illustrates several important points: (1) There's a limit to what you can find out in clinical trials. (2) There is a limit to what reasonable medical hypotheses are worth. (3) We do not understand as much as we need to about human biology, in either the healthy or diseased state. (4) A drug, even when it's been approved, even when it's been on the market for years, is always an experimental medication.
LaMattina also points out just how crazily expensive the outcomes trials are that can generate the data that we really need. He's hoping that companies that spend that sort of money will emerge with a compelling enough case to be able to recoup it. I certainly hope that, too - but I'm absolutely 50/50 on whether I think it's true.
Category: Clinical Trials | Drug Prices
December 5, 2013
I've been meaning to link to this piece by Lauren Wolf in C&E News on the connections between Parkinson's disease and environmental exposure to mitochondrial toxins. (PDF version available here). Links between environmental toxins and disease are drawn all the time, of course, sometimes with very good reason, but often when there seems to be little evidence. In this case, though, since we have the incontrovertible example of MPTP to work from, things have to be taken seriously. Wolf's article is long, detailed, and covers a lot of ground.
The conclusion seems to be that some people may well be genetically more susceptible to such exposures. A lot of people with Parkinson's have never really had much pesticide exposure, and a lot of people who've worked with pesticides never show any signs of Parkinson's. But there could well be a vulnerable population that bridges these two.
Category: The Central Nervous System | Toxicology
The tension between drug companies and regulatory agencies is constant. It would be there even if no one cared a bit what drugs cost, because you can talk about safety and efficacy without ever mentioning a price. But since we do care about drug prices, and are coming to care more about them with every passing month, the tension is higher than ever. The questions aren't just "Is this drug safe?" or "Is this drug efficacious?", or even "Is this drug relatively safe compared to its level of efficacy?" You get into the really hard ones like "Is this drug's level of efficacy worth its price?"
Great Britain's NICE is at the forefront of these arguments, and some of the largest drug companies have recently been arguing back. In a public letter, they're calling on the Prime Minister to do something about the way NICE approves new compounds:
We all accept there are increasing pressures on NHS spending, but there is a prevailing myth that medicines are expensive. In fact, spend on medicines was less than 10pc of total NHS expenditure in 2011. Britain pays less for its medicines than almost anywhere else in Europe. Over £7bn has already been saved from the medicines bill. No other part of the NHS is saving the taxpayer money on such a scale.
Medicines should not just be seen as a cost. They are an investment and an essential part of improving patient outcomes. Yet, fewer than one in three medicines have been recommended by the National Institute for Health and Care Excellence (NICE) for use in the NHS in line with their licence since 2005. And the proportion of medicines refused by NICE is only increasing.
The last straw seems to have been a recently-announced "voluntary" agreement with drug companies to freeze the costs of supplying drugs to the National Health Service. That was voluntary in the sense that there would be mandatory price cuts if they didn't comply. That kind of voluntary. In turn, the drug companies' letter also has a part that might be titled "And If You Don't. . ." It goes like this:
. . .At a time when there is fierce global competition to attract investment in life sciences, the commercial environment is critical. The Government must work harder to get this right. It’s important that the impact that our medicines have on saving and improving people’s lives, and the innovation, economic stimulus and jobs that they deliver, is valued.
The key part is the "fierce global competition" phrase. I'd translate that as "You know, we don't have to employ people in this country. There are other places to go." Mind you, at least as far as medicinal chemistry is concerned, most companies in the UK seem to have already made good on that threat, even before the threat was made. But there are plenty of other jobs besides med-chem. We'll see how the UK government reacts.
Category: Business and Markets
December 4, 2013
An effort is underway to find tenants for the former AstraZeneca campus at Charnwood. A few buildings are being demolished to make room, and they're hoping for biomedical researchers to move in. I hope that works; it seems like a good research site. I'm not sure that trying to sell it as ". . .perfectly located between Leicester, Nottingham and Derby" is as good a pitch as can be made, but there are worse ones.
Category: Drug Industry History
Seth Mnookin's The Panic Virus is an excellent overview of the vaccine/autism arguments that raged for many years (and rage still in the heads of the ignorant - sorry, it's gotten to the point where there's no reason to spare anyone's feelings about this issue). Now in this post at PLOS Blogs, he's alerting people to another round of the same stuff, this time about the HPV vaccine:
Over a period of about a month, (Katie Couric's) producer and I spoke for a period of several hours before she told me that the show was no longer interested in hearing from me on air. Still, I came away from the interaction somewhat heartened: The producer seemed to have a true grasp of the dangers of declining vaccination rates and she stressed repeatedly that her co-workers, including Couric herself, did not view this as an “on the one hand, on the other hand” issue but one in which facts and evidence clearly lined up on one side — the side that overwhelmingly supports the importance and efficacy of vaccines.
Apparently, that was all a load of crap.
Read on for more. One piece of anecdotal data trumps hundreds of thousands of patients worth of actual data, you know. Especially if it's sad. Especially if it gets ratings.
Category: Autism | Infectious Diseases | Snake Oil
Here's some work that gets right to the heart of modern drug discovery: how are we supposed to deal with the variety of patients we're trying to treat? And the variety in the diseases themselves? And how does that correlate with our models of disease?
This new paper, a collaboration between eight institutions in the US and Europe, is itself a look at two other recent large efforts. One of these, the Cancer Genome Project, tested 138 anticancer drugs against 727 cell lines. Its authors said at the time (last year) that "By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies". The other study, the Cancer Cell Line Encyclopedia, tested 24 drugs against 1,036 cell lines. That one appeared at about the same time, and its authors said ". . .our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of ‘personalized’ therapeutic regimens."
Well, will they? As the latest paper shows, the two earlier efforts overlap to the extent of 15 drugs, 471 cell lines, 64 genes and the expression of 12,153 genes. How well do they match up? Unfortunately, the answer is "Not too well at all". The discrepancies really come out in the drug sensitivity data. The authors tried controlling for all the variables they could think of - cell line origins, dosing protocols, assay readout technologies, methods of estimating IC50s (and/or AUCs), specific mechanistic pathways, and so on. Nothing really helped. The two studies were internally consistent, but their cross-correlation was relentlessly poor.
It gets worse. The authors tried the same sort of analysis on several drugs and cell lines themselves, and couldn't match their own data to either of the published studies. Their take on the situation:
Our analysis of these three large-scale pharmacogenomic studies points to a fundamental problem in assessment of pharmacological drug response. Although gene expression analysis has long been seen as a source of ‘noisy’ data, extensive work has led to standardized approaches to data collection and analysis and the development of robust platforms for measuring expression levels. This standardization has led to substantially higher quality, more reproducible expression data sets, and this is evident in the CCLE and CGP data where we found excellent correlation between expression profiles in cell lines profiled in both studies.
The poor correlation between drug response phenotypes is troubling and may represent a lack of standardization in experimental assays and data analysis methods. However, there may be other factors driving the discrepancy. As reported by the CGP, there was only a fair correlation (rs < 0.6) between camptothecin IC50 measurements generated at two sites using matched cell line collections and identical experimental protocols. Although this might lead to speculation that the cell lines could be the source of the observed phenotypic differences, this is highly unlikely as the gene expression profiles are well correlated between studies.
Although our analysis has been limited to common cell lines and drugs between studies, it is not unreasonable to assume that the measured pharmacogenomic response for other drugs and cell lines assayed are also questionable. Ultimately, the poor correlation in these published studies presents an obstacle to using the associated resources to build or validate predictive models of drug response. Because there is no clear concordance, predictive models of response developed using data from one study are almost guaranteed to fail when validated on data from another study, and there is no way with available data to determine which study is more accurate. This suggests that users of both data sets should be cautious in their interpretation of results derived from their analyses.
"Cautious" is one way to put it. These are the sorts of testing platforms that drug companies are using to sort out their early-stage compounds and projects, and very large amounts of time and money are riding on those decisions. What if they're gibberish? A number of warning sirens have gone off in the whole biomarker field over the last few years, and this one should be so loud that it can't be ignored. We have a lot of issues to sort out in our cell assays, and I'd advise anyone who thinks that their own data are totally solid to devote some serious thought to the possibility that they're wrong.
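As a rough illustration of the kind of consistency check at issue (the cell line names and IC50 values below are invented for the example, not data from either study), here is how one might compute the Spearman rank correlation between two studies' potency measurements for the cell lines they share:

```python
# Hypothetical sketch: compare one drug's IC50 values (in uM) measured on
# the same cell lines by two independent studies, using Spearman rank
# correlation (the rs statistic quoted in the paper). All data invented.

def ranks(values):
    # Assign 1-based ranks, averaging over ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

study_a = {"HeLa": 0.12, "A549": 1.5, "MCF7": 0.8, "HT29": 5.0, "PC3": 0.3}
study_b = {"HeLa": 0.30, "A549": 4.0, "MCF7": 0.5, "HT29": 9.0, "PC3": 0.2}

shared = sorted(study_a.keys() & study_b.keys())
rho = spearman([study_a[c] for c in shared], [study_b[c] for c in shared])
print(f"shared cell lines: {len(shared)}, Spearman rho = {rho:.2f}")
```

Rank correlation only asks whether the two studies put the cell lines in the same sensitivity order, which is about the weakest thing you could demand of them, and the point of the paper is that even by that standard (the CGP's own camptothecin cross-site check came in at rs < 0.6) the agreement was poor.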
Here's a Nature News summary of the paper, if you don't have access. It notes that the authors of the two original studies don't necessarily agree that they conflict! I wonder if that's as much a psychological response as a statistical one. . .
Category: Biological News | Cancer | Chemical Biology | Drug Assays
December 3, 2013
Interesting science-gift ideas can be found in the "home experiments" area. There's been a small boom in this sort of book in recent years, which I think is a good thing all the way around. I believe that there's a good audience out there of people who are interested in science, but have no particular training in it, either because they're young enough not to have encountered much (or much that was any good), or because they missed out on it while they were in school themselves.
Last year I mentioned Robert Bruce (and Barbara) Thompson's Illustrated Guide to Home Chemistry Experiments along with its sequels, the Illustrated Guide to Home Biology Experiments and the Illustrated Guide to Home Forensic Science Experiments. Similar books are Hands-On Chemistry Activities and its companion Hands-On Physics Activities.
Related to these are two from Theodore Gray: Theo Gray's Mad Science, and its new sequel, Mad Science 2. Both of these are subtitled "Experiments that you can do at home - but probably shouldn't", and I'd say that's pretty accurate. Many of these use equipment and materials that most people probably won't have sitting around, and some of the experiments are on the hazardous side (which, I should mention, is something that's fully noted in the book). But they're well-illustrated from Gray's own demonstration runs, so you can at least see what they look like, and learn about the concepts behind them.
And there's copious chemistry available in a series of books by Bassam Shakhashiri, whose web site is here. These are aimed at people teaching chemistry who would like clear, tested demonstrations for their students, but if you know someone who's seriously into home science experimentation, they'll find a lot here. The most recent, Chemical Demonstrations, Volume 5, concentrates on colors and light. The previous ones are also available, and cover a range of topics in each book: Volume 4, Volume 3, Volume 2, and Volume 1.
Category: Book Recommendations | Science Gifts
The sleazy scientific publishing racket continues to plumb new depths in its well-provisioned submarine. Now comes word of "Stringer Open" - nope, not Springer Open, that one's a real publisher of real journals. This outfit is Stringer, which is a bit like finding a list of journals published by the American Comical Society. The ScholarlyOA blog noticed that the same person appears on multiple editorial boards across their various journals. When contacted, she turned out to be a secretary who's never heard of "Stringer". Class all the way. The journals themselves will be populated by the work of dupes and/or con artists - maybe some of those Chinese papers-for-rent can be stuffed in there to make a real lasagna of larceny out of the whole effort.
Category: The Dark Side | The Scientific Literature
The New Yorker has an article about Merck's discovery and development of suvorexant, their orexin antagonist for insomnia. It also goes into the (not completely reassuring) history of zolpidem (known under the brand name of Ambien), which is the main (and generic) competitor for any new sleep drug.
The piece is pretty accurate about drug research, I have to say:
John Renger, the Merck neuroscientist, has a homemade, mocked-up advertisement for suvorexant pinned to the wall outside his ground-floor office, on a Merck campus in West Point, Pennsylvania. A woman in a darkened room looks unhappily at an alarm clock. It’s 4 a.m. The ad reads, “Restoring Balance.”
The shelves of Renger’s office are filled with small glass trophies. At Merck, these are handed out when chemicals in drug development hit various points on the path to market: they’re celebrations in the face of likely failure. Renger showed me one. Engraved “MK-4305 PCC 2006,” it commemorated the day, seven years ago, when a promising compound was honored with an MK code; it had been cleared for testing on humans. Two years later, MK-4305 became suvorexant. If suvorexant reaches pharmacies, it will have been renamed again—perhaps with three soothing syllables (Valium, Halcion, Ambien).
“We fail so often, even the milestones count for us,” Renger said, laughing. “Think of the number of people who work in the industry. How many get to develop a drug that goes all the way? Probably fewer than ten per cent.”
I well recall when my last company closed up shop - people in one wing were taking those things and lining them up out on a window shelf in the hallway, trying to see how far they could make them reach. Admittedly, they bulked out the lineup with Employee Recognition Awards and Extra Teamwork awards, but there were plenty of oddly shaped clear resin thingies out there, too.
The article also has a good short history of orexin drug development, and it happens just the way I remember it - first, a potential obesity therapy, then sleep disorders (after it was discovered that a strain of narcoleptic dogs lacked functional orexin receptors).
Mignot recently recalled a videoconference that he had with Merck scientists in 1999, a day or two before he published a paper on narcoleptic dogs. (He has never worked for Merck, but at that point he was contemplating a commercial partnership.) When he shared his results, it created an instant commotion, as if he’d “put a foot into an ants’ nest.” Not long afterward, Mignot and his team reported that narcoleptic humans lacked not orexin receptors, like dogs, but orexin itself. In narcoleptic humans, the cells that produce orexin have been destroyed, probably because of an autoimmune response.
Orexin seemed to be essential for fending off sleep, and this changed how one might think of sleep. We know why we eat, drink, and breathe—to keep the internal state of the body adjusted. But sleep is a scientific puzzle. It may enable next-day activity, but that doesn’t explain why rats deprived of sleep don’t just tire; they die, within a couple of weeks. Orexin seemed to turn notions of sleep and arousal upside down. If orexin turns on a light in the brain, then perhaps one could think of dark as the brain’s natural state. “What is sleep?” might be a less profitable question than “What is awake?”
There's also a lot of good coverage of the drug's passage through the FDA, particularly the hearing where the agency and Merck argued about the dose. (The FDA was inclined towards a lower 10-mg tablet, but Merck feared that this wouldn't be enough to be effective in enough patients, and had no desire to launch a drug that would get the reputation of not doing very much).
A few weeks later, the F.D.A. wrote to Merck. The letter encouraged the company to revise its application, making ten milligrams the drug’s starting dose. Merck could also include doses of fifteen and twenty milligrams, for people who tried the starting dose and found it unhelpful. This summer, Rick Derrickson designed a ten-milligram tablet: small, round, and green. Several hundred of these tablets now sit on shelves, in rooms set at various temperatures and humidity levels; the tablets are regularly inspected for signs of disintegration.
The F.D.A.’s decision left Merck facing an unusual challenge. In the Phase II trial, this dose of suvorexant had helped to turn off the orexin system in the brains of insomniacs, and it had extended sleep, but its impact didn’t register with users. It worked, but who would notice? Still, suvorexant had a good story—the brain was being targeted in a genuinely innovative way—and pharmaceutical companies are very skilled at selling stories.
Merck has told investors that it intends to seek approval for the new doses next year. I recently asked John Renger how everyday insomniacs would respond to ten milligrams of suvorexant. He responded, “This is a great question.”
There are, naturally, a few shots at the drug industry throughout the article. But it's not like our industry doesn't deserve a few now and then. Overall, it's a good writeup, I'd say, and gets across the later stages of drug development pretty well. The earlier stages are glossed over a bit, by comparison. If the New Yorker would like for me to tell them about those parts sometime, I'm game.
Category: Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System
December 2, 2013
Academic publishing fraud in China has come up here before, but Science has an in-depth look at the problem. And a big problem it is:
"There are some authors who don't have much use for their papers after they're published, and they can be transferred to you," a sales agent for a company called Wanfang Huizhi told a Science reporter posing as a scientist. Wanfang Huizhi, the agent explained, acts as an intermediary between researchers with forthcoming papers in good journals and scientists needing to snag publications. The company would sell the title of co–first author on the cancer paper for 90,000 yuan ($14,800). Adding two names—co–first author and co–corresponding author—would run $26,300, with a deposit due upon acceptance and the rest on publication. A purported sales document from Wanfang Huizhi obtained by Science touts the convenience of this kind of arrangement: "You only need to pay attention to your academic research. The heavy labor can be left to us. Our service can help you make progress in your academic path!"
For anyone who cares about science and research, this is revolting. If you care a lot more about climbing that slippery ladder up to a lucrative position, though, it might be just the thing, right? There are all sorts of people ready to help you realize your dreams, too:
The options include not just paying for an author's slot on a paper written by other scientists but also self-plagiarizing by translating a paper already published in Chinese and resubmitting it in English; hiring a ghostwriter to compose a paper from faked or independently gathered data; or simply buying a paper from an online catalog of manuscripts—often with a guarantee of publication.
Offering these services are brokers who hawk titles and SCI paper abstracts from their perches in China; individuals such as a Chinese graduate student who keeps a blog listing unpublished papers for sale; fly-by-night operations that advertise online; and established companies like Wanfang Huizhi that also offer an array of above-board services, such as arranging conferences and producing tailor-made coins and commemorative stamps. Agencies boast at conferences that they can write papers for scientists who lack data. They cold-call journal editors. They troll for customers in chat programs. . .
The journal contacted 27 agencies in China, with reporters posing as graduate students or other scientists, and asked about paying to get on a list of authors or paying to have a paper written up from scratch. Twenty-two of them were ready to help with either or both. Many of these were to be placed in Chinese-language journals, but for a higher fee you could get into more international titles as well. Because of Chinese institutional insistence on high-impact-factor journal publications, people who can deliver that kind of publication can charge as much as a young professor's salary. (Since some institutions turn around and pay a bonus for such publications, though, it can still be feasible).
Some agencies claim they not only prepare and submit papers for a client: They furnish the data as well. "IT'S UNBELIEVABLE: YOU CAN PUBLISH SCI PAPERS WITHOUT DOING EXPERIMENTS," boasts a flashing banner on Sciedit's website.
One timesaver: a ready stock of abstracts at hand for clients who need to get published fast. Jiecheng Editing and Translation entices clients on its website with titles of papers that only lack authors. An agency representative told an undercover Science reporter that the company buys data from a national laboratory in Hunan province.
The article goes on to show that there are many Chinese scientists who are trying to do something about all this. I hope that they succeed, but it's going to take a lot of work to realign the incentives. Unless that happens, the Chinese-language scientific literature risks devolving into a bad joke, and papers from Chinese institutions risk having to go through extra levels of scrutiny when submitted abroad.
Category: The Dark Side | The Scientific Literature
Getting the week off to a bad start is this news from Eisai. They're stopping small-molecule work at their site in Andover, and (like everyone else, it seems) chopping med-chem at their UK site as well. Worldwide, it looks like a loss of 130 positions.
Category: Business and Markets
November 29, 2013
I hope my readers who celebrated Thanksgiving yesterday had a good one. Everything went well here, and there are plenty of turkey leftovers today. My wife always looks forward to a sandwich of turkey in a flour tortilla with hoisin sauce and fresh scallions. I can endorse that one, and I'm also a fan of turkey on pumpernickel with mayonnaise and horseradish. But to each their own! It's a big country, and can accommodate turkey quesadillas, turkey with mango pickle and naan, turkey with barbecue sauce, and who knows what else.
Over the next week or two, as I did last year, I'll be posting some science-themed gift ideas along with my regular postings. I should mention, as I do from time to time, that this blog is an Amazon affiliate, so links to Amazon from here will earn a small commission, at no change in the price on the buyer's end. So if you have some big online shopping to do, I encourage you to pick a blog or site that you've enjoyed during the year and use their affiliate links if they have them - everything that's ordered after such a redirect will send some money back to the site's owner. In my own case, I pledge to use a significant part of any proceeds to buy still more books, thereby stuffing my head with even more marginally useful knowledge.
I'll start off with gifts that you might well be ordering for yourself - books on medicinal chemistry and related fields. This is an updated version of the list I posted last year, with some additions.
At various times, I've asked the readership for the best books on the practice of medicinal chemistry and drug discovery. Here are the favorites mentioned by readers over the last few years (nominations for others are welcome):
For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Another recommendation is Textbook of Drug Design and Discovery by Krogsgaard-Larsen et al. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.
Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson, Repic's Principles of Process Research and Chemical Development in the Pharmaceutical Industry, and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson. On an even larger scale, McConville's The Pilot Plant Real Book comes recommended by readers here, too.
Case histories of successful past projects can be found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.
Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery. This is a new edition of the book recommended in this post last year.
Another newer book on a particular area of med-chem is Bioisosteres in Medicinal Chemistry by Brown et al., which also comes recommended by several readers.
For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods, Cannon's Pharmacology for Chemists, and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.
Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. Another recent PK-centric book is Lead Optimization for Medicinal Chemists. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.
In a related field, the standard desk references for toxicology seem to be Casarett & Doull's Toxicology: The Basic Science of Poisons and Hayes' Principles and Methods of Toxicology. Every medicinal chemist will end up learning a good amount of toxicology, too often the hard way.
As mentioned, titles to add to the list are welcome. I'll be doing a post later on less technical general interest science books as well.
Category: Science Gifts
November 27, 2013
Here's a recipe that I'm trying out this year from The Joy of Pickling, an excellent book full of all sorts of pickle recipes. I have a good-sized batch of this going right now, and samples so far confirm that it's good stuff.
1 2 1/2-pound cabbage (1 kilo), shredded
1 tablespoon salt (17 to 18 grams, table or pickling, not kosher, unless you want to adjust the amounts)
1 medium carrot, shredded
1 apple, sliced
1/2 cup cranberries (55g)
1 tablespoon caraway seeds (7g)
Cut the core from the cabbage, save a couple of outer leaves, and shred it. Add the salt to it in a large bowl, mixing it in well and pressing it together. Add the carrot, the apple (cored and sliced into sixteenths, the book says), the cranberries and the caraway seeds, and mix gently. Place this mixture in some sort of deep crock or jar (jars, if need be). Press the mixture in tight and lay some of the reserved cabbage leaves (or a piece thereof) on top. Weight this down with a small plastic bag (one that's OK for food) full of brine (made from 1.5 tablespoons of salt (24g) in one quart (950 mL) water) - this will keep the cabbage under the liquid layer. If your cabbage was fresh, it should make enough liquid to submerge itself. If not, you can check it after sitting overnight and add some brine (1 tablespoon of salt (18g) in one quart (950 mL) water) to just cover the shredded cabbage.
Leave the jar or jars at room temperature. Twice a day, you'll want to stick a wooden spoon handle down in there a few times to vent the carbon dioxide that will develop. If you don't, especially at first, you're likely to have an overflow, so be warned. Four or five days, at a minimum, should do the trick - after that, you can keep it in a cold room or refrigerator. If you ferment it from the start in a cooler room, it'll take longer, but may have even better flavor. According to the Joy of Pickling, the initial burst of gas is from Leuconostoc mesenteroides, which produces good anaerobic conditions for Lactobacillus plantarum, among others, whose acid fermentation products give the sour flavor.
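If you're curious where those salt amounts land, here's a quick check of the percentages by weight (my own arithmetic, not from the book, and the gram figures are the approximate ones given in the recipe):

```python
# Sanity-check the salt concentrations in the recipe above, as percent
# of the total weight of salt plus cabbage (or salt plus water).

def salt_percent(salt_g, other_g):
    total = salt_g + other_g
    return 100.0 * salt_g / total

# Dry-salting the cabbage: ~17.5 g salt on a ~1 kg head
kraut = salt_percent(17.5, 1000)
# Weighting-bag brine: 1.5 tbsp (~24 g) salt in one quart (~950 g) water
strong_brine = salt_percent(24, 950)
# Top-up brine: 1 tbsp (~18 g) salt in one quart water
weak_brine = salt_percent(18, 950)

print(f"cabbage:        {kraut:.1f}% salt")
print(f"weighting brine: {strong_brine:.1f}% salt")
print(f"top-up brine:    {weak_brine:.1f}% salt")
```

The cabbage works out to a bit under 2% salt, which is in the usual range for lactic fermentation - enough to favor the Leuconostoc and Lactobacillus species over spoilage organisms without stalling them.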
If you like sauerkraut, you'll be very much up for this. If you're not a big kraut fan, have no fear - this is a lot milder and more delicate than the store-bought stuff, and tastes something like rye bread with all that caraway in there. Enjoy!
Category: Blog Housekeeping
I wanted to note that I'm home today, and will soon be starting my traditional chocolate pecan pie. If you haven't seen it, that link will lead you to a detailed prep, with both US and metric measurements. It's based on Craig Claiborne's recipe, and he certainly knew what he was talking about when it came to Southern food (and much else besides). I've been making it for twenty years now, and if I didn't, there would be a mutiny around here.
I have a pumpkin pie to make as well, and I'd like to get the base of the gravy going, so it can be turkey-enhanced tomorrow. (As for the turkey, for some years now we've bought a kosher one, so it's already been brined. A 17-pound specimen is waiting for tomorrow's efforts). I hope to also make some green beans with country ham, since that reheats just fine, and will save on stove space tomorrow. For country ham, I can recommend Burger's from the Ozarks, available through that Amazon link. Pan-fried country ham has been my traditional Christmas breakfast for my entire life, and my wife and kids now join in with me on that one, but I break it out for Thanksgiving with the green beans. For me, it's wintertime food - I wouldn't turn it down if someone served it to me in July, but it certainly would be a new experience. I grew up eating a brand called Mar-Tenn from west Tennessee, but I don't even know if they exist any more.
The rest of the Thanksgiving meal will include an Iranian basmati rice (with saffron, slivered almonds, sour dried zereshk berries, pistachios, and bits of orange zest), home-made mashed potatoes, creamed onions with sage, pan-roasted Brussels sprouts, and stuffing (my Iranian mother-in-law's own recipe, with bread cubes, cranberries, celery, onion, and pepperoni - how she thought that one up, I don't know, but it's excellent). And this year I'm trying out some Russian sour cabbage (with apples, cranberries, and caraway seeds), which is fermenting away in the basement right now. I'll post the recipe for that later on in the afternoon, after I've made some culinary headway. Update: forgot the stuffed mushrooms and the roasted acorn squash. It's hard to keep track of it all after a certain point!
+ TrackBacks (0) | Category: Blog Housekeeping
As everyone will have heard, the personal-genomics company 23 and Me was told by the FDA to immediately stop selling their product, a direct-to-consumer DNA sequence readout. Reaction to this has been all over the map. I'll pick a couple of the viewpoints to give you the idea.
From one direction, here's Matthew Herper's article, with the excellent title "23 And Stupid". Here's his intro, which makes his case well:
I’d like to be able to start here by railing against our medical system, which prevents patients from getting data about our own bodies because of a paternalistic idea that people can’t look at blood test results, no less genetic information, without a doctor being involved or the government approving the exact language of the test. I’d like to be able to argue that the Food and Drug Administration is wantonly standing in the way of entrepreneurism and innovation by cracking down on 23andMe, a company that is just trying to give patients the ability to know about their own DNA, to understand their own health risks, and to participate in science.
I wish that was the story I’m about to write, but it’s not, and it all really comes down to one fact in the FDA’s brutally scathing warning letter to 23andMe, the Google-backed personal genetics startup. It’s this quote from the letter by Ileana Elder, in the agency’s diagnostics division: “FDA has not received any communication from 23andMe since May.”
So we can call that one the practical view: "It doesn't matter what you think about 23 and Me's product, and it doesn't matter what you think about the FDA. They're supposed to be working with the FDA, they knew it, but they haven't done squat about it, so what did you expect the agency to do, anyway?". From that, let's go to the idealistic view, from economist Alex Tabarrok at Marginal Revolution, who writes just the sort of article that Herper deliberately passes up the chance to write:
Let me be clear, I am not offended by all regulation of genetic tests. Indeed, genetic tests are already regulated. To be precise, the labs that perform genetic tests are regulated by the Clinical Laboratory Improvement Amendments (CLIA) as overseen by the CMS (here is an excellent primer). The CLIA requires all labs, including the labs used by 23andMe, to be inspected for quality control, record keeping and the qualifications of their personnel. The goal is to ensure that the tests are accurate, reliable, timely, confidential and not risky to patients. I am not offended when the goal of regulation is to help consumers buy the product that they have contracted to buy.
What the FDA wants to do is categorically different. The FDA wants to regulate genetic tests as a high-risk medical device that cannot be sold until and unless the FDA permits it be sold.
Moreover, the FDA wants to judge not the analytic validity of the tests, whether the tests accurately read the genetic code as the firms promise (already regulated under the CLIA) but the clinical validity, whether particular identified alleles are causal for conditions or disease. The latter requirement is the death-knell for the products because of the expense and time it takes to prove specific genes are causal for diseases. Moreover, it means that firms like 23andMe will not be able to tell consumers about their own DNA but instead will only be allowed to offer a peek at the sections of code that the FDA has deemed it ok for consumers to see.
The thing is, I can see merits in both these views. And you know, they're not mutually exclusive, either, not as much as it looks like at first glance. I don't even think that the FDA itself thinks that they're so mutually exclusive, if you read their letter (emphasis added):
The Office of In Vitro Diagnostics and Radiological Health (OIR) has a long history of working with companies to help them come into compliance with the FD&C Act. Since July of 2009, we have been diligently working to help you comply with regulatory requirements regarding safety and effectiveness and obtain marketing authorization for your PGS device. FDA has spent significant time evaluating the intended uses of the PGS to determine whether certain uses might be appropriately classified into class II, thus requiring only 510(k) clearance or de novo classification and not PMA approval, and we have proposed modifications to the device’s labeling that could mitigate risks and render certain intended uses appropriate for de novo classification. Further, we provided ample detailed feedback to 23andMe regarding the types of data it needs to submit for the intended uses of the PGS. As part of our interactions with you, including more than 14 face-to-face and teleconference meetings, hundreds of email exchanges, and dozens of written communications, we provided you with specific feedback on study protocols and clinical and analytical validation requirements, discussed potential classifications and regulatory pathways (including reasonable submission timelines), provided statistical advice, and discussed potential risk mitigation strategies. As discussed above, FDA is concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA’s regulatory requirements is to ensure that the tests work.
As much as I might agree with Alex Tabarrok in principle, I think he's missing a key point here. The FDA is not telling everyone that they don't own their own DNA information, and that they can't see it unless the agency lets them. The agency is saying that 23 and Me can certainly make a business out of selling people their own DNA sequence information, but if they do so by explicitly claiming medical benefits or diagnostic uses, then their business will fall under the FDA's jurisdiction. From their letter, it appears that they have been telling the company this over and over for several years now, during which 23 and Me has, apparently, been dragging their feet and trying to have it both ways. As the FDA letter notes:
For example, your company’s website at www.23andme.com/health (most recently viewed on November 6, 2013) markets the PGS for providing “health reports on 254 diseases and conditions,” including categories such as “carrier status,” “health risks,” and “drug response,” and specifically as a “first step in prevention” that enables users to “take steps toward mitigating serious diseases” such as diabetes, coronary heart disease, and breast cancer.
I'll add a bitter, cynical note: if only 23 and Me had been able to come up with some way to market their DNA test as a nutritional supplement, they'd be in the clear. Maybe some sort of sugar pill that you took before you spit in the little sample container? Then they could say "Not intended to treat, cure, or modify any disease" at the bottom of the page, in six-point microtype, and everything would have been fine, as if by magic. No one would have paid any attention to it, of course, because no one ever pays any attention to that language when they go out and buy all kinds of "supplements", and the FDA would have staggered backwards at the sight of Orrin Hatch's law, like Christopher Lee in a Hammer vampire film being hosed down with a face full of holy water.
Well, that might not have worked perfectly, but it would have worked better than what 23 and Me actually tried. They wouldn't have sold nearly as many DNA tests without talking about preventing disease and making medical decisions in their advertising, true, but those are the breaks. I think that if they'd stuck to some neutral language, rather than presenting Immediate Actionable Medical Decisions, they might well have stayed out of trouble.
Update: via Matt Herper's Twitter feed, here's an interesting take on the whole situation. 23 and Me has been hoping to get some real (and really profitable) insights into population genomics by accumulating such a large sample size. Have they? The way they're acting makes one think that nothing good has popped up yet. . .
+ TrackBacks (0) | Category: Regulatory Affairs
November 26, 2013
One way to look at a drug company's pipeline and portfolio is the "Freshness Index" - how much of its sales are coming from products approved within the past five years. Here's Bernard Munos earlier this year on this topic, where he shows that (too much) revenue lately has been coming from older products. At the time, the figures for the big companies started off with Novartis (19% "fresh" sales), GlaxoSmithKline (12%), J&J (11.8%) and Pfizer (10%).
I bring this up because there's a new look at the freshness index. This one counts only products from 2010 or later, and uses year-to-date sales figures. Under those conditions, it's J&J in the lead (23.4% of sales), then Novartis (17.8%), and Novo Nordisk (13.6%). Now, Novo was not in Munos's list, so I can't say if there's been much of a change there or not, but I find the change in J&J's figures interesting. I don't think that's all due to new approvals - is it older stuff slipping off the list? The new list also has GSK down near the bottom at 2.3% "fresh", which shows you how much the cutoffs matter to these assessments.
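The metric itself is just simple division - what makes the two lists disagree is which products count as "fresh". A toy sketch makes the cutoff sensitivity obvious (the portfolio numbers below are invented, not any company's actual figures):

```python
def freshness_index(products, cutoff_year):
    """Fraction of total sales coming from products approved in
    cutoff_year or later. products is a list of
    (approval_year, sales) tuples; sales units just need to be
    consistent."""
    total = sum(sales for _, sales in products)
    fresh = sum(sales for year, sales in products if year >= cutoff_year)
    return fresh / total

# Hypothetical portfolio: same company, two different cutoffs.
portfolio = [(2005, 60.0), (2009, 25.0), (2011, 15.0)]
print(freshness_index(portfolio, 2008))  # 0.4 with a five-year window
print(freshness_index(portfolio, 2010))  # 0.15 counting only 2010 and later
```

Move the cutoff by two years and this imaginary company's "freshness" drops by almost a factor of three, which is exactly the sort of swing showing up between the two published lists.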
One thing both lists agree on, though, is Eli Lilly. They're at the bottom in both, showing 0.8% of their sales coming from anything approved since 2010. That can't be good, and it isn't. AstraZeneca, Pfizer, Merck, and Sanofi are all in the single digits as well. So's Roche, but their long-running Genentech-driven biotech products make up for that. AZ and Lilly don't exactly have that cushion.
+ TrackBacks (0) | Category: Business and Markets
Here's an article from Science on the problems with mouse models of disease.
For years, researchers, pharmaceutical companies, drug regulators, and even the general public have lamented how rarely therapies that cure animals do much of anything for humans. Much attention has focused on whether mice with different diseases accurately reflect what happens in sick people. But Dirnagl and some others suggest there's another equally acute problem. Many animal studies are poorly done, they say, and if conducted with greater rigor they'd be a much more reliable predictor of human biology.
The problem is that the rigor of animal studies varies widely. There are, of course, plenty of well-thought-out, well-controlled ones. But there are also a lot of studies with sample sizes that are far too small, that are poorly randomized, unblinded, etc. As the article mentions (just to give one example), sticking your gloved hand into the cage and pulling out the first mouse you can grab is not an appropriate randomization technique. They aren't lottery balls - although some of the badly run studies might as well have used those instead.
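The fix for grab-the-nearest-mouse is, in principle, trivial: assign animals to groups by an actual random draw, so cage position, docility, and grab order play no part. A minimal sketch (hypothetical animal IDs, standard library only):

```python
import random

def randomize(animal_ids, n_groups, seed=None):
    """Shuffle the animal IDs and deal them round-robin into
    n_groups treatment groups. With a shuffle deciding the
    assignment, nothing about which animal was easiest to
    catch can bias group membership."""
    rng = random.Random(seed)
    shuffled = list(animal_ids)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

mice = [f"mouse-{i:02d}" for i in range(12)]
groups = randomize(mice, 3, seed=42)
for g in groups:
    print(g)  # three groups of four, in shuffled order
```

The seed argument is there so an assignment can be recorded and reproduced - which is also the sort of detail the proposed reporting standards would ask for.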
After lots of agitating and conversation within the National Institutes of Health (NIH), in the summer of 2012 [Shai] Silberberg and some allies went outside it, convening a workshop in downtown Washington, D.C. Among the attendees were journal editors, whom he considers critical to raising standards of animal research. "Initially there was a lot of finger-pointing," he says. "The editors are responsible, the reviewers are responsible, funding agencies are responsible. At the end of the day we said, 'Look, it's everyone's responsibility, can we agree on some core set of issues that need to be reported' " in animal research?
In the months since then, there's been measurable progress. The scrutiny of animal studies is one piece of an NIH effort to improve openness and reproducibility in all the science it funds. Several institutes are beginning to pilot new approaches to grant review. For an application based on animal results, this might mean requiring that the previous work describe whether blinding, randomization, and calculations about sample size were considered to minimize the risk of bias. . .
Not everyone thinks that these new rules are going to work, though, or are even the right way to approach the problem:
Some in the field consider such requirements uncalled for. "I am not pessimistic enough to believe that the entire scientific community is obfuscating results, or that there's a systematic bias," says Joseph Bass, who studies mouse models of obesity and diabetes at Northwestern University in Chicago, Illinois. Although Bass agrees that mouse studies often aren't reproducible—a problem he takes seriously—he believes that's not primarily because of statistics. Rather, he suggests the reasons vary by field, even by experiment. For example, results in Bass's area, metabolism, can be affected by temperature, to which animals are acutely sensitive. They can also be skewed if a genetic manipulation causes a side effect late in life, and researchers try to use older mice to replicate an effect observed in young animals. Applying blanket requirements across all of animal research, he argues, isn't realistic.
I think, though, that there must be some minimum requirements that could be usefully set, even with every field having its own peculiarities. After all, the same variables that Bass mentions above - which are most certainly real ones - could affect studies in completely different fields. This, of course, is one of the biggest reasons that drug companies restrict access to their animal facilities. There's always a separate system to open those doors, and if you don't have the card to do it, you're not supposed to be in there. Pace the animal rights activists, that's not because it's so terrible in there that the rest of us wouldn't be able to take it. It's because they don't want anyone coming in there and turning on lights, slamming doors, sneezing, or doing any of four dozen less obvious things that could screw up the data. This stuff is expensive, and it can be ruined quite easily. It's like waiting for a four-week-long soufflé to rise.
That brings up another question - how do the animal studies done in industry compare to those done in academia? The Science article mentions some work done recently by Lisa Bero of UCSF. She was looking at animal studies on the effects of statins, and found, actually, that industry-sponsored research was less likely to find that the drug under investigation was beneficial. The explanation she advanced is a perfectly good one: if your animal study is going to lead you to spend the big money in the clinic, you want to be quite sure that you can believe the data. That's not to say that there aren't animal studies in the drug industry that could be (or could have been) run better. It's just that there are, perhaps, more incentives to make sure that the answer is right, rather than just being interesting and publishable.
Doesn't the same reasoning apply to human studies? It certainly should. The main complicating factor I can think of is that once a company, particularly a smaller one, has made the big leap into human clinical trials, it also has an incentive to find something that's good enough to keep going with, and/or good enough to attract more investment. So perverse incentives are, I'd guess, more of a problem once you get to human trials, because it's such a make-or-break situation. People are probably more willing to get the bad news from an animal study and just groan and say "Oh well, let's try something else". Saying that after an unsuccessful Phase II trial is something else again, and takes a bit more sang-froid than most of us have available. (And, in fact, Bero's previous work on human trials of statins seems to show various forms of bias at work, although publication bias is surely not the least of them).
+ TrackBacks (0) | Category: Animal Testing
November 25, 2013
Michael Shultz of Novartis is back with more thoughts on how we assign numbers to drug candidates. Previously, he's written about the mathematical wrongness of many of the favorite metrics (such as ligand efficiency), in a paper that stirred up plenty of comment.
His new piece in ACS Medicinal Chemistry Letters is well worth a look, although I confess that (for me) it seemed to end just when it was getting started. But that's the limitation of a Viewpoint article for a subject with this much detail in it.
Shultz makes some very good points by referring to Daniel Kahneman's Thinking, Fast and Slow, a book that's come up several times around here as well (in both posts and comments). The key concept here is "attribute substitution", the mental process by which we take a complex situation that we find mentally unworkable and substitute some other scheme that we can deal with. We then convince ourselves, often quickly, silently, and without realizing that we're doing it, that we now have a handle on the situation, just because we now have something in our heads that is more understandable. That "Ah, now I get it" feeling is often a sign that you're making headway on some tough subject, but you can also get it from a substitute that doesn't actually help you understand the real problem at all.
And I'd say that this is the take-home for this whole Viewpoint article, that we medicinal chemists are fooling ourselves when we use ligand efficiency and similar metrics to try to understand what's going on with our drug candidates. Shultz goes on to discuss what he calls "Lipinski's Anchor". Anchoring is another concept out of Thinking, Fast and Slow, and here's the application:
The authors of the ‘rules of 5’ were keenly aware of their target audience (medicinal chemists) and “deliberately excluded equations and regression coefficients...at the expense of a loss of detail.” One of the greatest misinterpretations of this paper was that these alerts were for drug-likeness. The authors examined the World Drug Index (WDI) and applied several filters to identify 2245 drugs that had at least entered phase II clinical development. Applying a roughly 90% cutoff for property distribution, the authors identified four parameters (MW, logP, hydrogen bond donors, and hydrogen bond acceptors) that were hypothesized to influence solubility and permeability based on their difference from the remainder of the WDI. When judging probability, people rely on representativeness heuristics (a description that sounds highly plausible), while base-rate frequency is often ignored. When proposing oral drug-like properties, the Gaussian distribution of properties was believed, de facto, to represent the ability to achieve oral bioavailability. An anchoring effect is when a number is considered before estimating an unknown value and the original number significantly influences future estimates. When a simple, specific, and plausible MW of 500 was given as cutoff for oral drugs, this became the mother of all medicinal chemistry anchors.
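For reference, those four parameters and their familiar cutoffs make a very short filter indeed - which is part of why the rules anchored so hard. A sketch with hand-entered property values (no cheminformatics toolkit here, so the example numbers are illustrative, not computed from any real structure):

```python
def rule_of_five_violations(mw, logp, h_donors, h_acceptors):
    """Count how many of Lipinski's four cutoffs a compound
    exceeds: MW > 500, logP > 5, H-bond donors > 5, H-bond
    acceptors > 10. One violation is traditionally tolerated,
    and - per the original paper - the alerts flag likely
    absorption/permeability trouble, not "drug-likeness"."""
    return sum([mw > 500, logp > 5, h_donors > 5, h_acceptors > 10])

# Illustrative property values only:
print(rule_of_five_violations(mw=180.2, logp=1.2, h_donors=1, h_acceptors=4))   # 0
print(rule_of_five_violations(mw=734.0, logp=6.1, h_donors=3, h_acceptors=12))  # 3
```

That the whole scheme fits in one line of arithmetic is exactly Shultz's point about attribute substitution: it's understandable, so it feels like understanding.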
But how valid are molecular weight cutoffs, anyway? That's a topic that's come up around here a few times, too, as well it should. Comparisons of the properties of orally available drugs across their various stages of development seem to suggest that such measurements converge on what we feel are the "right" values, but as Shultz points out, there could be other reasons for the data to look that way. And he makes this recommendation: "Since the average MW of approved oral drugs has been increasing while the failure rate due to PK/bioavailability has been decreasing, the hypothesis linking size and bioavailability should be reconsidered."
I particularly like another line, which could probably serve as the take-home message for the whole piece: "A clear understanding of probabilities in drug discovery is impossible due to the large number of known and unknown variables." I agree. And I think that's the root of the problem, because a lot of people are very, very uncomfortable with that kind of talk. The more business-school training they have, the less they like the sound of it. The feeling is that if we'd just use modern management techniques, it wouldn't have to be this way. Closer to the science end of things, the feeling is that if we'd just apply the right metrics to our work, it wouldn't have to be that way, either. Are both of these mindsets just examples of attribute substitution at work?
In the past, I've said many times that if I had to work from a million compounds that were within rule-of-five cutoffs versus a million that weren't, I'd go for the former every time. And I'm still not ready to ditch that bias, but I'm certainly ready to start running up the Jolly Roger about things like molecular weight. I still think that the clinical failure rate is higher for significantly greasier compounds (both because of PK issues and because of unexpected tox). But molecular weight might not be much of a proxy for the things we care about.
This post is long enough already, so I'll address Shultz's latest thoughts on ligand efficiency in another entry. For those who want more 50,000-foot viewpoints on these issues, though, these older posts will have plenty.
+ TrackBacks (0) | Category: Drug Development | Drug Industry History
November 22, 2013
A look back at the way it used to be, courtesy of ChemTips. What did you do without NMR, without LC-mass spec? You tried all kinds of tricks to get solids that you could recrystallize, and liquids that you could distill. I missed out on that era of chemistry, and most readers here can say the same. But it's a good mental exercise to picture what things used to be like.
+ TrackBacks (0) | Category: Chemical News
Here's a podcast interview I did recently for "Science For the People" (formerly known as "Skeptically Speaking"), where they quizzed me about some of the "Things I Won't Work With" compounds. The whole show is worth listening to (there's Scicurious and ZeFrank in there), but I come in at about the 38-minute mark.
+ TrackBacks (0) | Category: Blog Housekeeping
You've seen those "Call for papers" notices, from journals or conferences? Over at Synthetic Remarks, there's a call for whistleblowers. Reacting to the recent reports of scientific fraud, Fredrik von Kieseritzky is asking those who want to get the details of such things out safely to contact him. Swedish law is very protective of sources, and he's basically making that jurisdiction available to anyone it might be in a position to help. He also has some sound advice on how to communicate such things (Tor, PGP, TrueCrypt) - the sorts of tools that, apparently, the modern world is making sure everyone stays current with, whether they felt like doing so or not.
This is a sincere offer, and may well draw some sincere responses. We'll see. . .
+ TrackBacks (0) | Category: The Dark Side
November 21, 2013
Here's a very surprising idea that looks like it can be put to an experimental test. Mao-Sheng Miao (of UCSB and the Beijing Computational Sciences Research Center) has published a paper suggesting that under high-pressure conditions, some elements could show chemical bonding behavior involving their inner-shell electrons. Specific predictions include high-pressure forms of cesium fluoride - not just your plain old CsF, but CsF3 and CsF5, and man, do I feel odd writing down those formulae.
These have completely different geometries, and should be readily identifiable should they actually form. I'm thinking of this as cesium giving up its lone valence electron, and then you're left with a xenon-like arrangement. And xenon, as Neil Bartlett showed the world in 1962, can certainly go on to form fluorides. Throw in some pressure, and (perhaps) the deed is done in cesium's case. So I very much look forward to an experimental test of this idea, which I would imagine we'll see pretty shortly.
+ TrackBacks (0) | Category: Chemical News
I wanted to mention to readers here that I've agreed to write a book (for a general audience) on chemistry for Sterling Publishers (the publishing arm of Barnes and Noble). They've been putting out a series of books (Sterling Milestones) on various scientific topics, looking at 250 key concepts or historical events. There's a short essay on each of these, and an illustration on the facing page. Clifford Pickover did The Math Book, The Physics Book, and The Medical Book for them, and recently they've published The Drug Book, The Space Book, and The Psychology Book as well. So I'm doing The Chemistry Book, which occupies me on my train rides home after work and after dinner - my wife and kids have been involuntarily roped in as the test audience for the entries.
The book itself won't be out for a while - I'm delivering the manuscript next spring, and there will surely be a lot of editorial work after that. I have over 200 of the short chapters outlined so far, but I'm leaving some room for more topics as they occur to me (and as the chapters I'm writing suggest - sometimes I find that I have to include another topic to make the one I'm working on make sense to the eventual readers).
I don't want to give away the complete list of chapters just yet, not least because it's still changing around, but I would like to solicit nominations for events and ideas that anyone thinks I should be sure to cover. The book spans the whole historical record, up to the present day, in all fields of chemistry, so in one sense the challenge is narrowing it down to just 250 short essays. The other challenge is actually writing 250 short essays, of course. I'm doing OK against my list so far, but there are some topics that are difficult to do justice to in 350 words, as will be easily appreciated by the chemists around here.
So if anyone has some topics, obvious or nonobvious, that they think a book like this should be sure to include, please mention them in the comments. I'm sure some of them will already be on the list, but since I have room to add more, I certainly don't want to miss too many good opportunities. Thanks very much!
And yes, the "Things I Won't Work With" manuscript is being worked on as well. "The Chemistry Book" is giving me some practice at integrating a longer manuscript, and I've been adding some new material along the way. The trickier part of that one has been getting rid of some repetition that you notice when the original blog posts are stacked up together. But it's definitely in the hopper.
Update: a lot of good ideas in the comments! Many of them were already on my list, but I've already seen some that I wouldn't have thought of, and some others that I really should have but overlooked. Much appreciated! Anyone who hasn't added something and still wants to, though, feel free - I'll be checking this post pretty frequently.
+ TrackBacks (0) | Category: Book Recommendations
November 20, 2013
Double Nobelist Frederick Sanger has died at 95. He is, of course, the pioneer in both protein and DNA sequencing, and he lived to see these techniques, revised and optimized beyond anyone's imagining, become foundations of modern biology.
When he and his team determined the amino acid sequence of insulin in the 1950s, no one was even sure if proteins had definite sequences or not. That work, though, established the concept for sure, and started off the era of modern protein structural studies, whose importance to biology, medicine, and biochemistry is completely impossible to overstate. The amount of work needed to sequence a protein like insulin was ferocious - this feat was just barely possible given the technology of the day, and that's even with Sanger's own inventions and insights (such as Sanger's reagent) along the way. He received a well-deserved Nobel in 1958 for having accomplished it.
In the 1970s, he made fundamental advances in sequencing DNA, such as the dideoxy chain-termination method, again with effects which really can't be overstated. This led to a share of a second chemistry Nobel in 1980 - he's still the only double laureate in chemistry, and every bit of that recognition was deserved.
+ TrackBacks (0) | Category: Biological News | Chemical News