Corante

About this Author
Derek Lowe
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
MedChem Buzz
Kilomentor
On Pharma
A New Merck, Reviewed
Liberal Arts Chemistry
One in Ten Thousand
Electron Pusher
Periodic Tabloid
All Things Metathesis
C&E News Blog
Propter Doc
Chemiotics II
The Chemical Notebook
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
Chirality
BBSRC/Douglas Kell
ChemBark
Drug Discovery Opinion
Realizations in Biostatistics
Chemjobber
Pharmalot
WSJ Health Blog
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Useful Chemistry
Chiral Jones
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Totally Synthetic
Fragment Literature
The F- Blog
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Carbon-Based Curiosities
Experimental Error
Business|Bytes|Genes|Molecules
Eye on FDA
Sigma-Aldrich ChemBlogs
Chemical Forums
Depth-First
Symyx Blog
P212121
ChemCafe
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
A Scientist's Life
Speculist
Cosmic Variance
The Capsule
Zeroth Order Approximation
Biology News Net


Medical Blogs
Med Tech Sentinel
DB's Medical Rants
Science-Based Medicine
GruntDoc
The Health Care Blog
Respectful Insolence
Black Triangle
Diabetes Mine


Economics and Business
Marginal Revolution
Arnold Kling
The Volokh Conspiracy
Knowledge Problem
The Stalwart


Politics / Current Events
Virginia Postrel
Tinkerty Tonk
Instapundit
Megan McArdle
Mickey Kaus
Colby Cosh
Alien Corn
No Watermelons


Belles Lettres
Two Blowhards
Critical Mass
Arts and Letters Daily
God of the Machine
Armavirumque
About Last Night

In the Pipeline

December 12, 2012

Pfizer's Gold Dust Makes it to the WSJ

Email This Entry

Posted by Derek

I wanted to congratulate the commenters around here - your knowledge (and your low-pH wit) has propelled the Pfizer gold dust speculation here to Peter Loftus' Corporate Intelligence blog at the Wall Street Journal. Don't expect any response from Pfizer, though. . .

Comments (2) + TrackBacks (0) | Category: Press Coverage

Sue the Nobel Committee. Yeah, That'll Work.

Posted by Derek

Rongxiang Xu is upset with this year's Nobel Prize award for stem cell research. He believes that work he did is so closely related to the subject of the prize that. . .he wants his name on it? No, apparently not. That he wants some of the prize money? Nope, not that either. That he thinks the prize was wrongly awarded? No, he's not claiming that.

What he's claiming is that the Nobel Committee has defamed his reputation as a stem cell pioneer by leaving him off, and he wants damages. Now, this is a new one, as far as I know. The closest example comes from 2003, when there was an ugly controversy over the award for NMR imaging (here's a post from the early days of this blog about it). Dr. Raymond Damadian took out strongly worded (read "hopping mad") advertisements in major newspapers claiming that the Nobel Committee had gotten the award wrong, and that he should have been on it. In vain. The Nobel Committee(s) have never backed down in such a case - although there have been some where you could make a pretty good argument - and they never will, as far as I can see.

Xu, who works in Los Angeles, is founder and chairman of the Chinese regenerative medicine company MEBO International Group. The company sells a proprietary moist-exposed burn ointment (MEBO) that induces "physiological repair and regeneration of extensively wounded skin," according to the company's website. Application of the wound ointment, along with other treatments, reportedly induces embryonic epidermal stem cells to grow in adult human skin cells. . .

. . .Xu's team allegedly awakened intact mature somatic cells to turn to pluripotent stem cells without engineering in 2000. Therefore, Xu claims, the Nobel statement undermines his accomplishments, defaming his reputation.

Now, I realize that I'm helping, in my small way, to give this guy publicity, which is one of the things he most wants out of this effort. But let me make myself clear - I'm giving him publicity in order to roll my eyes at him. I look forward to following Xu's progress through the legal system, and I'll bet his legal team looks forward to it as well, as long as things are kept on a steady payment basis.

Comments (15) + TrackBacks (0) | Category: Biological News

Eli Lilly's Brave Alzheimer's Talk

Posted by Derek

I'm a bit baffled by Eli Lilly's strategy on Alzheimer's. Not the scientific side of it - they're going strongly after the amyloid hypothesis, with secretase inhibitors and antibody therapies, and if I were committed to the amyloid hypothesis, that's probably what I'd be doing, too. It is, after all, the strongest idea out there for the underlying mechanism of the disease. (But is it strong enough? Whether or not amyloid is the way to go is the multibillion dollar question that can really only be answered by spending the big money in Phase III trials against it, unfortunately).

No, what puzzles me is the company's publicity effort. As detailed here and here, the company recently made too much (it seemed to me and many others) of the results for solanezumab, their leading antibody therapy. Less hopeful eyes could look at the numbers and conclude that it did not work, but Lilly kept on insisting otherwise.

And now we have things like this:

"We are on the cusp here of writing medical history again as a company, this time in Alzheimer's disease," Jan Lundberg, Lilly's research chief, said in an interview.

Just as the Indianapolis-based company made history in the 1920s by producing the first insulin when type 1 diabetes was a virtual death sentence, Lundberg said he is optimistic that the drugs Lilly is currently testing could significantly slow the ultimately fatal memory-robbing disease.

"It is no longer a question of 'if' we will get a successful medicine for this devastating disease on the market, but when," said Lundberg, 59.

Ohhh-kay. The problems here are numerous. For one thing, as Lundberg (an intelligent man) well knows, insulin-for-diabetes is a much straighter shot than anything we know of for Alzheimer's. It was clear, when Lilly got their insulin business underway, that the most devastating symptoms of type I diabetes were caused by lack of insulin production in the body, and that providing that insulin was the obvious remedy. Even if it did nothing for the underlying cause of the disease (and it doesn't), it was a huge step forward. As for Alzheimer's, I understand that what Lundberg and Lilly are trying to get across here is the idea of a "successful medicine", rather than a "cure". Something that just slows Alzheimer's down noticeably would indeed be a successful medicine.

But "when, not if"? With what Lilly has in the clinic? After raising hopes by insisting that the Phase III results for solanezumab were positive, the company now says that. . .well, no, it's not going to the FDA for approval. It will, instead, conduct a third Phase III trial. This decision came after consulting with regulators in the US and Europe, who no doubt told them to stop living in a fantasy world. So, sometime next year, Lilly will start enrolling for another multiyear shot at achieving some reproducible hint of efficacy. Given the way solanezumab has performed so far, that's about the best that could be hoped for, that it works a bit in some people, sometimes, for a while, as far as can be told in a large statistical sample. Which sets up this situation, I fear.

And this is "on the cusp. . .of writing medical history"? Look, I would very much like for Lilly, for anyone, to write some medical history against Alzheimer's. But saying it will not make it so.

Comments (17) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials

December 11, 2012

Natural Products Continue to Weird Me Out

Posted by Derek

[Structure of Ivorenolide A]
Here's a funny-looking compound for you: Ivorenolide A, isolated from mahogany tree bark. It has an 18-membered ring with conjugated acetylenes in it, which makes the 3-D structure quite weird; it's nearly flat. And it has biological activity, too (immunosuppression, as measured by T-cell and B-cell proliferation assays in vitro). Got anything that looks like this in your compound libraries? Me neither.

Comments (14) + TrackBacks (0) | Category: Chemical News

Free To Promote Off-Label? Not So Fast. . .

Posted by Derek

Steve Usdin at BioCentury has a very interesting article (free access) following up on that surprise decision that the FDA's restrictions on off-label promotion are a violation of the First Amendment:

But companies and individuals who take the decision as a signal that the rules of the road have changed and they are now free to promote off-label indications put themselves in great legal and economic peril, attorneys who helped persuade the court to overturn Caronia's conviction told BioCentury.

At the same time, the decision by one of the country's most influential and respected courts to overturn a criminal conviction on First Amendment grounds is persuasive evidence that, in the long term, FDA will have to change some of the assumptions underpinning its regulation of medical products.

FDA, which now has lost a string of First Amendment cases, cannot forever hold on to the notion that it is empowered to prohibit drug companies and their employees from saying things that anyone else is free to say. Sooner or later, according to legal experts, the agency will have to reconcile itself with the idea that industry has the right to truthful, non-misleading speech.

Some of the people the article quotes are expecting the same thing I am - a further appeal to the Supreme Court - but no matter what, it's going to be quite a while before all the debris stops landing. Any company that tries to be the first to take advantage of what might be a new-found freedom could find itself right back in court, becoming a test case for what this ruling really means. Anyone feel like being a pioneer?

Comments (2) + TrackBacks (0) | Category: Regulatory Affairs

Did Kaggle Predict Drug Candidate Activities? Or Not?

Posted by Derek

I noticed this piece on Slate (originally published in New Scientist) about Kaggle, a company that's working on data-prediction algorithms. Actually, it might be more accurate to say that they're asking other people to work on data-prediction algorithms, since they structure their tasks as a series of open challenges, inviting all comers to submit their best shots via whatever computational technique they think appropriate.

PA: How exactly do these competitions work?
JH: They rely on techniques like data mining and machine learning to predict future trends from current data. Companies, governments, and researchers present data sets and problems, and offer prize money for the best solutions. Anyone can enter: We have nearly 64,000 registered users. We've discovered that creative data scientists can solve problems in every field better than experts in those fields can.

PA: These competitions deal with very specialized subjects. Do experts enter?
JH: Oh yes. Every time a new competition comes out, the experts say: "We've built a whole industry around this. We know the answers." And after a couple of weeks, they get blown out of the water.

I have a real approach-avoidance conflict with this sort of thing. I tend to root for outsiders and underdogs, but naturally enough, when they're coming to blow up what I feel is my own field of expertise, that's a different story, right? And that's just what this looks like: the Merck Molecular Activity Challenge, which took place earlier this fall. Merck seems to have offered up a list of compounds of known activity in a given assay, and asked people to see if they could recapitulate the data through simulation.

Looking at the data that were made available, I see that there's a training set and a test set. They're furnished as a long run of molecular descriptors, but the descriptors themselves are opaque, no doubt deliberately (Merck was not interested in causing themselves any future IP problems with this exercise). The winning team was a group of machine-learning specialists from the University of Toronto and the University of Washington. If you'd like to know a bit more about how they did it, here you go. No doubt some of you will be able to make more of their description than I did.

But I would be very interested in hearing some more details on the other end of things. How did the folks at Merck feel about the results, with the doors closed and the speaker phone turned off? Was it better or worse than what they could have come up with themselves? Are they interested enough in the winning techniques that they've approached the high-ranking groups with offers to work on virtual screening techniques? Because that's what this is all about: running a (comparatively small) test set of real molecules past a target, and then switching to simulations and screening as much of small molecule chemical space as you can computationally stand. Virtual screening is always promising, always cost-attractive, and sometimes quite useful. But you never quite know when that utility is going to manifest itself, and when it's going to be another goose hunt. It's a longstanding goal of computational drug design, for good reason.

So, how good was this one? That also depends on the data set that was used, of course. All of these algorithm-hunting methods can face a crucial dependence on the training sets used, and their relations to the real data. Never was "Garbage In, Garbage Out" more appropriate. If you feed in numbers that are intrinsically too well-behaved, you can emerge with a set of rules that look rock-solid, but will take you completely off into the weeds when faced with a more real-world situation. And if you go to the other extreme, starting with wooly multi-binding-mode SAR with a lot of outliers and singletons in it, you can end up fitting equations to noise and fantasies. That does no one any good, either.

Back last year, I talked about the types of journal article titles that make me keep on scrolling past them, and invited more. One of the comments suggested "New and Original strategies for Predictive Chemistry: Why use knowledge when fifty cross-correlated molecular descriptors and a consensus of over-fit models will tell you the same thing?". What I'd like to know is, was this the right title for this work, or not?

Comments (22) + TrackBacks (0) | Category: In Silico

December 10, 2012

Why Did Pfizer Have All That Gold Dust, Anyway?

Posted by Derek

You've probably seen the story that a substantial quantity (roughly fifty pounds!) of gold dust seems to have gone missing from Pfizer's labs in St. Louis. No report I've seen has any details, though, on just what Pfizer was doing with that much gold dust - the company isn't saying. I can tell you that I've never found a laboratory use for it myself, dang it all.

So let's speculate! Why would a drug company need gold dust on that scale? Buying it in that form makes you think that a large surface area might have been important, unless there was some gold refinery running Double Coupon Wednesday on the stuff. Making a proprietary catalyst? Starting material for functionalized gold nanoparticles? Solid support(s) for some biophysical assay? Classy replacement for Celite for those difficult filtrations? Your ideas are welcome in the comments. . .

Update: out of many good comments, my favorite so far is: "Knowing Pfizer, I'm guessing they were planning on turning it into lead."

Comments (57) + TrackBacks (0) | Category: Chemical News

More on Penn's T-Cell Therapy

Posted by Derek

There's more news on the T-cell therapy work that I wrote about here and here. The New York Times has an update, and the news continues to be encouraging. So far about a dozen leukemia patients have been treated, and while not everyone has responded, there have been several dramatic remissions. Considering that every candidate for treatment so far has been at the edge of the grave (advanced resistant disease, multiple chemotherapy failures), there's definitely something here.

This will have to be done patient-by-patient. But leukemia varies patient by patient, too, and effective therapies are probably going to have to get this granular (or more). So be it. The challenges now are to find out how to make the success rates even higher, and how to deliver this sort of treatment to larger numbers of people. Challenge accepted, as they say. . .

Comments (6) + TrackBacks (0) | Category: Cancer

December 7, 2012

Whitesides on Discovery and Development

Posted by Derek

George Whitesides of Harvard has a good editorial in the journal Lab on a Chip. He's talking about the development of microassays, but goes on to generalize about the new technologies - how they're found, and how they're taken up (or not) by a wider audience (emphasis mine below):

Lab-on-a-chip (LoC) devices were originally conceived to be useful–that is, to solve problems. For problems in analysis or synthesis (or for other applications, such as growing cells or little animals) they would be tiny – the “microcircuits of the fluidic world.” They would manipulate small volumes of scarce samples, with low requirements for expensive space, reagents and waste. They would save cost and time. They would allow parallel operation. Sensible people would flock to use such devices.

Sensible and imaginative scientists have, in fact, flocked to develop such devices, or what were imagined to be such devices, but users have not yet flocked to solve problems with them. “Build it, and they will come” has not yet worked as a strategy in LoC technology, as it has, say, with microprocessors, organic polymers and gene sequencers. Why not? One answer might seem circular, but probably is not. It is that the devices that have been developed have been elegantly imagined, immensely stimulating in their requirements for new methods of fabrication, and remarkable in their demonstrations of microtechnology and fluid physics, but they have not solved problems that are otherwise insoluble. Although they may have helped the academic scientist to produce papers, they have not yet changed the world of those with practical problems in microscale analysis or manipulation.

Where is the disconnect? One underlying problem has been remarked upon by many people interested in new technology. Users of technology are fundamentally not interested in technology—they are interested in solving their own problems. They want technology to be simple and cheap and invisible. Developers of technology, especially in universities, are often fundamentally not interested in solving real problems—they are interested in the endlessly engaging activity of building and exercising new widgets. They want technology to be technically very cool. “Simple/cheap/invisible” and “technically cool” are not exclusive categories, but they are certainly not synonymous.

That is a constant and widespread phenomenon. There are people who want to be able to do things with stuff, and people who want stuff to do things for them, and the overlap between those two is not always apparent. What happens over time, though, in the best cases, is that the tinkerers come up with things that can be used by a wider audience to solve their own problems. Look no further than the personal computer industry for one of the biggest examples ever. If you didn't live through it, you might not realize how things went from "weird hobbyist thingies" to "neat gizmos if you have the money" to "essential parts of everyday life". Here's Whitesides again:

Here are three useful, homely, rules of thumb to remember in developing products.

• The ratio of money spent to invent something, to make the invention into a prototype product, to develop the prototype to the point where it can be manufactured, and to manufacture and sell it at a large scale is, very qualitatively, 1:10:100:1000. We university folks—the inventors at the beginning of the path leading to products—are cheap dates.

• You don't really know you have solved the problem for someone until they like your solution so much they're willing to pay you to use it. Writing a check is a very meaningful human interaction.

• If the science of something is still interesting, the “something” is probably not ready to be a product.

His second rule reminds me of Stephen King's statement on whether someone has any writing talent or not: "If you wrote something for which someone sent you a check, if you cashed the check and it didn't bounce, and if you then paid the light bill with the money, I consider you talented". It's also the measure of success in the drug industry - we are, after all, trying to make things that are useful enough that people will pay us money for them. If we don't come up with enough of those things, or if they don't bring in enough money to cover what it took to find them, then we are in trouble indeed.

More comments on the Whitesides piece here. For scientists (like me, and many readers of the blog), these points are all worth keeping in mind. Some of our biggest successes are things where our contributions are invisible to the end users. . .

Comments (23) + TrackBacks (0) | Category: Business and Markets | Who Discovers and Why

The Worst Biotech CEO?

Posted by Derek

Adam Feuerstein at TheStreet.com has his yearly readers' pick for "Worst Biotech CEO". This year's winner is Jim Bianco of Cell Therapeutics, and there seems to be a good case:

Bianco, a longtime worst biotech CEO nominee, broke through this year and finally shoved his way into the loser's circle by managing to engineer a 77% drop in his company's price despite finally winning European approval for its lymphoma drug.

In many ways, TheStreet's biotechnology readers are (dis)honoring Bianco for a lifetime of investor bamboozlement and self-enrichment. The numbers that define Bianco's career as chief executive of Cell Therapeutics are stunning: Total losses of more than $1.7 billion, a 99.99999999% drop in the value of company shares and total compensation for him and his hand-picked team of executive cronies in the tens of millions of dollars.

Other than that, things have been going fine. Check the post for more, and to find out who the runners-up were. You can be sure that they're not thrilled to be on the list, either. . .

Comments (3) + TrackBacks (0) | Category: Business and Markets

Pharmaceutical Shortages in Greece

Posted by Derek

If you'd like to see how thoroughly a drug market can be screwed up, have a look at Greece. They're leading the way here as well:

Ten years after entering the eurozone, Greece is faced with the herculean challenge of persuading pharmaceutical companies to strike a bargain and lower the cost of the medicines they sell in the country. At present, there are fears of drug shortages in certain hospitals as a result of unpaid bills. . .During the last two decades Greece became a paradise for branded-drug producers, with generic medicines constituting only 12% of the drugs consumed in the country. Between 1997 and 2007, the amount of health spending per Greek citizen grew annually by 6.6%, bringing the country to fourth place worldwide, after South Korea, Turkey and Ireland, in terms of this growth.

The crisis comes, in part, as a result of the Greek National Health System racking up debts by treating pensioners and poorer locals with expensive branded drugs instead of generics. The government paid the pharmaceuticals mostly with state bonds that lost substantial value in the fiscal crisis, and, in response, they started turning off the faucet. . .

But there's another factor at work, too:

For many months, pharmacies have been reporting shortages of medicines as some distributors have reexported comparatively cheap drugs from Greece over to Germany and other European markets, achieving monetary gains of as much as 600%.

Yep, Greece has simultaneously managed to pay too much for pharmaceuticals and provide a lucrative opportunity to export cheap ones. If economics worked like electrical engineering, there would be huge sparks jumping across these gaps and things would be shorting out all over the place. Actually, that's pretty much what's happening as it is.

Comments (7) + TrackBacks (0) | Category: Drug Prices

December 6, 2012

Science Gifts: Microscopes

Posted by Derek

Well, in that post on telescopes I put up the other day, there were plenty of manufacturers, web sites, and commercial sources that I could recommend. Microscopes, though, are another matter. There's no equivalent to the amateur telescope making/modifying community. One reason for that is that we're talking about lenses for magnification, rather than big mirrors for light-gathering, and mirrors are a lot easier to make (and test) than lenses, particularly combinations of lenses. Microscopes can also have more mechanical parts than telescopes do, and these parts are less modular, which can make the used equipment market rather tricky. The new equipment market tends to divide into "Wonderful, really expensive equipment for research" and "Cheap crap". (More thoughts on the similarities and differences between the amateur astronomers and microscopists here and here).

But not always. Here's a good site with a lot of buying advice, and here are more good sets of recommendations. You'll have heard of the brands of the most common laboratory microscopes (Nikon, Olympus, Leica, Zeiss), and there are a number of lesser-known brands, which I would assume all use Chinese optics (Omano, Motic, Accuscope, Labomed). The advice, as with telescopes, is to Avoid Department Store Models, but beyond that, I'm not sure where to send people. Reputable dealers seem to include Lab Essentials and Microscope Depot, but be sure to read up on those recommendations before purchasing. An older microscope in good shape probably has the best price/performance of all, but that's not a casual purchase, for the most part. For what it's worth, I use an old "grey metal" Bausch and Lomb, purchased used back in the 1970s from around the University of Tennessee medical school.

Update: as those recommendation links say, there are two big choices: a stereo microscope or a compound one. The former is good for looking at whatever (larger) object you can put under it, while the latter is higher-magnification and needs, in most cases, to have something that light can pass through. I'm partial to protozoa and algae myself, so I have the latter, but the former is a very useful instrument, too. A great general reference for someone getting into microscopy is Exploring With the Microscope.

If you're into pond life as well, two excellent references are How to Know the Protozoa and How to Know the Freshwater Algae. I own both, but then, I'm a lunatic, so keep that in mind.

Comments (8) + TrackBacks (0) | Category: Science Gifts

Four Million Compounds to Screen

Posted by Derek

There's a new paper out that does something unique: it compares the screening libraries of two large drug companies, both of which agreed to open their books to each other (up to a point) for the exercise. The closest analog that I know of is when Bayer merged with/bought Schering AG, and the companies published on the differences between the two compound collections as they worked on merging them. (As a sideline, I hope that they've culled some of the things that were in that collection when I worked there. I actually had a gallery of horrible compounds from the files that I kept around to amaze people - it was hard to come up with a functional group that wasn't represented somewhere). That combined Bayer collection (2.75 million compounds) has now been compared with AstraZeneca's (1.4 million compounds). The two of them have clearly been exploring precompetitive collaboration in high-throughput screening, and trying to figure out how much there is to gain.

The first question that comes to mind is how the companies managed this - after all, you wouldn't want another outfit to actually stroll through your structures. They used 2-D fingerprints to get around this problem, the ECFP4 system, to be exact. That's a descriptor that gives a lot of structural information without being reversible; you can't reassemble the actual compound from the fingerprint.

So what's in these collections, and how much do the two overlap? I think that the main take-away from the paper is the answer to the second question, which is "Not as much as you'd think". Using Tanimoto similarity calculations (ratio of the intersecting set to the union set) for all those molecular fingerprints (with a cutoff of 0.70 for "similar"), they found that about 144,000 compounds in the Bayer collection seem to be duplicated in the AstraZeneca collection. Not surprisingly, these turned out to be commercially available; they'd been bought from the same vendors, most likely. That's not much!
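For anyone curious about the arithmetic, the Tanimoto comparison described above can be sketched in a few lines. The fingerprints below are invented toy bit sets standing in for real ECFP4 output (which encodes far more bits per molecule), and `tanimoto` and `is_similar` are hypothetical helper names, not functions from the paper:

```python
# Sketch of the fingerprint comparison described above. A fingerprint is
# modeled as a Python set of "on" bit positions; the Tanimoto similarity
# is the size of the intersection divided by the size of the union.

def tanimoto(fp_a, fp_b):
    """Ratio of the intersecting set to the union set of two bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def is_similar(fp_a, fp_b, cutoff=0.70):
    """Apply the paper's 0.70 'similar' cutoff."""
    return tanimoto(fp_a, fp_b) >= cutoff

# Made-up "on" bits for three illustrative compounds:
compound_1 = {1, 4, 7, 12, 19, 23, 31}
compound_2 = {1, 4, 7, 12, 19, 23, 40}   # differs by one bit
compound_3 = {2, 5, 9, 16}               # unrelated scaffold

print(tanimoto(compound_1, compound_2))  # 0.75 -> counts as "similar"
print(tanimoto(compound_1, compound_3))  # 0.0  -> distinct
```

Note that this says nothing about recovering structures: the whole point of exchanging ECFP4-style fingerprints is that two parties can compute these ratios without ever seeing each other's molecules.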

Considering that all pharmaceutical companies can access the same external vendors this number is certainly lower than expected. There are 290K compounds that are not identical but very similar between both databases, with nearest neighbors with Tanimoto values in the range of 0.7–1.0. In a joint HTS campaign this would lead to a higher coverage of the chemical space in SAR exploration. The remaining 2.3M compounds of the Bayer collection have no similar compounds in the AstraZeneca collection, as is reflected in nearest neighbors with Tanimoto values ≤0.7. Thus, a practical interpretation is that AstraZeneca would extend their available chemical space with 2.3M novel, distinct chemical entities by testing the Bayer Pharma AG collection in a HTS campaign, provided that intellectual property issues could be resolved.

One interesting effect, though, is that compounds which would be classed as "singletons" in each collection (and thus could be a bit problematic to follow up on) had closer relatives over in the other company's collection. That could be a real advantage, rescuing what might otherwise be a collection of unrelated stuff - a few legitimate leads buried in a bunch of tedious compounds that would eventually have to be discarded one by one.

The teams also compared their collections to a large public one, the ChEMBL database:

The public ChEMBL database was chosen to simulate a third-party compound collection. It consisted of 600K molecules derived from medicinal chemistry publications annotated with pharmacological/biological data. Hence, we used this source as a proxy for ‘a pharmaceutical’ compound collection. We opted to avoid the use of commercial screening collections for this assessment as it would clearly reveal the number and source of acquisitions. In Fig. 6, we display the distribution of the nearest neighbors in the ChEMBL compounds (query collection) to the target collection corresponding to the merged AstraZeneca and Bayer Pharma AG compounds. Despite the huge set of more than 3.7 million compounds to which the relatively small ChEMBL collection is compared, more than 80% of this collection has their nearest neighbor with a Tanimoto index below 0.70. Consistent with the volume of published and patented compounds this result again emphasize that even in large collections there is still relevant unexplored chemical space accessed by other groups in industry and academia.

So the question comes up, after all these comparisons: have the two companies decided to do anything about this? The conclusions of the paper seem clear. If you're interested in high-throughput screening, combining the two collections would significantly improve the results obtained from screening either one alone. How much value does either company assign to that, compared to the intellectual property risks involved? The decision (or lack of decision) that's reached on this will serve as the best answer: revealed preference always wins out over stated preference.

Comments (15) + TrackBacks (0) | Category: Drug Assays

December 5, 2012

Chemical Warfare in Syria?

Email This Entry

Posted by Derek

It's a grim topic, but I see that there are worries that the Syrian government, or what's left of it, is being warned not to use its stockpiles of chemical weapons. Back in the early days of the blog, I did a series on the chemistry of these things, and they can be found by scrolling down to the bottom of this page.

As I said at the time, "I'm prepared to argue that against a competent and prepared opponent, the known chemical weapons are essentially useless. The historical record seems to bear this out. Look at the uses of mustard gas since World War I. Morocco in the 1920s, Ethiopian villages in the 1930s, Yemen in the 1960s - a motley assortment of atrocities against people who couldn't retaliate." The uses of nerve gas are a similarly horrible roll call, mainly (and infamously) in Northern Iraq, by the Saddam Hussein government against its Kurdish population. Let's hope that no one is going to add another entry to that list.

Comments (14) + TrackBacks (0) | Category: Chem/Bio Warfare | Current Events

Off-Label Promotion Is Legal, You Say?

Email This Entry

Posted by Derek

You'll have seen the headlines about off-label promotion of drugs by pharma companies. No, not the ones that decry it as a shady marketing technique, punishable by huge fines. I mean the ones about how a federal court has ruled that it's completely legal.

This came as a surprise, at least to me. The U.S. Court of Appeals, in United States v. Caronia, ruled explicitly that "government cannot prosecute pharmaceutical manufacturers and their representatives under the (Food, Drug and Cosmetic Act) for speech promoting the lawful, off-label use of an FDA-approved drug." That does go up against the previous belief that if it's off-label, it isn't lawful. So how did the court get here, and what happens next?

The case concerns Alfred Caronia, a sales rep for Orphan Medical, who was prosecuted for off-label promotion of Xyrem (the sodium salt of gamma-hydroxybutyrate, GHB) in 2005. (The company has since been acquired by Jazz Pharmaceuticals of Dublin). He appealed his conviction on First Amendment grounds, and this argument seems to have rung the bell with the appeals court. Here's a writeup at the FDA Law Blog:

The Court explained that FDA’s construction of the FDCA legalizes the outcome of off-label use by doctors, but “prohibits the free flow of information that would inform that outcome.” The Second Circuit concluded that “the government’s prohibition of off-label promotion by pharmaceutical manufacturers ‘provides only ineffective or remote support for the government’s purpose.’”

There's some case law that backs up this decision, namely Sorrell v. IMS Health Inc. The Supreme Court decision, for those of you who are truly hard-core about this stuff, is here. In that one, the court found that a Vermont law that restricted physicians from selling information on their prescription history violated the First Amendment as well. From this earlier post at the FDA Law Blog, it appears that a lot of the maneuvering during this latest case was about whether Sorrell applied here or not. That post also makes it clear that the FDA's own statements on the legality of off-label promotion are, to put it gently, unclear.

Well, this ruling certainly clears it up. For now. Here's the 82-page decision itself, with a vigorous dissent from the third judge on the appellate panel. But I can tell you that I'm not reading it yet. That's because I expect the FDA to try to take this to the Supreme Court, and it looks (to my non-lawyer eyes) like just the sort of thing they'd grant certiorari to. So I don't think this story is done - but for now, off-label promotion cannot be prosecuted.

And that's a big change indeed. This whole issue has been a black eye for the industry over the years, because (for one thing) the FDA made it clear, over and over, that it believed the practice was illegal, and that companies (and individuals) could be prosecuted for it. In that atmosphere, a company that went ahead was doing so in knowing violation of the rules as they were understood. No drug company, as far as I know, ever tried to make a First Amendment court case out of an FDA fine for off-label promotion (if anyone knows of any examples, send 'em along). Instead, they argued about whether it had happened or not, how much of it there really was, then paid the whacking fines, and then (likely as not) went out and did it some more. And they did it not because they were free-speech activists, but because that's where a lot of big money was to be found. Not the sort of thing that covers you with glory, for sure.

So it's not like this latest ruling is going to rehabilitate many reputations in the marketing departments. It's more like "Great! Turns out to be legal after all! Who knew?"

Comments (28) + TrackBacks (0) | Category: Business and Markets | Regulatory Affairs | Why Everyone Loves Us

December 4, 2012

Science Gifts: Telescopes

Email This Entry

Posted by Derek

As I mention around here from time to time, one of my sidelines is amateur astronomy. I often get asked for telescope recommendations, so in that spirit, I wanted to put up some details in case anyone out there is thinking about one as a gift this year.

The key thing to remember with telescopes is that other things being equal, aperture wins out, because you will be able to see more objects and more details. Other things are not always equal, naturally, but that's the background of the various disputes between amateur astronomers about which kind of scope is best. And keep in mind that while a bigger scope can show you more, the best telescope is the one that you'll actually haul out and use. Overbuying has not been my problem, dang it all, but it has been known to happen. Overall, I'd say a six-inch aperture should be the starting point, although opinions vary on that, too.
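To put a rough number on "aperture wins": light grasp scales with the square of the aperture, so the step from six to eight inches is bigger than it sounds. A back-of-the-envelope sketch (the limiting-magnitude figure uses the standard 2.5·log10 brightness rule; these are rough rules of thumb, not claims about any particular scope):

```python
import math

def light_grasp_ratio(aperture_a: float, aperture_b: float) -> float:
    """Ratio of light-gathering area between two apertures.
    Area, and hence light grasp, scales with aperture squared."""
    return (aperture_a / aperture_b) ** 2

def magnitude_gain(area_ratio: float) -> float:
    """Approximate limiting-magnitude improvement from extra light grasp,
    via the standard 2.5 * log10(brightness ratio) relation."""
    return 2.5 * math.log10(area_ratio)

ratio = light_grasp_ratio(8, 6)
print(f"An 8-inch gathers {ratio:.2f}x the light of a 6-inch")
print(f"That's roughly {magnitude_gain(ratio):.2f} magnitudes fainter")
```

The same arithmetic is why the aperture arguments never end: every jump in size buys real, quantifiable performance - right up until the scope gets too heavy to carry outside.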

You've basically got three kinds of scopes to consider: refractors, reflectors, and folded-path. The refractors are the classic lens-in-the-front types. They can provide very nice views, especially of the planets and other brighter objects. Many planetary observers swear by them. But per inch of aperture, they're the most expensive, especially since for good views you have to spring for high-end optics to keep from having rainbow fringes around everything. I can't recommend a refractor for a first scope, for these reasons. That's especially true since a lot of the refractors you see for sale out there are of the cheap/nearly worthless variety - a casual buyer would be appalled at the price tag for a decent one. No large refractors have been built for astronomical research since well before World War II.

Reflectors are variations on Isaac Newton's design, which was: open tube at the top, mirror at the bottom, and you look through the eyepiece in the side, after the light reflects back off an angled secondary mirror. All modern large-aperture research telescopes are some variety of reflector. They provide the most aperture per dollar, especially with a simple "Dobsonian" mount (more on mounts in a minute). They do have to be aligned (collimated) when you first get them, and every so often afterwards, to make sure the mirrors are all working together. A badly collimated reflector will provide ugly views indeed, but it's at least easy to fix. And if the primary mirror is of poor quality, you're also in trouble, but the average these days is actually quite good.

Finally, the folded-path (catadioptric) types (Schmidt-Cassegrain and Maksutov designs, mostly) are a hybrid. There's a mirror in the back, but also a corrector lens plate covering the front. The light path ends up coming out the back of the tube, through a hole in the primary mirror. Like refractors, these basically never have to be aligned, but they're fairly expensive (although nowhere near as bad as refractors when you start going up in size). And their views are pretty good, although purists argue about how they compare to a reflector of equal size. (Refractor owners would probably win that argument, but they have to drop out at about the five or six-inch mark, when the other two telescope designs are just getting started). One nice thing about a scope of this kind is that it's more compact, making it an easier design to mount.

And that brings up the next topic: what do you mount one of these fine optical tubes on, so you can use it to actually look at things? An equatorial or a fork mount will let you follow the motion of the objects in the sky easily, especially with a motor drive - the Earth's rotation is always sweeping things out of your view, otherwise. A decent mount of this kind will definitely add to your costs, though. The "Dobsonian" mount is a favorite of reflector owners, since it's quite simple and allows you to put more of your money into the optics. You do have to manually grab the telescope tube and move it, though, which takes some practice (and sometimes some home-brew messing around with the mount). Some people don't mind this, others are driven nuts by it. You can put a motorized platform under a Dobsonian (my own setup) to motor-drive it, which some consider the best of both worlds.

On the topic of motorized telescope mounts, I should say something about "Go-to" models. These are not only motorized to track objects, they will slew the scope around to find objects from a database. I'm very much of two minds on these. For an experienced observer, an astrophotographer, or a researcher, they can be an indispensable tool to spend more time observing and less time hunting around. For a total beginner, they can ease a lot of frustration when first learning the sky. But at the same time, they also can keep someone from learning the sky at all, and they can also encourage hopping too quickly from one object to another. If you do that, you can see all sorts of stuff in one evening, while at the same time hardly seeing anything at all.

Visual observing is all about training yourself to see things. One thing every new telescope owner should know is that Very Little Ever Looks Like the Photographs. Especially since the photos are long exposures on wildly sensitive CCD chips, through huge instruments, and under excellent conditions. Through the eyepiece, nebulae are not tapestries of red, pink, green, and purple: they range from greenish grey to bluish grey. And although with practice you'll pick up really surprising and beautiful amounts of detail in deep-sky objects, at first, everything can look like a blob. Or a smear. Or not appear to even be there at all, even when a practiced observer can see it right smack in the center of the eyepiece field. I really enjoy seeing these things with my own eyes, and trying to find out just how much detail I can pick out and how faint I can go, but it's not for everyone.

Now, photography is another story. Astrophotography is an expensive word, although thanks to webcams and the like, getting into it is not quite as bad as it used to be. But for most purposes, you'll need one of those motorized mounts that'll track objects across the sky. That's very convenient for visual observing, too, naturally, but a really good one for long-exposure photography can cost more than the telescope itself! A motorized platform is almost never accurate enough for these purposes, I should add. I'm not an astrophotographer myself, so I won't go into great detail, but if you want to try this part of the hobby out (or know someone who does), prepare to think about the telescope mount as much as you think about the optics. As you'd imagine, all astrophotography these days is digital, with equipment ranging from simple webcams all the way up to stuff that easily costs as much as a new car, or perhaps a small house.
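To make the tracking problem concrete: the sky drifts past at the sidereal rate (about 15 arcseconds per second of time at the celestial equator), and how fast that smears a photo depends on your image scale. Here's a rough sketch using the standard 206.265 image-scale formula; the example sensor and focal length are hypothetical, and real trailing also depends on declination and seeing:

```python
SIDEREAL_ARCSEC_PER_S = 15.041  # apparent sky drift at the celestial equator

def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds of sky per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

def max_untracked_exposure(pixel_size_um: float, focal_length_mm: float,
                           tolerance_px: float = 1.0) -> float:
    """Seconds before an untracked equatorial star trails `tolerance_px` pixels."""
    return tolerance_px * image_scale(pixel_size_um, focal_length_mm) / SIDEREAL_ARCSEC_PER_S

# Hypothetical rig: 5-micron pixels behind 1000 mm of focal length
print(f"Image scale: {image_scale(5, 1000):.2f} arcsec/pixel")
print(f"Untracked exposure limit: {max_untracked_exposure(5, 1000):.2f} s")
```

At those numbers you get well under a tenth of a second before stars start to trail, which is exactly why long-exposure work lives or dies on the quality of the mount rather than the optics.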

So, what to buy? I've scattered some Amazon links in the above to representative scopes. In general, Meade and Celestron are the two brands you'll see the most, and if you stay away from their cheap refractors, you should be fine. And Orion also sells good stuff of their own brand (on Amazon and from their own site). (Again, I'd stay away from inexpensive refractors there, too). Other good sources are Astronomics and Anacortes.

Update: as pointed out in the comments, an excellent resource for specific opinions on different models, and telescope advice in general, is Scopereviews. Cloudy Nights is also a huge resource.

Comments (17) + TrackBacks (0) | Category: Science Gifts

Merck Presses Ahead on Alzheimer's

Email This Entry

Posted by Derek

One Alzheimer's compound recently died off in the clinic - Bristol-Myers Squibb's avagacestat, a gamma-secretase inhibitor, has been pulled from trials. The compound "did not establish a profile that supported advancement" to Phase III, says the company. Gamma-secretase has been a troubled area for some time, highlighted by the complete failure of Lilly's semagacestat. I wondered, when that one cratered, what they were thinking at BMS, and now we know.

But Merck is getting all the attention in Alzheimer's today. They've announced that their beta-secretase inhibitor, MK-8931, is moving into Phase III, and the headlines are. . .well, they're mostly just not realistic. "Hope for Alzheimer's", "Merck Becomes Bigger Alzheimer's Player", and so on. My two (least) favorites are "Merck Races to Beat Lilly Debut" and "Effective Alzheimer's Drug May Be Just Three Years Away." Let me throw the bucket of cold water here: that first headline is extremely unlikely, and the second one is insane.

As I've said here several times, I don't think that there's going to be any big Lilly debut into Alzheimer's therapy with their lead antibody candidate, solanezumab. (And if there is, we might regret it). The company does have a beta-secretase (BACE) inhibitor, but that's not what these folks are talking about. And looking at Merck's compound, you really have to wonder if there's ever going to be one there, either. I like Fierce Biotech's headline a lot better: "Merck Ignores Red Flags and Throws Dice on PhII/III Alzheimer's Gamble". That, unfortunately, is a more realistic appraisal.

It's interesting, though, that Merck is testing this approach in a patient population that includes patients with moderate cases. After solanezumab and bapineuzumab appeared to have hit that target without any clear signal that they had improved symptoms for patients with more fully developed cases, there has been a growing move to shift R&D into earlier-stage patients, whose brains have not already been seriously damaged by the disease. Merck is likely to face growing skepticism that it can succeed with the amyloid hypothesis when tackling the same population that hasn't delivered positive data.

And BACE has been a rough place to work in over the years. The literature is littered with oddities, since finding a potent compound that will also be selective and get into the brain has been extremely difficult. I actually applaud Merck for having the nerve to try this, but it really is a big roll of the dice, and there's no use pretending otherwise. I wish that the headlines would get that across, as part of a campaign for a more realistic idea of what drug discovery is actually like.

Comments (17) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials

December 3, 2012

Marcia Angell's Interview: I Just Can't

Email This Entry

Posted by Derek

I have tried to listen to this podcast with Marcia Angell, on drug companies and their research, but I cannot seem to make it all the way through. I start shouting at the screen, at the speakers, at the air itself. In case you're wondering about whether I'm overreacting, at one point she makes the claim that drug companies don't do much innovation, because most of our R&D budget is spent on clinical trials, and "everyone knows how to do a clinical trial". See what I mean?

Angell has many very strongly held opinions on the drug business. But her take on R&D has always seemed profoundly misguided to me. From what I can see, she thinks that identifying a drug target is the key step, and that everything after that is fairly easy, fairly cheap, and very, very profitable. This is not correct. Really, really, not correct. She (and those who share this worldview, such as her co-author) believe that innovation has fallen off in the industry, but that this has happened mostly by choice. Considering the various disastrously expensive failures the industry has gone through while trying to expand into new diseases, new indications, and new targets, I find this line of argument hard to take.

So, I see, does Alex Tabarrok. I very much enjoyed that post; it does some of the objecting for me, and illustrates why I have such a hard time dealing point-by-point with Angell and her ilk. The misconceptions are large, various, and ever-shifting. Her ideas about drug marketing costs, which Tabarrok especially singles out, are a perfect example (and see some of those other links to my old posts, where I make some similar arguments to his).

So no, I don't think that Angell has changed her opinions much. I sure haven't changed mine.

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us

Fluorous Technologies Is No More

Email This Entry

Posted by Derek

Word comes that Fluorous is shutting down. The company had been trying for several years to make a go of it with its polyfluorinated materials, used for purification and reaction partitioning, but the commercial side of the business has apparently been struggling for a while. It's a tough market, and there hasn't, as far as I know, been what the software people would call a "killer app" for fluorous techniques - they're interested, often useful, but it's been hard to persuade enough people to take a crack at them.

The company is still taking orders for its remaining stock, and the link above will allow you to download their database of literature references for fluorous techniques, among other things. I wish the people involved the best, and I wish that things had worked out better.

Comments (6) + TrackBacks (0) | Category: Business and Markets | Chemical News

Stanford's Free Electron Laser Blasts Away

Email This Entry

Posted by Derek

Here's another next-generation X-ray crystal paper, this time using a free electron laser X-ray source. That's powerful enough to cause very fast and significant radiation damage to any crystals you put in its way, so the team used a flow system, with a stream of small crystals of T. brucei cathepsin B enzyme being exposed in random orientations to very short pulses of extremely intense X-rays. (Here's an earlier paper where the same team used this technique to obtain a structure of the Photosystem I complex). Note that this was done at room temperature, instead of cryogenically. The other key feature is that the crystals were actually those formed inside Sf9 insect cells via baculovirus overexpression, not purified protein that was then crystallized in vitro.

Nearly 4 million of these snapshots were obtained, with almost 300,000 of them showing diffraction. 60% of these were used to refine the structure, which came out at 2.1 Angstroms, and clearly showed many useful features of the enzyme. (Like others in its class, it starts out inhibited by a propeptide, which is later cleaved - that's one of the things that makes it a challenge to get an X-ray structure by traditional means).
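The winnowing from raw detector frames down to a refined structure is worth laying out as plain arithmetic, using the round figures quoted above (the exact counts are in the paper):

```python
snapshots = 4_000_000              # "nearly 4 million" frames collected
diffracting = 300_000              # "almost 300,000" showed diffraction
refined = int(0.60 * diffracting)  # 60% of those went into the refinement

print(f"Hit rate: {diffracting / snapshots:.1%}")
print(f"Patterns used in refinement: {refined:,}")
print(f"Overall yield per frame: {refined / snapshots:.1%}")
```

That single-digit-percent hit rate is typical of this kind of serial crystallography: since each ultra-intense pulse destroys the crystal it hits, you make up for it with sheer volume of randomly oriented samples.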

I'm always happy to see bizarre new techniques used to generate X-ray structures. Although I'm well aware of their limitations, such structures are still tremendous opportunities to learn about protein functions and how our small molecules interact with them. I wrote about the instrument used in these papers here, before it came on line, and it's good to see data coming out of it.

Comments (7) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News

November 30, 2012

Science Gifts: Actual Med-Chem Books

Email This Entry

Posted by Derek

A few years ago, I asked the readership for the best books on the practice of medicinal chemistry and drug discovery itself. These may not be exactly stocking stuffers, at least not for most people, but I wanted to mention these again, and to solicit nominations for more recent titles to add to the list. So, here's what I have at the moment:

For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.

Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson.

Case histories of successful past projects are found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.

Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery.

For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.

Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.

In a related field, the standard desk reference for toxicology seems to be Casarett & Doull's Toxicology: The Basic Science of Poisons. Since all of us make a fair number of poisons (as we eventually discover), it's worth a look.

As mentioned, titles to add to the list are welcome - I'll watch the comments for ideas!

Comments (13) + TrackBacks (0) | Category: Book Recommendations | Science Gifts

A Broadside Against The Way We Do Things Now

Email This Entry

Posted by Derek

There's a paper out in Drug Discovery Today with the title "Is Poor Research the Cause of Declining Productivity in the Drug Industry?" After reviewing the literature on phenotypic versus target-based drug discovery, the author (Frank Sams-Dodd) asks (and has asked before):

The consensus of these studies is that drug discovery based on the target-based approach is less likely to result in an approved drug compared to projects based on the physiological-based approach. However, from a theoretical and scientific perspective, the target-based approach appears sound, so why is it not more successful?

He makes the points that the target-based approach has the advantages of (1) seeming more rational and scientific to its practitioners, especially in light of the advances in molecular biology over the last 25 years, and (2) seeming more rational and scientific to the investors:

". . .it presents drug discovery as a rational, systematic process, where the researcher is in charge and where it is possible to screen thousands of compounds every week. It gives the image of industrialisation of applied medical research. By contrast, the physiology-based approach is based on the screening of compounds in often rather complex systems with a low throughput and without a specific theory on how the drugs should act. In a commercial enterprise with investors and share-holders demanding a fast return on investment it is natural that the drug discovery efforts will drift towards the target-based approach, because it is so much easier to explain the process to others and because it is possible to make nice diagrams of the large numbers of compounds being screened.

This is the "Brute Force bias". And he goes on to another key observation: that this industrialization (or apparent industrialization) meant that there were a number of processes that could be (in theory) optimized. Anyone who's been close to a business degree knows how dear process optimization is to the heart of many management theorists, consultants, and so on. And there's something to that, if you're talking about a defined process like, say, assembling pickup trucks or packaging cat litter. This is where your six-sigma folks come in, your Pareto analysis, your Continuous Improvement people, and all the others. All these things are predicated on the idea that there is a Process out there.

See if this might sound familiar to anyone:

". . .the drug discovery paradigm used by the pharmaceutical industry changed from a disease-focus to a process-focus, that is, the implementation and organisation of the drug discovery process. This meant that process-arguments became very important, often to the point where they had priority over scientific considerations, and in many companies it became a requirement that projects could conform to this process to be accepted. Therefore, what started as a very sensible approach to drug discovery ended up becoming the requirement that all drug discovery programmes had to conform to this approach – independently of whether or not sufficient information was available to select a good target. This led to dogmatic approaches to drug discovery and a culture developed, where new projects must be presented in a certain manner, that is, the target, mode-of-action, target-validation and screening cascade, and where the clinical manifestation of the disease and the biological basis of the disease at systems-level, that is, the entire organism, were deliberately left out of the process, because of its complexity and variability.

But are we asking too much when we declare that our drugs need to work through single defined targets? Beyond that, are we even asking too much when we declare that we need to understand the details of how they work at all? Many of you will have had such thoughts (and they've been expressed around here as well), but they can tend to sound heretical, especially that second one. But that gets to the real issue, the uncomfortable, foot-shuffling, rather-think-about-something-else question: are we trying to understand things, or are we trying to find drugs?

"False dichotomy!", I can hear people shouting. "We're trying to do both! Understanding how things work is the best way to find drugs!" In the abstract, I agree. But given the amount there is to understand, I think we need to be open to pushing ahead with things that look valuable, even if we're not sure why they do what they do. There were, after all, plenty of drugs discovered in just that fashion. A relentless target-based environment, though, keeps you from finding these things at all.

What it does do, though, is provide vast opportunities for keeping everyone busy. And not just "busy" in the sense of working on trivia, either: working out biological mechanisms is very, very hard, and in no area (despite decades of beavering away) can we say we've reached the end and achieved anything like a complete picture. There are plenty of areas that can and will soak up all the time and effort you can throw at them, and yield precious little in the way of drugs at the end of it. But everyone was working hard, doing good science, and doing what looked like the right thing.

This new paper spends quite a bit of time on the mode-of-action question. It makes the point that understanding the MoA is something that we've imposed on drug discovery, not an intrinsic part of it. I've gotten some funny looks over the years when I've told people that there is no FDA requirement for details of a drug's mechanism. I'm sure it helps, but in the end, it's efficacy and safety that carry the day, and both of those are determined empirically: did the people in the clinical trials get better, or worse?

And as for those times when we do have mode-of-action information, well, here are some fighting words for you:

". . .the ‘evidence’ usually involves schematic drawings and flow-diagrams of receptor complexes involving the target. However, it is almost never understood how changes at the receptor or cellular level affect the physiology of the organism or interfere with the actual disease process. Also, interactions between components at the receptor level are known to be exceedingly complex, but a simple set of diagrams and arrows are often accepted as validation for the target and its role in disease treatment even though the true interactions are never understood. What this in real life boils down to is that we for almost all drug discovery programmes only have minimal insight into the mode-of-action of a drug and the biological basis of a disease, meaning that our choices are essentially pure guess-work.

I might add at this point that the emphasis on defined targets and mode of action has been so much a part of drug discovery in recent times that it's convinced many outside observers that target ID is really all there is to it. Finding and defining the molecular target is seen as the key step in the whole process; everything past that is just some minor engineering (and marketing, naturally). The fact that this point of view is a load of fertilizer has not slowed it down much.

I think that if one were to extract a key section from this whole paper, though, this one would be a good candidate:

". . .it is not the target-based approach itself that is flawed, but that the focus has shifted from disease to process. This has given the target-based approach a dogmatic status such that the steps of the validation process are often conducted in a highly ritualised manner without proper scientific analysis and questioning whether the target-based approach is optimal for the project in question.

That's one of those "Don't take this in the wrong way, but. . ." statements, which are, naturally, always going to be taken in just that wrong way. But how many people can deny that there's something to it? Almost no one denies that there's something not quite right, with plenty of room for improvement.

What Sams-Dodd has in mind for improvement is a shift towards looking at diseases, rather than targets or mechanisms. For many people, that's going to be one of those "Speak English, man!" moments, because for them, finding targets is looking at diseases. But that's not necessarily so. We would have to turn some things on their heads a bit, though:

In recent years there have been considerable advances in the use of automated processes for cell-culture work, automated imaging systems for in vivo models and complex cellular systems, among others, and these developments are making it increasingly possible to combine the process-strengths of the target-based approach with the disease-focus of the physiology-based approach, but again these technologies must be adapted to the research question, not the other way around.

One big question is whether the investors funding our work will put up with such a change, or with such an environment even if we did establish it. And that gets back to the discussion of Andrew Lo's securitization idea, the talk around here about private versus public financing, and many other topics. Those I'll reserve for another post. . .

Comments (29) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | Who Discovers and Why

November 29, 2012

There Go the Lights

Email This Entry

Posted by Derek

An awful lot of people are using an awful lot of bad language in Cambridge, MA right now. At about 4:25 PM (EST), the power flickered and went out in a large swath of East Cambridge, out to somewhere near Harvard Square. That takes out MIT and more technology-based companies than you'd care to count, so everyone is getting the chance to find out how their backup power supplies work (or don't), and how their expensive, finicky equipment takes to having the current lurch around.

I was in my office when things browned down and went out, and it soon became clear that the whole area had gone dark. Public transit was working (when I got on it, anyway), and my commute home is the same as always (for better or worse!), but that won't be the case for people depending on spotty streetlights and the like. Not to mention the various homeward-bound folks who are presumably sitting, none too happily, in elevators right now.

Servers, NMR machines, LC/MS units, -80 degree freezers, lab fridges, automation of all sorts are to be found in heaps in that part of town; it's probably got one of the densest concentrations of such equipment anywhere. Getting it all running again will not be enjoyable.

Comments (32) + TrackBacks (0) | Category: Current Events

When Drug Launches Go Bad

Email This Entry

Posted by Derek

For those connoisseurs of things that have gone wrong, here's a list of the worst drug launches of recent years. And there are some rough ones in there, such as Benlysta, Provenge, and (of course) Makena. And from an aesthetic standpoint, it's hard not to think that if you name your drug Krystexxa, you deserve what you get. Read up and try to avoid being part of such a list yourself. . .

Comments (8) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

Science Gifts: The Elements

Email This Entry

Posted by Derek

In my post the other day on do-it-at-home science experiments and demonstrations, I left out Theo Gray's Mad Science. That's because, although it looks like a very fun book, it seems to require a number of things that most people don't have lying around the house, like a Van de Graaff generator. (If you're in the market, though, you can get one here - I'm starting to wonder what it is that Amazon doesn't sell).

But Gray's The Elements, which I've recommended before, is an excellent thing to have for anyone who's curious about the periodic table or chemistry in general. I remember as a child browsing through the old Time-Life book on the elements (my grandparents had a copy; I'd read it every time we visited them). This is the 21st century version. He's done a follow-up, the Elements Vault, which is more of a tour of the Periodic Table by columns, rather than by rows.

And I'm ordering The Elements Puzzle for the rest of the family for Christmas. (My kids don't read my site, or at least not yet). It's a 1000-piece jigsaw puzzle that produces a three-foot-wide periodic table, with information and photographs of each element. They're bound to learn something by putting it together!

This is a good time to note that this blog is an Amazon affiliate. I get a small cut of whatever's ordered through these links (at no charge to the buyer). And yes, Amazon sends me a 1099 on the yearly total, so I do pay taxes on it!

Comments (1) + TrackBacks (0) | Category: Science Gifts

Roche Repurposes

Email This Entry

Posted by Derek

Another drug repurposing initiative is underway, this one between Roche and the Broad Institute. The company is providing 300 failed clinical candidates to be run through new assays, in the hopes of finding a use for them.

I hope something falls out of this, because any such compounds will naturally have a substantial edge in further development. They should all have been through toxicity testing, they've had some formulations work done on them, a decent scale-up route has been identified, and so on. And many of these candidates fell out in Phase II, so they've even been in human pharmacokinetics.

On the other hand (there's always another hand), you could also say that this is just another set of 300 plausible-looking compounds, and what does a 300-compound screening set get you? The counterargument to this is that these structures have not only been shown to have good absorption and distribution properties (no small thing!), they've also been shown to bind well to at least one target, which means that they may well be capable of binding well to other similar motifs in other active sites. But the counterargument to that is that now you've removed some of those advantages in the paragraph above, because any hits will now come with selectivity worries, since they come with guaranteed activity against something else.

This means that the best case for any repurposed compound is for its original target to be good for something unanticipated. So that Roche collection of compounds might also be thought of as a collection of failed targets, although I doubt if there are a full 300 of those in there. Short of that, every repurposing attempt is going to come with its own issues. It's not that I think these shouldn't be tried - why not, as long as it doesn't cost too much - but things could quickly get more complicated than they might have seemed. And that's a feeling that any drug discovery researcher will recognize like an old, er, friend.

For more on the trickiness of drug repurposing, see John LaMattina here and here. And the points he raises get to the "as long as it doesn't cost too much" line in the last paragraph. There's opportunity cost involved here, too, of course. When the Broad Institute (or Stanford, or the NIH) screens old pharma candidates for new uses, they're doing what a drug company might do itself, and therefore possibly taking away from work that only they could be doing instead. Now, I think that the Broad (for example) already has a large panel of interesting screens set up, so running the Roche compounds through them couldn't hurt, and might not take that much more time or effort. So why not? But trying to push repurposing too far could end up giving us the worst of both worlds. . .

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

November 28, 2012

Every Tiny Detail

Email This Entry

Posted by Derek

Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .

Comments (5) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News | Drug Development

Advice For Those Trying High-Throughput Screening

Email This Entry

Posted by Derek

So here's a question that a lot of people around here will have strong opinions on. I've heard from someone in an academic group that's looking into doing some high-throughput screening. As they put it, they don't want to end up as "one of those groups", so they're looking for advice on how to get into this sensibly.

I applaud that; I think it's an excellent idea to look over the potential pitfalls before you hop into an area like this. My first advice would be to think carefully about why you're doing the screening. Are you looking for tool compounds? Do they need to get into cells? Are you thinking of following up with in vivo experiments? Are you (God help you) looking for potential drug candidates? Each of these requires a somewhat different view of the world.

No matter what, I'd say that you should curate the sorts of structures that you're letting in. Consider the literature on frequent-hitter structures (here's a good starting point, blogged here), and decide how much you value getting hits versus being able to follow up on them. I'd also say to keep in mind the Shoichet work on aggregators (most recently blogged here), especially the lesson that these have to be dealt with assay-by-assay. Compounds that behave normally in one system can be trouble in others - make no assumptions.
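Since the question was about practical first steps, here's one way to picture that triage stage in code. Everything below is a hedged sketch: the descriptor names, cutoffs, and tiny compound list are invented for illustration, and a real pipeline would compute descriptors with a cheminformatics toolkit and use a published frequent-hitter (PAINS-style) substructure list rather than a hand-typed set of class names.

```python
# Minimal library-triage sketch on precomputed descriptors. All names,
# cutoffs, and compounds here are hypothetical illustrations; real efforts
# would compute descriptors (e.g. with RDKit) and flag substructures against
# a published frequent-hitter list.

SOFT_CUTOFFS = {
    "mw":    (100, 350),   # molecular weight window for fragment-like compounds
    "clogp": (-2.0, 3.0),  # lipophilicity window
    "hbd":   (0, 3),       # hydrogen-bond donor count
}

FREQUENT_HITTER_CLASSES = {"rhodanine", "quinone", "catechol"}  # illustrative

def triage(compound):
    """Return (keep, reasons). `compound` is a dict of descriptors plus an
    optional set of pre-flagged substructure classes."""
    reasons = []
    for prop, (lo, hi) in SOFT_CUTOFFS.items():
        value = compound[prop]
        if not lo <= value <= hi:
            reasons.append(f"{prop}={value} outside [{lo}, {hi}]")
    flagged = compound.get("flags", set()) & FREQUENT_HITTER_CLASSES
    if flagged:
        reasons.append(f"frequent-hitter class: {sorted(flagged)}")
    return (not reasons, reasons)

library = [
    {"id": "frag-1", "mw": 210, "clogp": 1.2, "hbd": 1, "flags": set()},
    {"id": "frag-2", "mw": 480, "clogp": 4.5, "hbd": 2, "flags": set()},
    {"id": "frag-3", "mw": 190, "clogp": 0.8, "hbd": 1, "flags": {"rhodanine"}},
]

kept = [c["id"] for c in library if triage(c)[0]]
print(kept)  # -> ['frag-1']
```

The useful design point is that every rejection carries its reasons with it, so the filters can be audited (and argued about) later, instead of compounds silently vanishing from the list.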

But there's a lot more to say about this. What would all of you recommend?

Comments (13) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays

Think Your Drug Is Strange-Looking? Beat This.

Email This Entry

Posted by Derek

We have a late entry in this year's "Least Soluble Molecule - Dosed In Vivo Division" award. Try feeding that into your cLogP program and see what it tells you about its polarity. (This would be a good ChemDraw challenge, too). What we're looking at, I'd say, is a sort of three-dimensional asphalt, decorated around its edges with festive scoops of lard.
[Image: nanographene.png]
The thing is, such structures are perfectly plausible building blocks for various sorts of nanotechnology. It would not, though, have occurred to me to feed any to a rodent. But that's what the authors of this new paper managed to do. The compound shown is wildly fluorescent (as well you might think), and the paper explores its possibilities as an imaging agent. The problem with many - well, most - fluorescent species is photobleaching. That's just the destruction of your glowing molecule by the light used to excite it, and it's a fact of life for almost all the commonly used fluorescent tags. Beat on them enough, and they'll stop emitting light for you.

But this beast is apparently more resistant to photobleaching. (I'll bet it's resistant to a lot of things). Its NMR spectrum is rather unusual - those two protons on the central triptycene show up at 8.26 and 8.91, for example. And in case you're wondering, the M+1 peak in the mass spec comes in at a good solid 2429 mass units, a region of the detector that I'm willing to bet most of us have never explored, or not willingly. The melting point is reported as ">300 C", which is sort of disappointing - I was hoping for something in the four figures.

The paper says, rather drily, that "To direct the biological application of our 3D nanographene, water solubilization is necessary", but that's no small feat. They ended up using Pluronic surfactant, which gave them 100nm particles of the stuff, and they tried these out on both cells and mice. The particles showed very low cytotoxicity (not a foregone conclusion by any means), and were actually internalized to some degree. Subcutaneous injection showed that the compound accumulated in several organs, especially the liver, which is just where you'd expect something like this to pile up. How long it would take to get out of the liver, though, is a good question.

The paper ends with the usual sort of language about using this as a platform for chemotherapy, etc., but I take that as the "insert technologically optimistic conclusion here" macro that a lot of people seem to have loaded into their word processing programs. The main reason this caught my eye is that this is quite possibly the least drug-like molecule I've ever seen actually dosed in an animal. When will we see its like again?

Comments (26) + TrackBacks (0) | Category: Chemical News | Drug Assays

November 27, 2012

Science Gifts: Experiments At Home

Email This Entry

Posted by Derek

I've recommended Robert Bruce Thompson's Illustrated Guide to Home Chemistry Experiments before, and I'd like to do so again as a science gift for anyone you know that would like to see what real chemistry is like (interested and capable middle- and high-school students are a particularly good audience). And I'm glad to report that Thompson has added to the series: you can now get his Illustrated Guide to Home Biology Experiments and Illustrated Guide to Home Forensic Science Experiments, both of which also get excellent reviews. Other good resources in this area would be Hands-On Chemistry Activities and its companion Hands-On Physics Activities. Enjoy!

Comments (6) + TrackBacks (0) | Category: Science Gifts

How Do Chemists (Think That They) Judge Compounds?

Email This Entry

Posted by Derek

There's an interesting paper out in PLoS One, called "Inside the Mind of a Medicinal Chemist". Now, that's not necessarily a place that everyone wants to go - mine is not exactly a tourist trap, I can tell you - but the authors are a group from Novartis, so they knew what they were getting into. The questions they were trying to answer on this spelunking expedition were:

1) How and to what extent do chemists simplify the problem of identifying promising chemical fragments to move forward in the discovery process? 2) Do different chemists use the same criteria for such decisions? 3) Can chemists accurately report the criteria they use for such decisions?

They took 19 lucky chemists from the Novartis labs and asked them to go through 8 batches of 500 fragments each and select the desirable compounds. For those of you outside the field, that is, unfortunately, a realistic test. We often have to work through lists of this type, for several reasons: "We have X dollars to spend on the screening collection - which compounds should we buy?" "Which of these compounds we already own should still be in the collection, and which should we get rid of?" "Here's the list of screening hits for Enzyme Y: which of these look like useful starting points?" I found myself just yesterday going through about 350 compounds for just this sort of purpose.

They also asked the chemists which of a set of factors they used to make their decisions. These included polarity, size, lipophilicity, rings versus chains, charge, particular functional groups, and so on. Interestingly, once the 19 chemists had made their choices (and reported the criteria they used in doing so), the authors went through the selections using two computational classification algorithms, semi-naïve Bayesian (SNB) and Random Forest (RF). This showed that most of the chemists actually used only one or two categories as important filters, a result that ties in with studies in other fields on how experts in a given subject make decisions. Reducing the complexity of a multifactorial problem is a key step for the human brain to deal with it; how well this reduction is done (trading accuracy for speed) is what can distinguish an expert from someone who's never faced a particular problem before.
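To make that "one or two dominant factors" finding concrete, here's a toy version of the analysis in pure Python. The compounds, descriptor names, and pick pattern are all invented, and the paper's actual SNB/Random Forest machinery is far more sophisticated - but the intuition is the same: if a single threshold on a single descriptor reproduces most of a chemist's yes/no calls, that descriptor is doing most of the work.

```python
# Toy version of the "which descriptor drives this chemist's picks?" analysis.
# All data are invented; this just ranks descriptors by the best accuracy a
# single cut on each one can achieve against the recorded decisions.

def best_threshold_accuracy(values, picks):
    """Best accuracy achievable by a single threshold on one descriptor."""
    n = len(picks)
    best = 0.0
    for cut in sorted(set(values)):
        for accept_low in (True, False):
            pred = [(v <= cut) == accept_low for v in values]
            acc = sum(p == y for p, y in zip(pred, picks)) / n
            best = max(best, acc)
    return best

# Hypothetical screen: a chemist who (knowingly or not) rejects big compounds.
compounds = [
    {"size": 180, "clogp": 2.5}, {"size": 210, "clogp": -0.5},
    {"size": 320, "clogp": 1.0}, {"size": 350, "clogp": 3.5},
    {"size": 150, "clogp": 0.2}, {"size": 300, "clogp": 2.0},
]
picks = [True, True, False, False, True, False]  # the chemist's yes/no calls

ranking = sorted(
    ("size", "clogp"),
    key=lambda d: best_threshold_accuracy([c[d] for c in compounds], picks),
    reverse=True,
)
print(ranking)  # -> ['size', 'clogp']: size alone explains every call here
```

A decision stump like this is the one-feature limiting case of what the Random Forest classifier does at scale, which is why the published analysis could tell a chemist that size, not the properties they reported, was running the show.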

But the chemists in this sample didn't all zoom in on the same factors. One chemist showed a strong preference away from the compounds with a higher polar surface area, for example, while another seemed to make size the most important descriptor. The ones using functional groups to pick compounds also showed some individual preferences - one chemist, for example, seemed to downgrade heteroaromatic compounds, unless they also had a carboxylic acid, in which case they moved back up the list. Overall, the most common one-factor preference was ring topology, followed by functional groups and hydrogen bond donors/acceptors.

Comparing structural preferences across the chemists revealed many differences of opinion as well. One of them seemed to like fused six-membered aromatic rings (that would not have been me, had I been in the data set!), while others marked those down. Some tricyclic structures were strongly favored by one chemist, and strongly disfavored by another, which makes me wonder if the authors were tempted to get the two of them together and let them fight it out.

How about the number of compounds passed? Here's the breakdown:

One simple metric of agreement is the fraction of compounds selected by each chemist per batch. The fraction of compounds deemed suitable to carry forward varied widely between chemists, ranging from 7% to 97% (average = 45%), though each chemist was relatively consistent from batch to batch. . .This variance between chemists was not related to their ideal library size (Fig. S7A) nor linearly related to the number of targets a chemist had previously worked on (R2 = 0.05, Fig. S7B). The fraction passed could, however, be explained by each chemist’s reported selection strategy (Fig. S7C). Chemists who reported selecting only the “best” fragments passed a lower fraction of compounds (0.13±0.07) than chemists that reported excluding only the “worst” fragments (0.61±0.34); those who reported intermediate strategies passed an intermediate fraction of compounds (0.39±0.25).

Then comes a key question: how similar were the chemists' picks to each other, or to their own previous selections? A well-known paper from a few years ago suggested that the same chemists, looking at the same list after the passage of time (and more lists!), would pick rather different sets of compounds. (Update: see the comments for some interesting inside information on this work.) Here, the authors sprinkled in a couple of hundred compounds that were present in more than one list to test this out. And I'd say that the earlier results were replicated fairly well. Comparing chemists' picks to themselves, the average similarity was only 0.52, which the authors describe, perhaps charitably, as "moderately internally consistent".

But that's a unanimous chorus compared to the consensus between chemists. These had similarities ranging from 0.05 (!) to 0.52, with an average of 0.28. Overall, only 8% of the compounds had the same judgement passed on them by at least 75% of the chemists. And the great majority of those agreements were on bad compounds, as opposed to good ones: only 1% of the compounds were deemed good by at least 75% of the group!
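For a sense of what such agreement numbers mean in practice, here's a minimal sketch. The excerpt doesn't spell out the paper's similarity metric, so this uses Jaccard overlap of the selected sets as one plausible stand-in, with invented pick lists.

```python
# Sketch of an agreement calculation between two chemists' pick lists.
# Jaccard overlap is one common choice (the paper's exact metric may differ);
# the compound IDs and selections below are invented.

def jaccard(picks_a, picks_b):
    """Similarity of two selections drawn from the same compound list."""
    a, b = set(picks_a), set(picks_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

chemist_1 = {"cmpd-02", "cmpd-05", "cmpd-09", "cmpd-11"}
chemist_2 = {"cmpd-02", "cmpd-07", "cmpd-09", "cmpd-14", "cmpd-17"}

print(round(jaccard(chemist_1, chemist_2), 2))  # -> 0.29
```

Two chemists agreeing on only two compounds out of seven distinct selections lands right around the 0.28 average reported in the paper - which is to say, barely better than strangers.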

There's one other interesting result to consider: recall that the chemists were asked to state what factors they used in making their decisions. How did those compare to what they actually seemed to find important? (An economist would call this a case of stated preference versus revealed preference). The authors call this an assessment of the chemists' self-awareness, which in my experience, is often a swampy area indeed. And that's what it turned out to be here as well: ". . .every single chemist reported properties that were never identified as important by our SNB or RF classifiers. . .chemist 3 reported that several properties were important, but failed to report that size played any role during selections. Our SNB and RF classifiers both revealed that size, an especially straightforward parameter to assess, was the most important."

So, what to make of all this? I'd say that it's more proof that we medicinal chemists all come to the lab bench with our own sets of prejudices, based on our own experiences. We're not always aware of them, but they're certainly with us, "sewn into the lining of our lab coats", as Tom Wolfe might have put it. The tricky part is figuring out which of these quirks are actually useful, and how often. . .

Comments (19) + TrackBacks (0) | Category: Drug Assays | Life in the Drug Labs

November 26, 2012

Science Gifts: Chemistry Sets

Email This Entry

Posted by Derek

I've decided this year that I'll be posting some recommendations for science-themed gifts, since this is the season that people will be looking around for them. This article at Smithsonian has a look at the history of the good ol' chemistry set. As I mentioned in this old post, I had one as a boy, augmented by a number of extra reagents, some of which (potassium permanganate!) were in rather too high an oxidation state for a ten-year-old. I can't report that I did much in the way of systematic experiments with all my material, but I did have a good time with it. Once in a while some combination of reagents will remind me of the smell of those bottles, and I'm instantly transported back to the early 1970s, out in a corner of the shop building in back of our house. (Elemental sulfur is a component of that smell; the rest I'm not sure about).

The Smithsonian article mentions that Thames and Kosmos chemistry sets get good reviews from people who've seen them. So if you're in the market for a gift for the kids, that might be a line to try! The potassium permanganate I'll leave up to individual discretion. . .

Comments (26) + TrackBacks (0) | Category: Chemical News | Science Gifts

Chemistry Software Questions Here

Email This Entry

Posted by Derek

As mentioned the other day, this will be a post for people to ask questions directly to Philip Skinner (SDBioBrit) of Perkin-Elmer/Cambridgesoft. He's doing technical support for ChemDraw, ChemDraw4Excel, E-Notebook, Inventory, Registration, Spotfire, Chem3D, etc., and will be monitoring the comments and posting there. Hope it helps some people out!

Note - he's out on the West Coast of the US, so allow the poor guy time to get up and get some coffee in him!

Comments (75) + TrackBacks (0) | Category: Chemical News | In Silico

An Engineered Rhodium-Enzyme Catalyst

Email This Entry

Posted by Derek

I don't know how many readers have been following this, but there's been some interesting work over the last few years in using streptavidin (a protein that's an old friend of chemical biologists everywhere) as a platform for new catalyst systems. This paper in Science (from groups at Basel and Colorado State) has some new results in the area, along with a good set of leading references. (One of the authors has also published an overview in Accounts of Chemical Research). Interestingly, this whole idea seems to trace back to a George Whitesides paper from back in 1978, if you can believe that.

(Strept)avidin has an extremely well-characterized binding site, and its very tight interaction with biotin has been used as a sort of molecular duct tape in more experiments than anyone can count. Whitesides realized back during the Carter administration that the site was large enough to accommodate a metal catalyst center, and this new paper is the latest in a string of refinements of that idea, this time using a rhodium-catalyzed C-H activation reaction.
[Image: avidin%20rhodium.jpg]
A biotinylated version of the catalyst did indeed bind streptavidin, but this system showed very low activity. It's known, though, that the reaction needs a base to work, so the next step was to engineer a weakly basic residue nearby in the protein. A glutamate sped things up, and an aspartate even more (with the closely related asparagine showing up just as poorly as the original system, which suggests that the carboxylate really is doing the job). A lysine/glutamate double mutant gave even better results.

The authors then fine-tuned that system for enantioselectivity, mutating other residues nearby. Introducing aromatic groups increased both the yield and the selectivity, as it turned out, and the eventual winner was run across a range of substrates. These varied quite a bit, with some combinations showing very good yields and pretty impressive enantioselectivities for this reaction, which has never until now been performed asymmetrically, but others not performing as well.

And that's the promise (and the difficulty) with enzyme systems. Working on that scale, you're really bumping up against individual parts of your substrates on an atomic level, so results tend, as you push them, to bin into Wonderful and Terrible. An enzymatic reaction that delivers great results across a huge range of substrates is nearly a contradiction in terms; the great results come when everything fits just so. (Thus the Codexis-style enzyme optimization efforts). There's still a lot of brute force involved in this sort of work, which makes techniques to speed up the brutal parts very worthwhile. As this paper shows, there's still no substitute for Just Trying Things Out. The structure can give you valuable clues about where to do that empirical work (otherwise the possibilities are nearly endless), but at some point, you have to let the system tell you what's going on, rather than the other way around.

Comments (4) + TrackBacks (0) | Category: Chemical Biology | Chemical News

November 23, 2012

Chemistry Software Questions Answered Monday

Email This Entry

Posted by Derek

After that ChemDraw post from a few days ago, I had some contact from Philip Skinner, one of the Perkin-Elmer employees who helps support their chemical software (ChemDraw, ChemDraw4Excel, E-Notebook, Inventory, Registration, Spotfire, Chem3D, and so on). He's agreed to hang around on Monday here at the site to answer whatever questions people might have about the programs - I'll start a post on the subject, and he'll handle things in the comment thread. So if you have some technical or usage questions for those programs, be sure to stop by!

+ TrackBacks (0) | Category: Blog Housekeeping

More on That Crowdsourced CNS Research

Email This Entry

Posted by Derek

I wanted to mention that the crowdfunded CNS research that I mentioned here is now in its final 48 hours for donations. Money seems to be picking up, but it'll be close to see if they can make their target. If you're interested, donations can be made here.

Comments (2) + TrackBacks (0) | Category: The Central Nervous System