In the Pipeline
Academia (vs. Industry)


August 22, 2005

Mutual Suspicions

I'm not saying these are all true, or true all the time. But here are three things that industrial pharma researchers tend to believe about academic ones:

1. They talk too darn much. Don't even think about sharing any proprietary material with them, because it'll show up in a PowerPoint show at their next Gordon conference. How'd that get in there?

2. They wouldn't know a real deadline if it crawled up their trouser legs. Just a few weeks, just a few months, just a couple of years more and they'll have it all figured out. Trust 'em.

3. They have no idea of how hard it is to develop a new compound. First compound they make that's under a micromolar IC50, and they think they've just discovered Wonder Drug.

And (fair's fair), here are three things that academic researchers tend to believe about industrial ones:

1. They have so much money that they don't know what to do with it. They waste it in every direction, because they've never had to fight for funding. If they had to write grant applications, they'd faint.

2. They wouldn't know basic research if it bonked them on the head. They think everything has to have a payoff in (at most) six months, so they only discover things that are in front of their noses.

3. They're obsessed with secrecy, which is a convenient way to avoid ever having to write up anything for publication. They seem to think patent applications count for something, when any fool can send one in. Try telling Nature that you're sending in a "provisional publication", details to come later, and see how far that gets you.

August 09, 2005

Differences Between Academia and Industry, Pt. 4

You hear an awful lot about teamwork when you're in industry. (Personally, my fist clenches up whenever I hear the phrase "team player", but perhaps that's just me.) But there's a bit of truth in all this talk, and it's something that you generally don't encounter during graduate training.

As a chemistry grad student, you're embedded in a chemistry department, and most outside groups will either be irrelevant or there to service things for you. Getting along with people outside your immediate sphere is useful, but not so useful that everyone makes the effort. But pharmaceutical companies have a lot of different departments, and they're all pretty much equal, and they are all supposed to get along. You've got your med-chem, your pharmacology, the in vivo group (or groups, who may be stepping on each other's toes), metabolism, PK, toxicology, formulations. . .as a project matures, everybody gets dragged in.

These other folks do not see themselves, to put it mildly, as being put on earth to service the medicinal chemistry group. They are very good at detecting the scent of that attitude, and will adjust theirs accordingly. (Some of them already have filed chemists in the "necessary evil" category.) For the most part, no one is supposed to be able to pull rank on anyone else, so in order to get things done, you'll have to play nicely with others.

Not everyone figures this out. I watched someone once whose technique of speeding up the assay results for his compounds was to march down to the screening lab and demand to know where his procreating numbers were, already. No doubt he thought of himself as a hard-hitting, take-charge kind of guy, but the biologists thought of him, unsurprisingly, as a self-propelled cloaca. His assay submissions automatically got moved to the "think about it until next Tuesday" pile, naturally.

Earlier entries in the series can be found here.

April 20, 2005

Sneaking Out for an Interview

There was a good question asked in the comments to the previous post on first job interviews: what do you talk about when you work at one company and you're interviewing at another?

Well, I've done that myself, more than once (note to my current co-workers: not in the last few years, folks.) And it can be tricky. But there are some rules that people follow, and if you stay within their bounds you won't cause any trouble. That's not to say that my managers wouldn't have had a cow if they'd seen my old interview slides at the time, but I was at least in the clear legally. Here's how you make sure of that:

First off, it would be best if you could confine your interview talk to work that's been published in the open literature. That stuff is, by definition, completely sterilized from an intellectual property standpoint, and you can yammer on about it all day if you want. The downside is that published work tends to be pretty ancient stuff by the time it shows up in a journal, and you may have done a lot more interesting things since then. (The other downside is that published projects are almost always failed projects.) Work that's appeared in issued patents is also bulletproof, of course, but it suffers from the same time-lag disadvantages.

Second best is work that's appeared in patent applications. This stuff hasn't been blessed by the patent office yet, so things could always change, but it's at least been disclosed. When you talk about it, you're not giving away anything that couldn't have already been downloaded and read. (Of course, you do have to resist the temptation to add lots of interesting details that don't appear in the application.)

If you've at least filed the applications, then you can still be sort of OK, since they're going to publish in a few months, anyway. This is a case-by-case thing. If the company you're interviewing at is competing with you in that very field, you'd better not give them a head start. But if you're talking antivirals at a company that does nothing but cardiovascular and cancer, you should be able to get away with it. It would be best if you didn't disclose full structures - leave parts of the molecules cut off as big "R" groups and just talk about the parts that make you look like the dynamic medicinal chemist you are.

The worst case is "none of the above." No published work worth talking about, no patent applications, no nothing. I actually did go out and give an interview seminar under those conditions once, and it was an unpleasant experience. I had to talk about ancient stuff from my post-doc, and it was a real challenge convincing people that I knew what was going on in a drug company. I don't recommend trying it.

But I don't recommend spilling the beans in that situation, either. I've seen a job interview talk where it became clear that the speaker was telling us more than he really should have, and we all thought the same thing: he'll do the same thing to us if he gets a job here. No offer.

April 19, 2005

Getting A Job

I've been seeing quite a few candidate seminars recently, so allow me to pass on some advice to those of you out on the first-job-in-the-drug-industry trail.

First off, some presentation tips: Speak up, if possible. I hear ten too-soft seminars for every too-loud one. Don't give your talk to the screen - either the one on your laptop or the one on the wall. Give it to the people in the room. Look up, turn around, do what you need to do to give them the sense that you're passing information on to them. Find a way to sound somewhere between the extremes of here-is-my-script and gosh-I-don't-remember-this-slide.

As for that information, slides in a scientific presentation should have a medium amount of information on them. A whole slide with one big reaction on it is OK during the introduction, but you'd better fill things out a bit as you move on in the talk. Your audience can tell if you're padding things out.

But don't make the opposite error, putting all your information on one slide in One Big Table. You might think it looks more impressive that way, but it's just irritatingly illegible and uninterpretable. Spread those big data heaps out a bit into coherent piles - put all the aliphatic examples on a slide, followed by the aromatic ones, and so on. You'll find more things to talk about that way, too.

Be honest. If you have to come in with a thin talk, for whatever reason, admit it to yourself and be prepared to admit it in some fashion to your audience. Find some ways to show them that you know more than your slides can illustrate. And don't try to pretend that your results are groundbreaking and exciting, unless they really, really are. Exciting results usually speak for themselves, and your audience will know 'em when they see 'em.

Be prepared for the obvious. If you put a weird reaction up on the screen, someone is going to ask you about the mechanism. If you have some unusual results in a series, someone's going to ask you why you think they came out that way. Be ready with some ideas - it can be fine to not know the answer yet, as long as you've shown that you've thought about what the answer might be. Looking unprepared for down-the-middle pitches like these will get you crossed off the list very quickly.

And look as if you can learn. No one comes into the drug industry knowing what they really need to know. It comes with experience, and you need to make it clear that you're the sort of person that experience is not wasted on.

That should help. I'll settle for a fee of 10% of your first year's salary, OK?

March 30, 2005

More on Question Four

I thought I'd briefly explain one of my "Ten Questions" from the other day. The old-fashioned qualitative organic tests that I mentioned in #4 are things that were used in the 1960s and before to identify classes of compounds. Various brews can give you color indicators for the presence of double bonds, methyl ketones, aldehydes and the like. Some of them are quite dramatic - Tollens reagent, for example, suddenly deposits a silver mirror layer (scroll down on that link to see it) on the inside of the flask when it goes right.

But no one uses this stuff any more. No one at all, at least not if they can help it. Modern methods like NMR and routine HPLC/mass spectrometry have completely destroyed the usefulness of the old chemical tests, because you can now find out far more about your compound with little or no destruction of the sample.

Some undergraduate courses apparently still have these reactions in their curricula, and the only reason I can see is inertia. I've heard rationalizations about using them to teach reaction mechanisms and so on, but you can do that just as easily with reactions that real chemists actually run in the real world. And why wouldn't you? If you're a student that's been asked to run a battery of qualitative organic tests, you should ask for a refund of your tuition. You're being had.

February 24, 2005

Getting a Faster PhD?

Being a harmless science blogger, I've stayed out of the whole Harvard/Summers/women-versus-men tar pit. (Proof that I don't spend all my time fishing for traffic, as if posts on patent law weren't enough evidence already.) If you want, you can find more discussion of that controversy than you could want on any of the current-affairs blogs. But, still, I was struck by a comment from Virginia Postrel. She's discussing what might be done to increase the female presence in the sciences, given that biological clocks for reproduction work very differently for women and men (i.e., fathering a child at 45 is a lot easier than getting pregnant at that age. Neither Virginia nor I make any claims about the wisdom of doing either one; we're just talking biological feasibility):

"If, however, you spend six years in grad school and another two as a postdoc, you'll be 30 when you get your first tenure-track post--and that's assuming you don't work between college and grad school. I don't have the numbers, but science training is notorious for stretching out the doctoral/postdoc process, in part because the researchers heading labs benefit from having all that cheap, talented help. Female scientists who want kids are in trouble, even assuming they have husbands who'll take on the bulk of family responsibilities."

Fortunately, that long a stint in academia is unusual by chemistry standards, but molecular biology is notorious in just the way she's talking about. I've seen biology postdoctoral positions break up marriages, because the other partner eventually just wanted to finally, finally move on with life. Her suggested remedies?

"So, if a university like Harvard wants to foster the careers of female scientists, this is my advice: Speed up the training process so people get their first professorial jobs as early as possible--ideally, by 25 or 26. Accelerate undergraduate and graduate education; summer breaks are great for students who want to travel or take professional internships, but maybe science students should spend them in school. Penalize senior researchers whose grad students take forever to finish their Ph.D.s. Spend more of those huge endowments on reducing (or eliminating) teaching assistant loads and other distractions from a grad student's own research and training."

I got my first real PhD-level job at 27, after a year's post-doc, but that's a year or so younger than average for organic chemistry. I spent my undergraduate summer breaks doing research internships (of greater and lesser value), but I should make clear to those outside the field that graduate students in the sciences already work all through the summer. When I was in grad school, we watched the law students across the street pack up and leave in the spring while we cranked away in the lab days, nights, weekends, and holidays. I treasure a memo in my files from the chemistry department head, pointing out that the university vacation calendar did not apply to grad students - and he wasn't just talking about summers, of course. Do not, the memo warned, attempt to take all these holidays, things with names like "spring break", even though you may hear people talking about them.

As for Virginia's other prescriptions, I think penalizing slowpoke professors is a great idea. I know that some schools talk about doing this, but I've never seen any of them follow through. I think that the inverse idea, rewarding those research groups with a high percentage of students finishing on time, would be worth looking into as well. There are plenty of groups that could use a better work ethic - not in terms of the number of hours put in, but in terms of making sure that everything the students do is devoted to the great and holy cause of getting the hell out of graduate school. That's something you should do on general principles, man or woman, whether you plan to start a family or not. Grad school is for getting through, not for lingering.

Reducing TA assignments would also help. I know that many professors, if they have enough grant money, try to get their students out of teaching assistant positions as early as the university will let them (I did one year of it, the minimum.) But if you work for someone without as much of the ready cash, you can be TA-ing until your last year, and in an increasingly bitter mood about it, too.

Speeding up graduate education can be done. You don't want to turn out a bunch of unprepared losers, but as far as I can see, the system we have now does that anyway - just more slowly. It's true that real research projects take time - you're never going to get well-trained chemistry PhDs out the door in two and a half years. But you shouldn't be expecting five and six years out of people as the norm.

February 10, 2005

The Bones of the World

These two posts (here and here) over at Uncertain Principles are well worth reading if you like discussions of the divide between people who understand science and people who don't. Chad Orzel, being a physicist, instantly translates "doesn't understand science" to "doesn't understand math", which is fair enough, especially for physics. His analogy to the language of critical theory, as found in English literature classes and the like, has threatened to turn the comments threads for both posts into debates about that instead, but Chad's doing a good job of trying to keep things on topic.

What he's wondering about, from his academic perspective, is how to teach people about science if they're not scientists. Can it really be done without math? He's right that a fear of mathematics isn't seen as nearly as much of a handicap as it really is, and he's also right that physics (especially) can't truly be taught without it. But I have to say that I think that a lot of biology (and a good swath of chemistry) can.

Or can they? Perhaps I'm not thinking this through. It's true that subjects like organic chemistry and molecular biology are notably non-mathematical. You can go through entire advanced courses in either field without seeing a single equation on a blackboard. But note that I said "advanced". I can go for months in my work without overtly using mathematics, but my understanding of what I'm doing is built on an understanding of math and its uses. It's just become such a part of my thinking that I don't notice it any more.

Here are some examples from the past couple of weeks: a colleague of mine spoke about a reaction that goes through a reactive intermediate, an electrically charged species which is in equilibrium with a far less reactive one (which doesn't do much at all.) That equilibrium is hugely shifted toward the inert one, but pretty much all the product is found to have gone through the path that involves the minor species. That might seem odd, but it's not surprising at all to someone who knows organic chemistry well. A less reactive species is, other things being equal, usually more energetically stable than a more reactive one, and the more stable one is (fittingly) present in greater amount. But since the two can interconvert, when the more reactive one goes on to the product, it drains off the less reactive one like opening a tap. There's a good way to sketch this out on a napkin, where the energy of the system is the Y coordinate of a graph - anyone who's taken physical chemistry will have done just that, and plenty of times.
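For anyone who wants to see that tap-draining effect in numbers rather than on a napkin, here's a minimal kinetic sketch. The rate constants are invented for illustration - this isn't my colleague's actual system - but the shape of the result is general:

    # Toy kinetics: A <-> B, with the equilibrium heavily favoring the
    # unreactive species A, plus B -> P. B carries essentially all the flux.
    k_AB = 0.01   # A -> B (slow, so B stays scarce)
    k_BA = 1.0    # B -> A (fast)
    k_BP = 0.5    # B -> product

    A, B, P = 1.0, 0.0, 0.0
    dt = 0.01
    for _ in range(200_000):          # crude Euler integration out to t = 2000
        dA = -k_AB * A + k_BA * B
        dB = k_AB * A - k_BA * B - k_BP * B
        A, B, P = A + dA * dt, B + dB * dt, P + k_BP * B * dt

    print(f"A = {A:.3f}   B = {B:.1e}   P = {P:.3f}")

B never amounts to more than about half a percent of the mixture, yet nearly all of A ends up as P - drained straight through the minor species, just as the napkin drawing says.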

Here's another: a fellow down the hall was telling us about a reaction that gave a wide range of products. Every time he ran one of these, he'd get a mix, and very minor changes in the structure of the starting material would give you very different ratios of the final compounds. That's not too uncommon, but it only happens in a particular situation, when the energetic pathways a reaction can take are all pretty close to each other. The picture that came to my mind instantly was of the energy surface of the reaction system. Now, that's not a real object, but in my mental picture it was a kind of lumpy, rubbery sheet with gentle hills and curving valleys running between them. Rolling a ball across this landscape could send it down any of several paths, many of them taking it to a completely different resting place. Small adjustments from underneath the sheet (changing the height and position of the starting point, or the curvature of the hills) would alter the landscape completely. Those are your changes in the starting material structure, altering the energy profile of all the chemical species. A handful of balls, dropped one after the other, would pile up in completely different patterns at the end after such changes - and there are your product ratios.
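That rubbery sheet is easy to fake up on a computer, too. Here's a toy version - a double-well "landscape" with random jitter standing in for thermal motion, and a tilt parameter standing in for those small structural changes (all the numbers are invented):

    import math, random

    def slope(x, tilt):
        # derivative of the toy energy surface E(x) = x**4 - 2*x**2 + tilt*x
        return 4 * x**3 - 4 * x + tilt

    def drop_balls(tilt, n=500, steps=2000, dt=0.01, jitter=0.3):
        left = 0
        for _ in range(n):
            x = random.gauss(0.0, 0.1)          # start near the central ridge
            for _ in range(steps):
                x += -slope(x, tilt) * dt + jitter * math.sqrt(dt) * random.gauss(0, 1)
            if x < 0:
                left += 1
        return left / n

    for tilt in (0.0, 0.2, 0.5):                # small distortions of the sheet
        print(f"tilt {tilt}: {drop_balls(tilt):.0%} of the balls end in the left valley")

Nudge the tilt by a fraction and the final pile-up swings from a 50:50 split to heavily one-sided - which is about what those product ratios do in the lab.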

Well, as you can see, I can explain these things in words, but it takes a few paragraphs. But there's a level of mathematical facility that makes it much easier to work with. For example, without a grounding in basic mathematics, I don't think that that picture of an energy surface would even occur to a person. I believe that a good grasp of the graphical representation of data is essential even for seemingly nonmathematical sciences like mine. If you have that, you've also earned a familiarity with things like exponential growth and decay, asymptotes, superposition of curves, comparison of the areas under curves and other foundations of basic mathematical understanding. These are constant themes in the natural world, and unless they're your old friends, you're going to have a hard time doing science.

That said, I can also see the point of one of his commentators that for many people, it would be a step up to be told that mathematics really is the underpinning of the natural world, even if some of the details have to be glossed over. Even if some of them don't hit you completely without the math, a quick exposure to, say, atomic theory, Newtonian mechanics, the laws of thermodynamics, simple molecular biology and the evidence for evolution would do a lot of folks good, particularly those who would style themselves well-educated.

January 17, 2005

Don't Become A Scientist?

Over at Sean Carroll's "Preposterous Universe", there's a post on a physicist's advice to students who want to become scientists. Don't even try, he tells them. No jobs, no money, no thrill, no hope. It's depressing stuff. Carroll is a physicist himself, so he has quite a bit to say on the topic. (Link found via yet another physicist.)

Reading the whole thing, though, I was struck by how far from my own experience it is. The drug industry's going through a rough patch, for sure, but there are companies still hiring. And although we've had some layoffs, and more are in the offing, there are still thousands upon thousands of us out here. We're gainfully employed, working on very difficult and challenging problems with large real-world implications. (And hey, we're getting paid an honest wage while we're doing it, too.)

That's when it hit me: the article that Carroll's referring to isn't warning people away from becoming scientists. It's warning them away from becoming physics professors. Very different! Those categories intersect, all right, but they're not identical. There are other sciences besides physics (no matter what Rutherford said), and in many of them, there's this other world called industry. (The original article doesn't even mention it, and Carroll disposes of it in his first paragraph.)

Some of this is (doubtless unconscious) snobbery - academic science is pure science, after all, while industry is mostly full of projects on how to keep cat litter from clumping up in the bag or finding new preservatives for canned ravioli. Right? And some of it reflects the real differences between physics and chemistry. To pick a big one, research (and funding) in physics has been dominated for a long time by some Really Big Problems. The situation's exacerbated by the way that many of these big problems are of intense theoretical but hazy practical interest.

I am not knocking them for that, either, and I'll enter my recent effusions about the weather on Titan as evidence. I'd love to hear that, say, an empirically testable theory of quantum gravity has made the cut. But that kind of work is going to be the domain of academia. I think that it's a sign of an advanced civilization to work on problems like that, but advanced civilization or not, it's not likely to be a profit center. Meanwhile, chemistry doesn't have any Huge Questions at the moment, but what it has are many more immediately applicable areas of research. Naturally, there are a lot more chemists employed in industry (working on a much wider range of applications.)

Many of the other differences between the fields stem from that basic one. Chemistry has a larger cohort of the industrially employed, so the academic end of the business, while not a jolly sight, isn't the war of all against all that you find in physics, astronomy, or (the worst possible example) the humanities. The American Chemical Society's idea of worrisome unemployment among its members would be clear evidence of divine intervention in many other fields. So those of us who get paid, get paid pretty well. And we don't do three, four, five-year post-docs, either, which is something you find more of in fields where there aren't enough places for everyone to sit down. Two years, maximum, or people will think that there's something wrong with you.

All of this places us, on the average, in a sunnier mood than the physics prof who started this whole discussion (whose article, to be sure, was written four or five years ago.) I was rather surly during grad school, but for the most part I'm happy as the proverbial clam. As I've said, if someone had come to me when I was seven years old and shown me the work I do now, I would have been overjoyed. Who can complain?

January 05, 2005

Like Moving Furniture Across a Tightrope

You know what I don't miss about chemistry after years in the drug industry? Big, long, multi-step syntheses. Oh, we'll gear up to do eight- and ten- and thirteen-steppers here, even though some of those steps are just things like hydrolyzing methyl esters, stuff that blindfolded grannies should be able to do. But I'm happy to leave the mighty academic natural product synthetic schemes behind, the ones where step fourteen finds you just getting warmed up.

As I've mentioned here before, I did that kind of thing in graduate school, and I swear it's scarred me for life. I pulled the plug on my total synthesis at step 27, about six steps short of the end (that is, if everything had worked perfectly - fat chance.) I've never regretted it. The benefits of getting out of grad school are huge, spacious, and well-appointed compared to the benefits of being able to say that I finished my natural product. Any of my readers in grad school, take note.

Long linear sequences are a slog. You have to start them in the largest buckets you can find, because you're never, ever going to have enough material. Now, we do large scale work in the drug industry, yes indeed, but that's because we intend to finish on large scale. If you're going to do six-week toxicity testing, you'd better have a fine keg of material on hand before you start. But those academic syntheses need huge amounts at the beginning in order to have anything at all by the time they finish. You work until you can't handle or characterize the stuff any more, then you trudge back down the mountain and start porting the loads back up the trail.

An example: I got to the point where I needed to take an optical rotation on the material from about step 25 or so. For those outside the field, this is an analytical technique that involves shining polarized light through a solution of your compound. If it's not an even mix of left-handed and right-handed isomers, that is to say, if there's some chiral character to the sample, the light will rotate. The degree of rotation can be used as an indicator of compound purity - I'm tempted to add "if you're a fool." They're not the most reliable numbers in the world, because some things just don't make the light twist much. And in those cases, a small amount of an impurity that rotates light like crazy will throw everything off. It's happened more than once.

Well, in my case, I loaded a half milligram or so of my precious stuff into the smallest polarimeter tube we had and jammed it into the machine. Hmm, I thought, a rotation of 0.00 degrees. A singular result, since I knew for certain that the molecule had six pure chiral centers. So I went back upstairs and loaded the whole batch into the tube, walking very carefully down the hall with this investment of several months of my life held in both sweaty hands. This time I got a specific rotation of about 1.2 degrees, which means that all those chiral carbons were roughly canceling each other out. Did I believe that number? Not at all! Did I put it in my dissertation? You bet! Gotta have a number, you know.
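A little arithmetic shows why that first reading came up all zeroes. Specific rotation is the observed rotation divided by the path length (in decimeters) times the concentration (in grams per mL). The sample amounts below are my guesses at this point - I don't remember the exact cell volume - but the orders of magnitude are what matter:

    def observed_rotation(specific_rotation, mg, mL, path_dm=1.0):
        c = (mg / 1000) / mL                    # concentration in g/mL
        return specific_rotation * path_dm * c

    # If the compound's true specific rotation really was about 1.2 degrees:
    print(observed_rotation(1.2, mg=0.5, mL=1.0))   # ~0.0006 deg: reads as 0.00
    print(observed_rotation(1.2, mg=20, mL=1.0))    # ~0.024 deg: barely readable

With a weakly rotating compound, half a milligram gives you a signal over a thousand times smaller than a degree, while a trace of some strongly rotating impurity can swamp the whole measurement - which is exactly the complaint above.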

And that's how you work - purifying things through increasingly tinier columns, collecting them in slowly shrinking vials, running all the instruments for longer and longer with the gain turned up higher and higher, trying to prove that it's really still in there and really still what it's supposed to be. Then it's back to the buckets. Never again!

November 09, 2004

Gumming Up the Amyloid Works

The October 29th issue of Science has an interesting article from a team at Stanford on a possible approach for Alzheimer's therapy. The dominant Alzheimer's hypothesis, as everyone will probably have heard, is that the aggregation of amyloid protein into plaques in the brain is the driving force of the disease. There's some well-thought-out dissent from that view, but there's a lot of evidence on its side, too.

So you'd figure that keeping the amyloid from clumping up would be a good way to treat Alzheimer's, and in theory you'd be correct. In practice, though, amyloid is extremely prone to aggregation - you could pick a lot of easier protein-protein interactions to try to disrupt, for sure. And protein-protein targets are tough ones to work on in general, because it's so hard to find a reasonable-sized molecule that can disrupt them. It's been done, in a few well-publicized cases, but it's still a long shot. Proteins are just too big, and in most cases so are the surfaces that they're interacting with.

The Stanford team tried a useful bounce-shot approach. Instead of keeping the amyloid strands off each other directly, they found a molecule that will cause another unrelated protein to stick to them. This damps down the tendency of the amyloid to self-aggregate. The way they did this was, by medicinal chemistry standards, simplicity itself. There's a well-known dye, the exotically named Congo Red, that stains amyloid very powerfully - which must mean that it has a strong molecular interaction with the protein. They took the dye structure and attached a spacer group coming off one end of it, and at the other end they put a synthetic ligand which is known to have high affinity for the FK506 binding protein (FKBP). That one is expressed in just about all cell types, and there are a number of small molecules that are known to bind to it.

The hybrid molecule does just what you'd expect: the Congo Red end of it sticks to amyloid, and the other end sticks to FKBP, which brings the two proteins together. And this does indeed seem to inhibit amyloid's powerful tendency for self-aggregation. What's more, the aggregates that do form appear to be less toxic when cells are exposed to them. It's a fine result, although I'd caution the folks involved not to expect things to make this much sense very often. That stitch-em-together technique works sometimes, but it's not a sure thing.

So. . .(and you knew that there was going to be a paragraph like this one coming). . .do we have a drug here? The authors suggest that "Analogs based on (this) model may have potential as therapeutics for Alzheimer's disease." I hate to say it, but I'd be very surprised if that were true. All the work in this paper was done in vitro, and it's a big leap into an animal. For one thing, I'm about ready to eat my own socks if this hybrid compound can cross the blood-brain barrier. Actually, I'm about ready to sit down for a plateful of hosiery if the compound even shows reasonable blood levels after oral dosing.

It's just too huge. Congo Red isn't a particularly small molecule, and by the time you add the linking group and the FKBP ligand end, the hybrid is a real whopper - two or three times the size of a reasonable drug candidate. The dye part of the structure has some very polar sulfonate groups on it, as many dyes do, and they're vital to the amyloid binding. But they're just the sort of thing you want to avoid when you need to get a compound into the brain. No, if this structure came up in a random screen in the drug industry, we'd have to be pretty desperate to use it as a starting point.
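For readers who want numbers to go with "a reasonable drug candidate": the standard quick filter in the industry is Lipinski's rule of five - flags go up above molecular weight 500, calculated logP 5, five hydrogen-bond donors, or ten acceptors, and for a CNS compound you'd want to sit well inside those limits, with no permanently charged groups like sulfonates. Here's a sketch of that check using the open-source RDKit toolkit (the aspirin SMILES is just a stand-in; I'm not going to draw out Congo Red or the hybrid here):

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def rule_of_five_flags(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return {
            "MW > 500": Descriptors.MolWt(mol) > 500,
            "logP > 5": Descriptors.MolLogP(mol) > 5,
            "H-bond donors > 5": Lipinski.NumHDonors(mol) > 5,
            "H-bond acceptors > 10": Lipinski.NumHAcceptors(mol) > 10,
        }

    print(rule_of_five_flags("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin: no flags raised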

Science's commentary on the paper quotes a molecular biologist as saying that this approach shows how ". . .a small drug becomes a large drug that can push away the protein. . ." But that's wrong. You can tell he's from a university, just by that statement. I'm not trying to be offensive about it, but neither Congo Red nor the new hybrid molecule is a drug. Drugs are effective against a disease, and this molecule isn't going to work against Alzheimer's unless it's administered with a drill press. If that's a drug, then I must have single-handedly made a thousand of them. The distance between this thing and a drug is a good illustration of the distance between academia and industry.

To be fair, this general approach could have value against other protein-protein interaction targets. I think that it's worth pursuing. But I'd attack something other than a CNS disease, and I'd pick some other molecule than Congo Red as a starting point.

September 16, 2004

The NIH in the Clinic

OK, I couldn't resist. Let me reiterate that I completely admire the NIH's commitment to basic research; it's one of the real drivers of science in this country. But they're not a huge factor in clinical trials. Academia does more basic research than pharma; pharma does more clinical work than academia. Here are some statistics from a reader e-mail:

"As a person who was an NIH staffer (funding clinical trials, no less) and is now on the pharma side (mostly spending on manufacturing development; we will spend more on clinical trials as we get bigger), I have seen both sides.

Most of NIH spending is very far from clinical utility. Last time I checked (and it has been a while), more than 90% of NIH funds went to what most people would consider non-clinical research, e.g., studies of animals and cells, etc. (If the NIH was named by its major function, it would probably be called the National Institutes of Molecular Biology ;-) The reason NIH is able to claim that half of its money goes to 'clinical research' is that any study that involves a human or *human tissues* counts. So a bench study looking at receptors on human renal cells counts as 'clinical research.' The number of studies examining 'whole' humans is in the 5% range.

On the other hand, pharma, as you know, spends a lot of money on research with legal (protecting patent claims), manufacturing (cGMP issues, etc.) and marketing goals that don't necessarily help anyone's health.

Regarding the clinicaltrials.gov numbers, by my reckoning the 8000 NIH studies and the 2400 'industry' studies probably represent about the same investment in *therapeutic* clinical trials. If you break down the NIH trials, about 1800 (22%) are Phase I, 3000 (37%) are Phase II, 1100 (14%) are Phase III, and the rest (2150, 27%) are observational and other. (If you want to check, I did a search within the results for the appropriate phrases and subtracted from the total for the remainder). Figures for industry are 460 (19%) Phase I, 1060 (44%) Phase II, 770 (32%) Phase III, and 133 (5%) other.

In my experience each phase of clinical trials multiplies costs by about 10 times (e.g., Phase I = X; Phase II = 10X, Phase III = 100X), so the clinicaltrials.gov figures imply that the costs of Phase I, II, and III trials funded by industry are over 80% of those funded by NIH (costs are overwhelmingly driven by Phase III trials). And this is despite the close to 100% capture of NIH trials versus the unknown percentage capture of industry trials that you noted in your post."
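Since the reader invites checking, here's the two-minute recomputation of those phase breakdowns from the raw counts (they come out just as quoted):

    nih = {"Phase I": 1800, "Phase II": 3000, "Phase III": 1100, "other": 2150}
    industry = {"Phase I": 460, "Phase II": 1060, "Phase III": 770, "other": 133}

    for name, counts in (("NIH", nih), ("industry", industry)):
        total = sum(counts.values())
        shares = ", ".join(f"{k} {v / total:.0%}" for k, v in counts.items())
        print(f"{name} ({total} trials): {shares}")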

September 14, 2004

One More On Basic Research and the Clinic

OK, one more on this topic before moving on to other things for a while. The Bedside Matters medblog has a better roundup of the reactions to my post than I could have done myself. And "Encephalon" there also has one of the longer replies I've seen to my initial post, worth reading in full.

I wanted to address a few of the issues that it raises. Encephalon says:

"Dr. Lowe makes his point with the sort of persuasive skill one suspects is borne of practice - I shouldn't be surprised if he has had to make his case to the unbelieving on a very regular basis. And that case is this: that pharmaceutical companies do in fact spend enormous sums of money in developing the basic science breakthroughs first made in academic labs to the point where meaningful therapeutic products (ie, '$800 mil' pills) can be held in the palms of our doctors' hands, ready to be dispensed to the next ailing patient.

So far as that claim goes, I don't think any reasonably informed individual would dispute it. . ."

It tickles me to be called "Doctor" by someone with a medical degree. On the flip side, though, it's a nearly infallible sign of personality problems when a PhD insists on the honorific. And I appreciate the compliment, but it's only fairly recently that I've had to defend this point at all; I didn't even know it was a matter of debate. The thing is, you'd expect that a former editor of the New England Journal of Medicine would be a "reasonably informed individual", wouldn't you? I don't think we can take anything for granted here. . .

He then spends a lot of time on the next point:

"It is a myth, and I would argue a more prevalent one than the myth that Big Pharma simply leaches off government-funded research, that the NIH does little to bring scientific breakthroughs to the bedside (once they have made them at the bench). . .Using arguably one of the best (databases) we've got (the NIH's ClinicalTrials.gov) we get the following figures: of the 15,466 trials currently in the database, 8008 are registered as sponsored by NIH, 380 by 'other federal agency', 4656 by 'University/Organization', and 2422 by Industry. While I am suspicious that the designation 'university/organization' is not wholly accurate, and may represent funding from diverse sources, and while the clinical trials in the registry are by no stretch of the imagination only pharmaceutical studies, the 8388 recent trials sponsored by Federal agencies are no negligeable matter. I think Dr. Lowe will agree."

I agree that NIH has a real role in clinical trials, but I don't think it's as large as these figures would make you think. Clinicaltrials.gov, since it's an NIH initiative, is sure to include everything with NIH funding, but there are many industry studies that have never shown up there. (And I share the scepticism about the "University" designation.) When the Grand Clinical Trial Registry finally gets going, in whatever form it takes, we can get a better idea of what's going on. I also think that if we could somehow compare the size and expense of these various trials, the Pharma share would loom larger than the absolute number of trials would indicate.

Encephalon goes on to worry that I'm denigrating basic research: "The impression a lay person would get reading Dr. Lowe's 'How it really works' is that basic science work done by the NIH is really quite trivial. I don't think he meant this. . ."

Believe me, I certainly didn't. Without basic biological studies, there would be nothing for us to get our teeth into in the drug industry. If we had to do them all ourselves, the cost of the drugs we make would be vastly greater than it is now. It's like the joking arguments that chemists and pharmacologists have in industry: "Hey, you guys wouldn't have anything to work on if it weren't for us chemists!" "Well, you'd never know if anything worked if it weren't for us, y'know!" Academia and industry are like that: we need each other.

September 13, 2004

A Real-World Can O' Worms

Here's another example of academia and industry, and how it can be hard to divide out the credit. There's a family of nuclear receptor proteins known as PPARs, a very important (and difficult to unravel) group. The whole field got started years ago, when it was noticed that some compounds had a very particular effect on the livers of rats and mice: they made the cells in them produce a huge number of organelles called peroxisomes.

Eventually, a protein was found that seemed to mediate this effect, and it was called the Peroxisome Proliferator-Activated Receptor, thus PPAR. It was thought that there might be some other similar proteins. At this point, their functions were completely unknown.

Meanwhile, off at a Japanese drug company, a class of compounds (thiazolidinediones) had been found to lower glucose in diabetic animal models. The original plan, if I recall correctly, had been to stitch together a dione compound with a Vitamin E structure, and as it turns out the reasoning behind this idea was faulty in every way. But the Japanese group had hit on a whole series of interesting structures that lowered glucose in a way that had never been seen before. No one had a clue about how they worked, but all sorts of theories were proposed, tested, and discarded.

The activity was unusual enough that many other drug companies jumped into the thiazolidinedione game. It turned out, as various companies sought out patentable chemical space, that the Vitamin-E-like side chain wasn't essential, but the thiazolidinedione head group was a good thing to have. (It's since been superseded.) The Japanese group was in the lead, with a compound that was eventually named troglitazone, but SmithKline Beecham (as it was then) and Eli Lilly weren't far behind, with rosiglitazone and pioglitazone. A number of contenders from other companies fell out of the race for various reasons. The three left standing went all the way into human trials, and still no one had any idea of how they worked.

We're up to the early 1990s now. Off in another part of the scientific world, a number of research groups were digging into PPAR biology. It looked like there were three PPARs, designated alpha, gamma, and delta (known as PPAR beta in Europe.) They all had binding sites that looked as if small molecules in the cell should fit into them, but no one had really established what those might be. All three seemed as if they might be important in pathways dealing with fatty acids, not that that narrows it down very much.

As best I can reconstruct things, in a very short period in the mid-1990s, it became clear that PPAR gamma was a big player in fat cells (adipocytes). Many labs were working on this, but two academic groups that were very much in the thick of things (and still are) were those of Bruce Spiegelman from Harvard and Ron Evans from the Salk Institute. Then a group at Glaxo Wellcome (as it was then), also doing research in the field, found out that the glitazone drugs were actually ligands for PPAR-gamma, and immediately hypothesized that it was the mechanism by which they lowered glucose. From what I've been told, Glaxo's management didn't immediately believe this, but it turned out to be right on the money. Glaxo is still a major player in the PPAR world, turning out a huge volume of both basic and applied research.

All three PPAR-gamma drugs made it to market. So, who gets the credit? It's hard enough to figure out even inside the academic sphere - the two groups I mentioned had plenty of competition here and abroad, and insights came from all over. But (as far as I can tell) none of them were the first to make the connection between PPAR-gamma and diabetes therapy. So does Glaxo get the credit? (They do have a few key patents to show for it all.)

And if we're doling out credit, who's going to line up for blame? As it happened, the very first PPAR-gamma compound to market, troglitazone, showed some unexpected liver toxicity once it found a broader audience. It was eventually pulled from the market in a hail of lawsuits. Rosiglitazone and pioglitazone (Avandia and Actos, by brand) are still out there, having survived the loss of the first compound, but not without a period of suspicion and breath-holding.

Any more troubles to share? Later PPAR drugs have shown all kinds of weird effects, including some massive clinical failures late in human trials. The money that's been made from the two on the market probably hasn't made up yet for all the cash that the industry has spent trying to figure out what's going on, and the story takes on more complexity every year. (Glaxo, for their trouble, has never made a dime off one of their own PPAR compounds.)

It's to the point now that some companies are, it seems, throwing up their hands about the whole field, while others continue to plow ahead. And by now, the number of research papers from academia will make your head hurt. PPARs seem to be involved in everything you can imagine, from diabetes to cancer to wound healing, and who knows what else. The whole thing is going to keep a lot of people busy for a long time yet. And anyone who thinks they can clearly and fairly apportion the credit, the spoils, the blame and the Bronx cheers is dreaming.

September 12, 2004

How Much Basic Research?

My long cri de coeur last week continues to bring in a number of comments, which I appreciate. Matthew Holt of the Health Care Blog asks:

"How much money does the NIH spend on basic research and how much does the pharma business spend on it (and you can include development if you like)? I don't have these numbers but I suspect they are closer to each other than it would appear from a reader of your article who might think that it's about 90-10 on pharma's side."

Well, I hope that's not how I came across. I'm sure that more basic research goes on in academia, of course. That's what they're funded for, and what they're equipped for. Some basic work goes on in the drug industry, too, but most of our time and effort is spent on applied research. It's confusion about the differences between those two (or an assumption that the basic kind is the only kind that counts) that leads to the whole "NIH-ripoff" idea.

It's easy to get NIH's budget figures, but it's next to impossible to get the drug industry's. One good reason is that companies don't release the numbers, but there's a more fundamental problem. It would be hard to figure out even from inside a given company, with access to all the numbers, because you can easily slip back and forth between working on something that applies only to the drug candidate at hand and working on something that would be of broader use.

Some years ago, several companies (particularly some European ones) had "blue-sky" basic research arms that cranked away more or less independently of what went on in the drug development labs. I can think of Ciba-Geigy (pre-Novartis) and Bayer as examples, and I know that Roche funded a lot of this sort of thing, too. In the US, DuPont's old pharma division had a section doing this kind of thing as well. I'm not sure if anyone does this any more, though. In many cases, the research that went on tended to either be too far from something useful, or so close that it might as well be part of the rest of the company.

So without a separate budget item marked "basic research", what happens is that it gets done here and there, as necessary. I can give a fairly trivial example: at my previous company, I spent a lot of time making amine compounds through a reaction called reductive amination. I used a procedure that had been published in the Journal of Organic Chemistry, a general method to improve these reactions using titanium isopropoxide. It worked well for me, too, giving better yields in reactions that otherwise could be hard to force to completion.

The original paper on it came from a research group at Bristol-Myers Squibb. They had been looking for a way to get some of these recalcitrant aminations to go, and worked this one out. That is a small example of basic research - not on the most exalted scale, but still on a useful one. It's not like BMS had a group that did nothing but search for new chemical reactions, though. They were trying to make specific new compounds, applied research if ever there was any, but they had to invent a better way to do it.

Meanwhile, I needed some branched amines that this reaction wouldn't give me, and there wasn't a good way to make them. I thought about the proposed mechanism of the BMS reaction and realized that it could be modified as well. Adding an organometallic reagent at the end of the process might form a new carbon-carbon bond right where I needed it. I tried it out, and after a few tweaks and variations I got it to work. As far as I could see from searching the chemical literature, no one had ever done this in this way before, and we got a lot of use out of this variation, making a list of compounds that probably went into the low thousands.

When I was messing around with the conditions of my new reaction, trying to get it to work, I was doing it with intermediate compounds from our drug discovery program, and when the reactions produced compounds I submitted them for testing against the Alzheimer's disease target we were working on. Basic research or applied? Even though there are clear differences between the two, taken as classes, the border can be fuzzy. One's blue and one's yellow, but there's green in between.

Tomorrow I'll go over a more important example - it's pretty much basic research all the way, but untangling who figured out what isn't easy. My readers who work in science will be familiar with that problem. . .

One other thing, in response to another comment: I didn't go wild about the NIH argument because I'm trying to prove that drug companies are blameless servants of the public good or something. We're businesses, and we do all kinds of things for all kinds of reasons, which vary from the altruistic to the purely venal. You know, like they do in all other businesses. Nor is it, frankly, the largest or most pressing argument about the drug industry right now.

No, the reason I took off after it is that it's so clearly mistaken. Anyone who seriously holds this view is not, in my opinion, demonstrating any qualifications for being taken seriously. (And that goes for former editors of the New England Journal of Medicine, too, a position that otherwise would argue for being taken quite seriously indeed.) The "all-they-do-is-rip-off-academia" argument is so mistaken, and in so many ways, that it calls into question all the other arguments that a person advocating it might make. They are talking about the pharmaceutical industry, seriously and perhaps with great passion, but they do not understand what it does or how it works at the most basic level. Isn't that a bit of a problem? What other defects of knowledge or reasoning are waiting to emerge, if that one has found a home?

September 09, 2004

How It Really Works

So is this the attitude we're up against? Here's a thread on Slashdot on the clinical trial disclosure issue - titled, I note in light of yesterday's post, "Medical Journals Fight Burying of Inconvenient Research". My favorite verb again! The comments range from the insightful to the insipid (for another good reaction to the clinical trial controversy, go here.)

A comment to the original Slashdot item disparages the idea that NIH is the immediate source of all drugs, and recommends reading my site, both of which actions I appreciate. But the first response to that was:

"No, (NIH-funded labs) just do the basic research that results in the drug leads. The companies then do the expensive but scientifically easy trials and rake in all the money (and now it seems, the credit as well)."

Wrong as can be, and in several directions at once. In a comment below, blogger Sebastian Holsclaw urges that we take this kind of talk seriously because it's more widespread than we think. I'm afraid that he might be right. The problem is that many people don't seem to understand what it is that people like me do for a living. I think that there must be plenty who don't even grasp how science works in general. Allow me to go on for a while to explain the process - I'd appreciate any help readers can provide in herding the sceptics over to read it.

Try this: If Lab C discovers that the DooDah kinase (a name whose actual use I expect any day now) is important in the cell cycle, and Lab D then profiles its over-expression in various cancer cell lines, you can expect that drug companies will take a look at it as a target. Now, the first thing we'll do is try to replicate some of the data to see if we believe it. I hope that I'm not going to shock anyone by noting that not all of these literature reports pan out.

But let's assume that they do this time, making DooDah a possible cancer target. What then? If we decide that the heavy lifting has been done by the NIH-funded labs C and D, then what do we have so far? We have a couple of papers in the Journal of Biological Chemistry (or, if the authors are really lucky, Cell) that, put together, say that DooDah kinase is a possible cancer target. How many terminally ill patients will be helped by this, would you say? Perhaps they can read about these interesting in vitro results on their deathbeds?

What will happen from this point? Labs C or D may go on to try to see what else the kinase interacts with and how it might be regulated. What they will not do is try to provide a drug lead, by which I mean a lead compound, a chemical starting point for something that might one day be a drug. That's not the business these labs are in. They're not equipped to do it and they don't know how.

(Note added after original post): This is where the drug industry comes in. We will try to find such a lead and see if we can turn it into a drug. If you believe that all of what follows still belongs to the NIH because they funded the original work on the kinase, then ask yourself this: who funded the work that led to the tools that Labs C and D used? What about Lab B, who refined the way to look at the tumor cell lines for kinase activity and expression? Or Lab A, the folks that discovered DooDah kinase in the first place twenty-five years ago, but didn't know what it could possibly be doing? These things end up scattered across countries and companies. And all of these built on still earlier work, as all the work that comes after what I describe will build on it in turn. That's science, and it's all connected.

Here in a drug company, we will express the kinase protein - and likely as not we'll have to figure out on our own how to produce active enzyme in a reasonably pure form - and we'll screen it against millions of our own compounds in our files. We'll develop the assay for doing that, and as you can imagine, it's usually quite different than what you'd do by hand on the benchtop. Then we'll evaluate the chemical structures that seemed to inhibit the kinase and see what we can make of them.

Sometimes nothing hits. Sometimes a host of unrelated garbage hits. For kinases, these days, neither is usually the case - owing to medicinal chemistry breakthroughs achieved by various drug companies, let me add. So if we get some usable chemical matter, then I and my fellow med-chemists take over, modifying the initial lead to make it more potent, to increase its blood levels and plasma half-life when dosed in animal models, to optimize its clearance (metabolism by the liver, etc.), and to make it selective for only the target (or targets) we want it to hit. Often there are toxic effects for reasons we don't understand, so we have to feel our way out of those with new structures, while preserving all the other good qualities. It helps a great deal if the compounds exist in a form that's suitable for making into a tablet, and if they're stable to heat, air, and light. They need to be something that can be produced by the ton, if need be. And at the same time, these all have to be structures that no one else has ever described in the history of organic chemistry. To put it very delicately, not all of these goals are necessarily compatible.

I would love to be told how any of this comes from the NIH.

Now the real work begins. If we manage to produce a compound that does everything we want, which is something we can only be sure of after trying it in every model of the disease that we trust, then we put it into two-week toxicity testing in animals. Then we test in more (and larger) animals. Then we dose them for about three months. Large whopping batches of the compound have to be prepared for all this, and every one of them has to be exactly the same, which is no small feat. If we still haven't found toxicity problems, which is a decision based on gross observations, blood chemistry, and careful microscopic examination of every tissue we can think of, then the compound gets considered for human trials. We're a year or two past the time we picked the compound by now, depending on how difficult the synthesis was and how tricky the animal work turned out to be. No sign of the NIH.

The regulatory filing for an Investigational New Drug needs to be seen to be appreciated. It's nothing compared to the final filing (NDA) for approval to market (we're still years and years away from that at this point), but it's substantial. The clinical trials start, cautiously, in normal volunteers at low doses, just to see if the blood levels of the compound are what we think, and to make sure that there's no crazy effect that only shows up in humans. Then we move up in dose, bit by bit, hoping that nothing really bad happens. If we make it through that, then it's time to spend some real time and money in Phase II.

Sick patients now take the drug, in small groups at first, then larger ones. Designing a study like this is not easy, because you want to be damn sure that you're going to be able to answer the question you set out to. (And you'd better be asking the right question, too!) Rounding up the patients isn't trivial, either - at the moment, for example, there are not enough breast cancer patients in the entire country to fill out all the clinical trials for the cancer drugs in development to treat it. Phase II goes on for years.

If we make it through that, then we go on to Phase III: much, much larger trials under much more real-world conditions (different kinds of patients who may be undergoing other therapy, etc.) The amount of money spent here outclasses everything that came before. You can lose a few years here and never feel them go by - the money that you're spending, though, you can feel. And then, finally, there's regulatory approval and its truckload of paperwork and months/years of further wrangling and waiting. The NIH does not assist us here, either.

None of this is the province of academic labs. None of it is easy, none of it is obvious, none of it is trivial, and not one bit of it comes cheap. We're spending our own money on the whole thing, betting that we can make it through. And if the idea doesn't work? If the drug dies in Phase II, or, God help us all, in Phase III? What do we do? We eat the expense, is what we do. That's our cost of doing business. We do not bill the NIH for our time.

And then we go do it again.

August 25, 2004

Will the Uncommon Work for the Common Good?

Yochai Benkler of the Yale Law School has an interesting policy article in a recent issue of Science. It's on the "Problems of Patents", and he's wondering about the application of open-source methods to scientific research. He has two proposals, one of which I'll talk about today.

In some sort of ideal world (which for some folks also means Back In The Good Old Days of (X) Years Ago), science would be pretty much open-source already. Everyone would be able to find out what everyone else was working on, and comment on it or contribute to it as they saw fit. In chemistry and biology, the closest things we have now, as Benkler notes, are things like the Public Library of Science (open-source publishing) and the genomics tool Ensembl. Moving over to physics and math, you have the ArXiv preprint server, which is further down this path than anything that exists in this end of the world.

Note, of course, that these are all academic projects. Benkler points out that university research departments, for all the fuss about Bayh-Dole patenting, still get the huge majority of their money from granting agencies. He proposes, then, that universities adopt some sort of Open Research License for their technologies, which would let a university use and sublicense them (with no exclusivity) for research and education. (Commercial use would be another thing entirely.) This would take us back, in a way, to the environment of the "research exemption" that was widely thought to be part of patent law until recently (a subject that I keep intending to write about, but am always turned away from by pounding headaches.)

As Benkler correctly notes, though, this would mean that universities would lose their chance for the big payoff should they discover some sort of key research tool. A good example of this would be the Cohen/Boyer recombinant DNA patent, licensed out 467 times by Stanford for hundreds of millions of dollars. And an example of a failed attempt to go for the golden gusto would be the University of Rochester's reach for a chunk of the revenues from COX-2 inhibitors, despite never having made one. (That's a slightly unfair summary of the case, I know, but not as far from reality as Rochester would wish it to be.)

That's another one I should talk about in detail some time, because the decision didn't rule out future claims of that sort - it just said that you have to be slicker about it than the University of Rochester was. As long as there's a chance of drawing the winning patent lottery ticket, it's going to be hard to persuade universities to forgo their chance at it. Benkler's take is that the offsetting gains for universities, under the Open Research License, would be "reduced research impediments and improved public perception of universities as public interest organizations, not private businesses." To compensate them for the loss of the chance at the big payoff, he suggests "minor increases in public funding of university science."

Laudable. But will that really do it? As far as I can tell, most universities are pretty convinced already that they're just about the finest public interest organizations going. I'm not sure they feel that much need for good publicity, rightly or not. And Benkler's right that a relatively small increase in funding would give universities, on average, what they would make, on average, from chasing patent licensing money. But show me a university that's willing to admit that it's just "average."

The problem gets even tougher as you get to the research departments that really aren't average, because they're simultaneously the ones with technologies that would be most useful to the broader research community and the ones with the best chance of hitting on something big. I'll be surprised - pleasantly, but still very surprised - if the big heavy research lifters of the world agree to any such thing.

June 02, 2004

Industry vs. Academia: The Mental Aspect

It's been a while since I returned to this topic. Many differences remain for me to talk about, but I thought that it was time to address the biggest one, which is psychological. Some of you probably thought that the biggest difference was money. Can't ignore that one - it probably contributes to some of the effects I'll be talking about. But there's a separate mental component to graduate school that never really recurs, which should be good news to my readers who are working on their degrees.

Some of this is due to age, naturally enough. The research cohort out in industry ranges from fresh-out-of-school to greybeards in their fifties and sixties. (I can say that, since I'm in my early forties, the color changes in my own short beard notwithstanding.) Everyone in graduate school is a transient of one sort or another, usually someone whose life is still just getting going. But in the workplace, most people are more settled in their lives and careers. There are still some unsettling waves that move through industry, mergers and layoffs and reorganizations. But people respond to them differently than they would in their 20s - often better, sometimes worse, but differently.

And not all your co-workers in grad school are actually stable individuals, either. Some of these people wash out of the field for very good reasons, and you don't see as many of the outer fringes later on in your career. It's not that we don't have some odd people in the industrial labs, believe me. But the variance isn't as high as it is in school. Some of those folks are off by so many standard deviations that they fall right off the edge of the table.

Another factor is something I've already spoken about, the way that most graduate careers come down to one make-or-break research project. The only industrial equivalents are at the edge of the field with the most grad-school atmosphere: small startup companies that have one shot to make it with an important project. But in most companies, no matter how big a project gets, there's always another one coming along. Clinical candidate went down in flames? Terrible news, but you're working on another one by then. There's a flow to the research environment that gives things more stability.

The finish-the-project-or-die environment of graduate study leads to the well-known working hours in many departments. Those will derange you after a while: days, nights, weekends, holidays, Saturday nights and Sunday mornings. I worked 'em all myself when I was trying to finish my PhD, but I don't now. If a project is very interesting or important, I'll stay late, or once in a while work during a weekend. But otherwise, I arrange my work so that I go home at night. For one thing, I have a wife and two small children who'd much rather have me there, but even when I was single I found many more things to do than work grad-school hours. It took me some months after defending my dissertation before I could decompress, but I did. Having a life outside the lab is valuable, but it's a net that graduate students often have to work without.

But beyond all these, there's one great big reason for why grad school feels so strange in retrospect, and I've saved it for last: your research advisor. There's no other time when you're so dependent on one person's opinion of your work. (At least, there had better not be!) If your advisor is competent and even-tempered, your graduate studies are going to be a lot smoother. If you pick one who turns out to have some psychological sinkholes, though, then you're in for a rough ride and there's not much that can be done about it. Everyone has a fund of horror stories and cautionary tales, and there's a reason for that: there are too damn many of these people around.

Naturally, there are bad bosses in the industrial world. But, for the most part, they don't get quite as crazy as the academic ones can (there's that variance at work again). And they generally aren't the only thing running (or ruining) your life, either. There's the much-maligned HR department, which can in fact help bail you out if things get really bad. Moving from group to group is a lot easier at most companies than it can ever be in graduate school, and it's not like you lose time off the big ticking clock when you do it.

I can see in retrospect that I was a lot harder to get along with when I was in grad school. I responded to the pressure by getting more ornery, and I think that many other personalities deformed similarly. When I've met up with my fellow grad students in the years since, we seem to be different people, and with good reason. It isn't just the years.

April 20, 2004

Stuart Schreiber on Stuart Schreiber

The April issue of Drug Discovery Today has an intriguing interview (PDF file) with Stuart Schreiber of Harvard. Schreiber is an only partially human presence in the field, as a listing of his academic appointments will make clear: chairman, with an endowed professorship, of the Department of Chemistry at Harvard, investigator at the Howard Hughes Medical Institute, director of the NIH's Initiative for Chemical Genetics, faculty member of the joint Harvard/MIT Broad Institute (a genomic medicine effort), affiliate of Harvard's Department of Molecular and Cellular Biology and Harvard Medical School's Department of Cell Biology, member of Harvard's graduate program in Biophysics and the medical school's Immunology Department, a player in the early years of Vertex, founder of ARIAD Pharmaceuticals and Infinity Pharmaceuticals, and founding editor of Chemistry and Biology. (What other name would the journal have?)

Schreiber is extremely accomplished and intelligent, but he can also be quite hard to take. A powerful pointer to this tendency comes when the interviewer asks him about who's been his greatest inspiration - he leads off with Muhammad Ali and Neal Cassady, and for better or worse, that's just about the size of it. Mix those two together, give the resulting hybrid a burning interest in chemical biology and a chair at Harvard, and there you are.

I've not met him personally, but I've heard him lecture more than once. The first time I saw him, he was speaking on one of his big stories from past years, the immunomodulator FK-506. He hit the afterburners during the first slide and ascended into the stratosphere, leaving us ground-based observers with only a persistent vapor trail. Slide after slide came up, densely packed with years of data in a punishing, torrential rush - after a while, people in the audience were clutching their heads as their pens clattered to the floor. Some of my readers will, I think, have had similar Schreiberian experiences.

And the guy has no problem with saying just what's on his mind, although if I had those faculty positions, I wouldn't be feeling too many restraints myself. It's a mixed blessing. Some of what he's got to say is very sensible, even if no one else feels like saying it in so many words, but he can also come across as divorced from reality and impossibly arrogant. I would have to think that a post-doctoral position with him would be a rather stimulating experience, which would doubtless take place during days, nights, weekends, major and minor holidays, and probably during periodic flashback dreams in the years to come.

The interview starts out by asking Schreiber what he thinks of the new NIH Roadmap initiative. He sounds the alarm, correctly, about one thing it seems to emphasize:

". . .what is perhaps surprising to some people is how much emphasis the NIH has placed on small molecules and screening in an academic environment. A meeting with some senior pharma industry executives made me realize that there are many people who are unhappy with this activity. When I went back and read what is being proposed, some of the language suggests that the plan is to fund early drug discovery and development in an academic environment.

Yet some of the language also suggests that the Roadmap is about a parallel process of using chemistry and small-molecule synthesis and screening to interrogate biology. In this model, a parallel set of techniques is involved but the overall goals are very different. I am equally concerned as the pharma industry if the Roadmap were to place too much emphasis on the first model, because I think that a focus on drug discovery in academia would represent a missed opportunity. Sending the message to groups of industry-naïve biologists and chemists that they should now try to discover drugs in their labs could be problematic for a variety of reasons."

He's right on target there, I have to say. And what does this do to the arguments some people make that just about all the research the drug industry does is ripped off from NIH-funded work? (I've mentioned this topic before; as we get the archives working again I'll group those posts together with this one.) Schreiber goes on to point out that drug development works completely differently from academic research, and that mixing the two might well end up compromising the strengths of each.

Academia should do what it does best: exploration, discovering new islands and continents of knowledge that no one even knew were there. We in industry can do some of that, but our strong suit is finding concrete uses for such discoveries. We're good at doing the detail work of developing them into something that works feasibly, reproducibly, safely, and (dare I mention) profitably. Getting all those to happen at the same time is no mean feat, as any engineer or applied-research type will tell you at length.

I'll have more to blog on the Schreiber interview; not everything he says in it is quite so sensible. But this point was worth some craziness. I'd like to take some of the folks who try to tell me that the whole pharma industry is some sort of profit-seeking leech on the NIH-funded world and lock them in a room with the guy and a couple of projectors. As long as I could be around as his audience staggered out, groping for painkillers and rubbing their eyes. . .

April 01, 2004

Differences Between Academia and Industry, Pt. 2

One of the main things I noticed when I joined the pharmaceutical industry (other than the way my black robe itched and the way the rooster blood stained my shoes, of course) was how quickly one moved from project to project. That's in contrast to most chemistry grad-school experiences, where you end up on your Big PhD Project, and you stay on that sucker until you finish it (or until it finishes you.)

My B.PhDP. was a natural product synthesis, and I had plenty of time to become sick of it. My project seemed to be rather tired of me, too, judging by the way it bucked like a mad horse at crucial stages. Month after month it ground on, and the time stretched into years. And I was still making starting material, grinding it out just the way I had two years before, the same reactions to make the same intermediates, which maybe I could get to fly in the right direction this time. Or maybe not. . .time to make another bucket of starting material, back to the well we go. . .

Contrast drug discovery: reaction not working? Do another one. There's always another product you can be making - maybe this one will be good. Project not going well? Toxicity, formulation problems? Everyone will give it the hearty try, but after a while, everyone will join in to give it the hearty heave-ho, because something else will come along that's a better use of the time. Time's money.

It keeps you on your toes. You have to learn the behavior of completely new classes of molecules each time - no telling what they'll be like. You dig through the literature, try some reactions, and get your bearings quickly, because you don't have weeks or months to become familiar with things. The important thing is to get some chemistry going. If it doesn't make the product you expected, then maybe it'll make something else interesting. Send that in, too. You never know.

March 18, 2004

Differences Between Industry and Academia, Pt. 1

A reader's e-mail got me thinking about this topic. It's worth a number of posts, as you'd guess, since there are many substantial differences. Some are merely of degree (funding!), while others are of kind.

But the funding makes for larger changes than you'd think, so I'll get that one out of the way first. When I was in graduate school, my advisor's research group was actually pretty well-heeled. We had substantial grant money, and none of us had to be teaching assistants past our first year. But even so, we had to watch the expenditures. For example, we didn't order dry solvents, in their individual syringable bottles, from the chemical companies because those were too expensive. Instead, we had our solvent stills, which (to be fair) produced extremely good quality reagents at the price of the occasional fire.

Grad student labor is so cheap it's nearly free, so making expensive reagents was more cost-effective than buying them. (At least, it was if you weren't the person making them.) I had a starting material that's produced from pyrolysis of corn starch (levoglucosan, it's called, and I'd be happy to hear from anyone who's worked with the stuff.) At the time, it sold for $27 per 100 milligrams, and since I used it in fifty-gram batches, that was out of our price range for sure.
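
(A quick back-of-the-envelope, taking that catalog price at face value: fifty grams is five hundred of those 100-milligram units, and 500 × $27 works out to $13,500 per batch - in early-1980s grant dollars, no less.)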

So I pyrolyzed away, producing tarry sludge that had to be laboriously cleaned up over about a week to give something that would crystallize. (I saved the first small batch that did that for me back in the summer of 1984, and it's sitting in the same vial right next to me as I write. The label looks rather distressingly yellowed around the edges, I have to say.) A kilo of corn starch would net you about fifty grams of starting material, if everything worked perfectly. And if it didn't, well, I just started burning up another batch, because it's not like I had anything to do that Sunday night, anyway.

When I got my first industrial job, it took me a while to get all this out of my system. I needed an expensive iron complex at one point, about six months into my work, and sat down to order the things I needed to make it. My boss came by and asked what I was up to, and when I told him, asked me how much the reagent itself would cost. "About 900 dollars", I told him, whereupon he told me to forget it and just order the darn stuff. He pointed out that the company would spend a good part of that price just on my salary in the time it would take me to make it, and he was right, even at 1989 rates.
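
(To put rough numbers on that - and these are my guesses, not his: a salary of around $45,000 in 1989 comes to roughly $900 for a week's work, before benefits and overhead, so if the synthesis had eaten even a week of bench time, buying the complex would have been at worst a wash.)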

So we throw the money around, by most academic standards. But there can be too much of a good thing. There's a famous research institute in Europe, which I'm not quite going to name, that was famously well-funded for many years. They had a very large, very steady stream of income, and it bought the finest facilities anyone could want. Year after year, only the best. And what was discovered there, in the palatial labs? Well, now and then something would emerge. But nothing particularly startling, frankly - and from some of the labs, nothing much at all. You'd have to have a generous and forgiving spirit to think that the results justified the expenditure. There are other examples, over which for now I will draw the veil of discretion.

September 26, 2002

On the Money

Greg Hlatky over at A Dog's Life is right on target in his post of Tuesday the 24th. And that's not just because he said that my posts always make him think - of course, he could always be thinking "What's with this maniac, anyway?"

No, he's completely correct about the uses of time and money in academia versus industry. He points out that:

Industry and academia each have major constraints. At colleges and universities, it's money. Money is always in short supply and grants have to be used to cover the administration's greed in charging overhead, tuition and stipend for the students, purchase of laboratory chemicals and equipment, and so on. The money never seems enough and professors are always rattling their begging cups with funding agencies to continue their research.

What graduate programs have lots of is time and people. Research groups have hordes of post-docs and graduate students who can be kept working 16 hours a day, seven days a week, since graduate school is the last bastion of feudalism. The product of these two factors is a maniacal stinginess about chemicals and equipment - acetone and deuterated solvents are recycled, broken glassware is patched up over and over, syntheses start from calcium carbide and water - combined with a total lack of concern as to whether these rigors are time-efficient.

Oh, yeah. And it gets perpetuated as well by the feeling that if you're in the lab all day and all night, you must be productive - no matter how worthless and time-wasting the stuff you're doing. I've seen a number of people fall into that trap; I've fallen into it myself.

For a good example of the attitude Greg's talking about, see the recent long article by K. C. Nicolaou in Angewandte Chemie. It's an interesting synthetic story, that's for sure (Nicolaou and his group don't work on any boring molecules.) But it's marred by mentions of how this reaction was done at 2 AM, and how this sample was obtained on Christmas Eve, and how when I walked into the lab at 6 AM on Sunday, my people rushed up with the latest spectrum. . .there's just no need for this sort of thing. Of course, Nicolaou's people work hard - they couldn't make the things they make, as quickly as they make them, without working hard.

I recall during my first months in industry when it finally dawned on me that it was a lot better idea to order expensive reagents rather than make them, considering what I got paid and what delays would cost the projects I worked on. A liberating feeling, I can tell you. I've never looked back. Since then, I can spend a departmental budget with the best of 'em.