Yesterday's post introduced journal Impact Factors to those who haven't had the honor of meeting them yet. Everyone whose livelihood depends on scientific publication, though, already knows them well, since anything that can be measured will be used at performance evaluation time. IFs are a particular obsession in academic research, since publishing papers is one of those things that an aspiring tenure-seeking associate professor is expected to do. (On the priority list, it comes right after hauling down the grant money.)
But that's not what we value in industry. We know about the pecking order of journals, but we just don't get a chance to publish in them as often as academics do. I'd much rather have a paper in Angewandte Chemie than in Synthetic Communications (to pick the top and near-bottom of the reasonable organic chemistry journals), but it won't make or break my raise or promotion hopes. Now, having zero patents might do the trick, but that's because patents are a fairly good surrogate for the number of potentially lucrative drug projects you've worked on.
Nope, it's academia that has to live by these things, and there are complaints. On one level, people have pointed out that impact factors may not be measuring what they're supposed to. Here's a broadside in the British Medical Journal, pointing out (among other things) that the citation counts of the individual papers inside a given journal follow a power-law distribution, too. That skew gets glossed over when a single impact factor is assigned to the whole journal: the most-cited 50% of the papers in it can be cited ten times as much as the lesser 50%.
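To put some numbers on that, here's a quick sketch of a hypothetical 100-paper journal with exactly that sort of skew. None of these figures come from the BMJ paper; they're invented, and they're only there to show why a single journal-wide average misleads:

```python
from statistics import mean, median

# Invented citation counts for a hypothetical 100-paper journal, chosen so that
# the most-cited half of the papers pulls in ten times the citations of the rest.
citations = [30] * 10 + [5] * 40 + [1] * 50

ranked = sorted(citations, reverse=True)
print(sum(ranked[:50]), "vs.", sum(ranked[50:]))  # 500 vs. 50 citations

print("journal-wide 'impact factor':", mean(citations))          # 5.5
print("what the median paper actually gets:", median(citations))  # 3.0
```

Half the papers in this toy journal sit at one citation apiece, but every one of them gets to wear the 5.5.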
The less interesting papers are getting a free impact ride, while the better ones could presumably have been playing off in a super-impact league of their own, if such a journal existed. The authors also point out that journals covering new fields with a rapidly expanding literature - much of which is also ephemeral - have necessarily inflated IFs. Do those numbers really indicate their quality? (Well now, say the pro-impact people, isn't this just the sort of carping you'd expect from the BMJ, who live in the shadow of the more-prestigious Lancet?)
But there's also the problem of self-citation. As ISI's own data make clear, lousy journals tend to have more of it. (The text of that article seems to spend most of its time trying to deny what its graphs are saying, as far as I can see.) So if you think that the Journal of Pellucidarian Materials Science has an unimpressive impact factor, wait until you see it corrected by stripping out all the citations from the other papers in J. Pelluc. Mat. Sci. If you accept what IFs are supposed to be measuring, you have to conclude that the vast majority of journals are simply not worth bothering with.
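To see how much of a haircut that correction can give, here's a back-of-the-envelope sketch. The journal and every figure below are hypothetical; the two-year impact factor itself is just the citations received this year by the previous two years' papers, divided by the number of those papers:

```python
# Hypothetical figures for our imaginary J. Pelluc. Mat. Sci.
citable_items   = 200   # papers published over the previous two years
total_citations = 240   # citations those papers received this year
self_citations  = 150   # ...of which this many came from the journal's own pages

reported_if  = total_citations / citable_items                      # 1.20
corrected_if = (total_citations - self_citations) / citable_items   # 0.45

print(f"reported impact factor: {reported_if:.2f}")
print(f"minus self-citations:   {corrected_if:.2f}")
```

A journal that looked merely mediocre now looks like one that hardly anyone outside its own contributors is reading.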
On a different level, there's plenty of room to hate the whole idea, regardless of how it's implemented. The number of citations, say such critics, is not necessarily the only (or best) measure of a paper's worth, or the worth of the journal it appears in. (As that link shows, the original papers from both Salk and Sabin on their polio vaccines are on no one's list of highly cited work.)
It is no coincidence, they go on to point out, that the promulgators of this idea make their living by selling journal citation counts. And by conducting interviews with the authors of highly cited papers and with the editors of journals whose impact factors are moving up, and God only knows what else. The whole thing starts to remind one of the Franklin Mint.