I had an email asking if scientists in industry care about journal impact factors. It's an interesting question, but it needs to be answered in parts. Unless you deal with academic publishing, the phrase probably doesn't mean much. "Impact factors" are an attempt to quantify what everyone knows empirically: some journals are more prestigious than others. You know how we science types love to quantify stuff.
The whole business comes from the folks at ISI (now owned by Thomson). They had been publishing the Citation Index for years, which was (and is) a way to find out who had referenced a given paper in the scientific literature after it was published. This can be useful if you want to see if anyone's followed up or commented on an interesting paper (or if you just want to see if anyone's cited your own work).
And as ISI realized early on, it could also furnish some interesting rankings of "most-cited" papers in a given field. (Here's a recent set of lists from Chemical Abstracts, the big dog in the chemical information world, who got into the citation-counting business themselves.) You could figure out who the most cited authors are, too, although that ISI link won't rank-order them for you. (You have to pay for that data! Or you can look here.) There are lists of the most highly cited institutions, too, naturally.
About ten years ago, they introduced the Impact Factor to do the same thing for scientific journals. The standard version is the number of citations a journal's papers pick up in a given year, counting only the papers it published in the two preceding years, divided by the number of papers it published in that window: the average number of cites per recent paper, in other words.
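The arithmetic itself is nothing fancy. Here's a quick sketch of that two-year calculation, with the journal and its numbers entirely made up for illustration:

```python
# A minimal sketch of the standard two-year impact factor.
# All figures below are invented for illustration.

def impact_factor(citations_this_year: int, papers_prior_two_years: int) -> float:
    """Citations received this year to papers from the two preceding
    years, divided by the number of papers published in those two years."""
    return citations_this_year / papers_prior_two_years

# A hypothetical journal: 150 papers over two years, 420 citations to them.
jif = impact_factor(420, 150)
print(f"{jif:.3f}")  # reported, naturally, to three decimal places
```

That last line is half a joke: the official numbers really are quoted to three decimals, which lends a precision the underlying counts don't always deserve.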
The publishing community - initially rather worried and skeptical, if my memory serves - has gone completely crazy over the whole idea. Now journals advertise themselves by their impact factors. "Publish here! We're a good journal, really! We have proof!" If you'd like to know what a particular journal's rating is, they'll probably shout it out if it's any good at all. A failure to mention the number, down to three decimal places, is an act that speaks for itself.
Want the whole list? It can be rather hard to come by, unless you're a paying customer of ISI's, but here's a place to start. Those aren't the latest figures, but they'll do. You'll notice that at the top are a bunch of review journals, who publish comparatively few papers but get cited out the wazoo. Among the original-research journals in the top ranks are big kahunas like Cell, Nature, the New England Journal of Medicine, Science, and such. But I find a perverse fascination in browsing the low end of the scale. The Ethiopian Medical Journal? Fertiliser Research? Bovine Practice? Annals of Saudi Medicine? Surely some of these trench-dwellers ceased publication and vanished from sight during the rating period. The list of "0.000" impact factors is particularly alarming, although most of the journals listed are there by some sort of statistical artifact or (I presume) don't really exist. But if no one reads them, how do we know if they're real or not?
In the next installment, we'll look at some problems with the whole idea - there are some - and I'll tell you if we industrial types give a hoot about it or not. . .