
Journals: impact factors are too highly valued

25 August 2017

In Correspondence, Linda Butler shows that researchers in Australia have been publishing more papers since the number of publications was introduced as a performance indicator for research. Butler points out that there is now "little incentive to strive for placement in a prestigious journal. Whether a publication is a groundbreaking piece in Nature or a pedestrian piece in a low-impact journal, the rewards are identical".

The point is well made, but her phrase highlights another growing problem in measuring performance that, if left unchecked, threatens to have a major impact on science policy and progress: an over-reliance on journal impact factors to judge the worth of scientists.

It is increasingly common to hear scientists making snap judgements about the quality of others' work simply by perusing the names of the journals in which they publish, without making any attempt to read the papers themselves. This habit is dangerous: quite brilliant work can appear in a 'lesser' journal, either because its subject is not currently fashionable or because its author has special reasons for preferring a specialist forum. It is dangerous, too, because it erodes the research community's capacity to determine its own direction.

An ex-colleague of mine, for example, chose to publish his excellent work on nerve regeneration, which could have appeared anywhere, in a highly specialist surgical journal, because that is where he thought it most likely to inspire immediate clinical use.

The professional editorial staff of very high-impact journals such as Nature have a primary responsibility to the success of their journal: its circulation, advertising, impact statistics and reputation. Deluged with submissions from authors hoping to publish in a journal that will give them credibility in a world of instant judgements, these editors must screen papers to see whether they meet the journal's needs before sending them out for peer review. As a result, most submissions are rejected for reasons other than flawed science.

I have no criticism of this approach: it makes sense in the commercial world of journal production. The problem arises when scientists and administrators of science use the placement of papers to judge the worth of researchers and institutions, and to decide where to award grants and fellowships. The more we couple the allocation of resources to publication in 'top' journals, the more we effectively hand over the direction of research to a small group of professional editors, who never sought this responsibility and who, excellent though they may be at their intended jobs, are unlikely to be the best people to bear it.

Most of us are, at least sometimes, the judges as well as the judged. If we do not consistently take the trouble to judge papers by their content rather than by their location, the direction of science will come to be determined, however unintentionally, by an editorial élite. We shall have only ourselves to blame.