Guest post: improving impact factors

Today’s post comes from our Country Ambassador for China, Dave Lyons. We know many academic librarians will have something to add to this discussion; it’s certainly a hot topic at our University Library here in Sydney.

After completing his MLS in 1954, Eugene Garfield started his own consultancy, which later became the Institute for Scientific Information (ISI). In 1964, ISI began publishing the Science Citation Index (SCI), and the impact factor was born.
Impact factor is a relatively simple formula: the number of citations received in a given year to articles published in journal X over the previous two years ÷ the number of citable articles journal X published in those two years (there is also a five-year version). Around June, Thomson Reuters (which acquired ISI in 1992) will release its Journal Citation Reports (JCR), containing 2014 impact factors calculated from 2014 citations to articles published in 2012–2013.
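To make the arithmetic concrete, here is a minimal sketch of the two-year calculation in Python; the journal and all figures are invented for illustration, not taken from any real JCR data.

```python
# Minimal sketch of the two-year impact factor calculation described above.
# All numbers are hypothetical.

def two_year_impact_factor(citations, citable_items):
    """Citations received this year to articles published in the previous
    two years, divided by the citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 480 citations in 2014 to its 2012-2013 articles,
# of which 200 were counted as citable items.
print(two_year_impact_factor(480, 200))  # 2.4
```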
In the past 60 years, the impact factor has grown in importance, particularly in the natural sciences, and so has concern about it. Criticisms of the impact factor itself include:
  1. Irreproducibility – no one has the same access to citation data as Thomson Reuters, preventing independent confirmation or correction of errors.
  2. Skew – because the impact factor is an average, a single blockbuster paper can account for the majority of citations; an increased impact factor might be due entirely to one article even while citations to every other article fell.
  3. Inconsistency – the numerator counts all incoming citations regardless of article type, while the denominator includes only articles vetted by Thomson Reuters employees.*
  4. Lack of transparency – Thomson Reuters and publishers negotiate these calculations behind closed doors from time to time.

Additional criticisms have been levelled against how impact factors are used (or abused) rather than the numbers themselves. 

impact by Janine. Used with permission under CC BY-ND 2.0

For researchers, publication in impact factor journals can affect whether they get a job, a promotion, tenure or a grant – despite Thomson Reuters and other bodies cautioning against using it this way. In some countries, such as China, massive cash bonuses for publication in an impact factor journal have led to a black market, and everyone knows the system is broken. These high-stakes gambles are likely a primary contributor to the correlation between impact factor and retraction rates, and they place increased pressure on the peer review system to detect increasingly bold and intricate frauds.

Journals, meanwhile, face pressure to continue to improve their impact factor in order to increase revenue, stay solvent, or establish credibility and legitimacy, all while maintaining or improving quality. Questionable “impact factor” platforms have proliferated, most of which appear to be based in and/or targeting developing nations. At best, these sites are the first awkward steps towards building indigenous metrics platforms; at worst, they might be scams.

Generally, there are two responses to these challenges: develop institutional reforms or better bibliometrics. The Leiden Manifesto issued in April 2015 lays out 10 principles for achieving the former, while the latter approach focuses on article-level metrics and altmetrics, usually hand-in-hand with open access, open data, and post-publication peer review.

And that’s where I’d like to start the discussion: questions and thoughts on any of these areas are welcome, because eventually one will lead to another.

*Side note: this is also interesting when you consider that those potentially inflated citation counts are used to calculate cost-per-citation when evaluating “Big Deal” bundles.
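A quick hypothetical illustrates the point; the subscription cost and citation counts below are made up, not drawn from any real bundle.

```python
# Invented figures showing how inflated citation counts can flatter
# cost-per-citation when evaluating a "Big Deal" bundle.

def cost_per_citation(annual_cost, citation_count):
    return annual_cost / citation_count

bundle_cost = 50000  # hypothetical annual subscription cost

# Citations to vetted articles only:
print(cost_per_citation(bundle_cost, 5000))  # 10.0 per citation

# Adding citations to editorials, letters and other front matter
# makes the same bundle look cheaper per citation:
print(cost_per_citation(bundle_cost, 6250))  # 8.0 per citation
```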

Do you work with impact factors or other bibliometrics in your job? What are your observations?
