Measuring the lack of impact of journal papers

I’ve been involved in a depressing discussion on the Astronomers Facebook page, part of which was about the widespread use of Journal Impact Factors by appointments panels, grant agencies, promotion committees, and so on. It is argued (by some) that younger researchers should be discouraged from publishing in, e.g., the Open Journal of Astrophysics, because it doesn’t have an impact factor and they would therefore be jeopardising their research careers. In fact it takes two years for a new journal to acquire an impact factor, so if you take this advice seriously nobody should ever publish in any new journal.

For the record, I will state that no promotion committee, grant panel or appointment process I’ve ever been involved in has even mentioned impact factors. However, it appears that some do, despite the fact that they are demonstrably worse than useless at measuring the quality of publications. You can find comprehensive debunking of impact factors and exposure of their flaws all over the internet if you care to look: a good place to start is Stephen Curry’s article here. I’d make an additional point, which is that the impact factor uses citation information for the journal as a whole as a sort of proxy measure of the research quality of the papers published in it. But why on Earth should one do this when citation information for each paper is freely available? Why use a proxy when it’s trivial to measure the real thing?

The basic statistical flaw behind impact factors is that they are based on the arithmetic mean number of citations per paper. Since the distribution of citations in any journal is strongly skewed, this number is dragged upwards by a few papers with extremely large numbers of citations. In fact, most papers published have far fewer citations than the impact factor of the journal. It’s all very misleading, especially when used as a marketing tool by cynical academic publishers.
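To see just how misleading the mean can be, consider a toy example in Python (the citation counts here are invented purely for illustration):

    from statistics import mean, median

    # Hypothetical citation counts for ten papers in one journal:
    # most are barely cited, one is a blockbuster.
    citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 120]

    print(mean(citations))    # 13.4 -- the impact-factor-style average
    print(median(citations))  # 1.5  -- what a typical paper actually gets

One blockbuster paper is enough to drag the mean an order of magnitude above what a typical paper in this imaginary journal achieves.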

Thinking about this on the bus on my way into work this morning I decided to suggest a couple of bibliometric indices that should help put impact factors into context. I urge relevant people to calculate these for their favourite journals:

  • The Dead Paper Fraction (DPF). This is defined to be the fraction of papers published in the journal that receive no citations at all in the census period.  For journals with an impact factor of a few, this is probably a majority of the papers published.
  • The Unreliability of Impact Factor Factor (UIFF). This is defined to be the fraction of papers with fewer citations than the Impact Factor. For many journals this is most of their papers, and the larger this fraction is, the more unreliable their Impact Factor is. (A sketch of how both indices might be computed follows this list.)
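Here is a minimal sketch of how both indices might be computed from a journal’s per-paper citation counts (Python, with invented numbers; real counts would have to come from a citation database):

    def dead_paper_fraction(citations):
        """Fraction of papers receiving no citations at all in the census period."""
        return sum(1 for n in citations if n == 0) / len(citations)

    def uiff(citations, impact_factor):
        """Fraction of papers with fewer citations than the journal's impact factor."""
        return sum(1 for n in citations if n < impact_factor) / len(citations)

    # Invented numbers for illustration only.
    citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 120]
    impact_factor = sum(citations) / len(citations)  # 13.4, the mean-based "IF"

    print(f"DPF  = {dead_paper_fraction(citations):.0%}")   # DPF  = 30%
    print(f"UIFF = {uiff(citations, impact_factor):.0%}")   # UIFF = 90%

Note that whenever the citation distribution is strongly skewed to the right, the mean sits well above the median, so the UIFF is bound to exceed one half, which is rather the point.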

Another useful measure for individual papers is

  • The Corrected Impact Factor. If a paper with N actual citations is published in a journal with impact factor I, then the corrected impact factor is C = N - I. For a deeply uninteresting paper published in a flashily hyped journal this will be large and negative, and should be viewed accordingly by relevant panels.
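Continuing the sketch (numbers made up as before):

    def corrected_impact_factor(n_citations, impact_factor):
        # C = N - I: large and negative for a barely cited paper in a hyped journal.
        return n_citations - impact_factor

    # A hypothetical paper with 2 citations in a journal with impact factor 30:
    print(corrected_impact_factor(2, 30))  # -28

Nothing deep is going on here: the point is simply that the correction is trivial to compute once one looks at actual citations rather than the journal average.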

Other suggestions for citation metrics less stupid than the impact factor are welcome through the comments box…


13 Responses to “Measuring the lack of impact of journal papers”

  1. lordbubonicus Says:

    I read that discussion about impact factors with interest. In fact I read the whole thread with some interest, particularly the long explanations put up by the AAS representative.

    Do you think that the use of impact factors or not is a USA vs UK division? I’ve noticed previously that there’s a heavy American bias on the Astronomers Facebook group, and that a lot of the discussion on subjects like this is coloured by what’s commonplace in the USA.

  2. […] “I’ve been involved in a depressing discussion on the Astronomers Facebook page, part of which was about the widespread use of Journal Impact factors by appointments panels, grant agencies, promotion committees, and so on …” (more) […]

  3. Sesh Nadathur Says:

    We had a discussion about open access and the Open Journal last week here at Portsmouth. A few points that came out of that discussion might be of interest:

    – there were a few mildly sceptical voices, mostly from permanent staff, worried about the potential impact on junior researchers’ careers from publishing in an “unknown” journal,
    – sceptics were primarily those who had been educated in the US,
    – a senior academic who was broadly supportive of the OJ suggested that if the editorial board were to succeed in negotiating the publication of all papers from some big collaboration (in the way that all Planck papers are submitted to A&A), this would go a long way to helping the journal “catch on”,
    – some people were also unhappy with the idea of putting a paper on the arXiv for all to see before it had undergone peer review – I think this was about professional embarrassment in case it required revision,
    – again, this opinion was more common among those who had been educated in the US (as well as among astronomers, as opposed to cosmologists).

  4. Reblogged this on Disturbing the Universe and commented:
    Why journal impact factor is a meaningless metric…

The most relevant parameter is probably your ‘dead paper fraction’, although just like any citation index, deadness needs time to mature. I have never seen JIFs used by grant panels, promotion panels, etc., but I have often seen them mentioned in supporting documentation to prove to people in other fields that a journal called ‘A&A’ is high-quality science and in no way related to the Daily Mail or indicating alcohol abuse. There ARE a lot of bogus journals around, and people in the field know which ones are real but people outside don’t. For young people it is more important to publish in recognized journals; established scientists can publish anywhere and Hawkins could publish on wordpress and still get cited. Reputation sadly is important, and if you don’t yet have a personal reputation, you need to borrow it from the journal. OJA should initially aim for papers from well-known scientists.

    And I am definitely an astronomer – not a cosmologist. The word ‘cosmos’ seems to have become more distant since Sagan’s days.

Indeed. I should wear my glasses more often. I now expect a phone call with a telling-off.

  7. Distributions are more informative than indices:

    https://bernardrentier.wordpress.com/2015/12/31/denouncing-the-imposter-factor/

    “… using the impact factor of the journals where [one] publishes is like measuring someone’s qualities by the club where he/she is allowed to go dining. Stars are for restaurants, not for their customers…”

It’s very strange that some astronomers seem to care so much about impact factors and the prestige of journals. As far as I know, all of the major astronomy journals (ApJ, AJ, MNRAS, A&A) have very high acceptance rates (close to 90%), so it’s not like getting a paper accepted necessarily means much.
