Archive for Journal Impact Factor

Do “high-quality journals” always publish “high-quality papers”?

Posted in Uncategorized on May 23, 2023 by telescoper

After a busy morning correcting examination scripts, I have now reached the lunch interval and thought I’d use the opportunity to share a paper I found via Stephen Curry on Twitter, with the title In which fields do higher impact journals publish higher quality articles? It’s quite telling that anyone should need to ask the question. It’s also telling that the paper, in a Springer journal called Scientometrics, is behind a paywall. I can at least share the abstract:

The Journal Impact Factor and other indicators that assess the average citation rate of articles in a journal are consulted by many academics and research evaluators, despite initiatives against overreliance on them. Undermining both practices, there is limited evidence about the extent to which journal impact indicators in any field relate to human judgements about the quality of the articles published in the field’s journals. In response, we compared average citation rates of journals against expert judgements of their articles in all fields of science. We used preliminary quality scores for 96,031 articles published 2014–18 from the UK Research Excellence Framework 2021. Unexpectedly, there was a positive correlation between expert judgements of article quality and average journal citation impact in all fields of science, although very weak in many fields and never strong. The strength of the correlation varied from 0.11 to 0.43 for the 27 broad fields of Scopus. The highest correlation for the 94 Scopus narrow fields with at least 750 articles was only 0.54, for Infectious Diseases, and there was only one negative correlation, for the mixed category Computer Science (all), probably due to the mixing. The average citation impact of a Scopus-indexed journal is therefore never completely irrelevant to the quality of an article but is also never a strong indicator of article quality. Since journal citation impact can at best moderately suggest article quality it should never be relied on for this, supporting the San Francisco Declaration on Research Assessment.

There is some follow-up discussion on this paper and its conclusions here.

The big problem, of course, is how you define “high-quality papers” and “high-quality journals”. As in the above discussion, this usually resolves itself into something to do with citation impact. That is problematic to start with, but if that’s the route you want to go down, there is enough readily available article-level information for each paper nowadays that you don’t need any journal metrics at all. The academic journal industry won’t agree, of course, as it’s in its interest to perpetuate the falsehood that such rankings matter. The fact that the correlation between article “quality” measures and journal “quality” measures is weak does not surprise me: many weak papers have passed peer review and appeared in high-profile journals. This is another reason for disregarding the journal entirely. Don’t judge the quality of an item by the wrapping, but by what’s inside it!

There is quite a lot of discussion in my own field of astrophysics about what the “leading journals” are. Different ranking methods produce different lists, which is not surprising given the arbitrariness of the methods used. According to this site, The Open Journal of Astrophysics ranks 4th out of 48 journals, but it doesn’t appear on some other lists because the academic publication industry, which acts as gatekeeper via Clarivate, does not seem to like its unconventional approach. According to Exaly, Monthly Notices of the Royal Astronomical Society (MNRAS) is ranked in 13th place, while according to this list it is 14th. No disrespect to MNRAS, but I don’t see any objective justification for calling it “the leading journal in the field”.

The top-ranked journals in astronomy and astrophysics are generally review journals, which always attract lots of citations through references like “see Bloggs 2015 and references therein”. Many of these review articles are really excellent and contribute a great deal to their discipline, but it’s not obvious that they can be compared with actual research papers. At OJAp we decided to allow review articles of sufficiently high quality because we see the journal primarily as a service to the community rather than a service to the bean-counters who make the rankings.

Now, back to the exams…

Clarivate’s Web of Inconsistency

Posted in Open Access, The Universe and Stuff on March 13, 2023 by telescoper

I am involved in the (painfully slow) process of trying to get the Open Journal of Astrophysics listed by Clarivate, which some researchers – or rather, their funding agencies – feel to be important. One of the reasons for this seems to be that some researchers are only allowed to publish in journals with an official Journal Impact Factor (JIF) and Clarivate has set itself up as the gatekeeper for those, although they can easily be calculated using data in the public domain.

Leaving Clarivate aside for a moment, I was googling around this morning and found an independent listing of the Journal Impact Factor for the Open Journal of Astrophysics for 2021, namely 7.4.

Nice. Not bad, considering the Open Journal of Astrophysics is run on a shoestring.

Anyway, although I have grave reservations about the JIF, I want to make the Open Journal available to as wide a range of authors as possible, so I applied for listing by Clarivate in August 2022. I waited and waited. Then, a couple of weeks ago, somebody asked me on social media about it and I tagged Clarivate in my reply. No doubt by sheer coincidence, I received a reply from Clarivate last week, just a matter of days after mentioning them on social media. A similar thing has happened before. It seems that if you want to ask Clarivate something you have to do it in public.

At least they replied eventually. We’re still not listed though. Not yet anyway. Among the feedback I received was this:

The volume of scholarly works published annually is expected to be within ranges appropriate to the subject area. However, we have noticed that the publication volume is not in line with similar journals covering this subject area.

When we first started up the Open Journal of Astrophysics I expected this would be an issue as we are new and have published many fewer papers than the big hitters in the field such as MNRAS and ApJ. However, after doing a bit of research among the astronomical journals actually listed on the Web of Science, I changed my mind and thought it wouldn’t be a problem. It seems I was wrong.

Take, for example, the Serbian Astronomical Journal, which is listed by Clarivate. I’m mentioning this journal not because I have anything against it: it’s a free Open Access journal and that is very laudable. I just want to use it as an exemplar to demonstrate an inconsistency in the above feedback.

According to its web page, the Serbian Astronomical Journal (SerAJ) has an official impact factor of 1.1. A search on NASA/ADS reveals that since 2019 it has published 46 papers which have garnered a total of 69 citations between them. This journal has been published under its current name since 1998.

The Open Journal of Astrophysics (OJAp) is not listed by Clarivate so does not have an official journal impact factor, but I have calculated one here and it is also mentioned above. Since 2019 the Open Journal of Astrophysics has published 69 papers (actually 70, but one has not yet appeared on NASA/ADS). These papers have so far received a total of 1365 citations.

So OJAp has published 50% more papers than SerAJ, with twenty times the citation impact, and a far higher JIF, yet OJAp is not listed by Clarivate but SerAJ is. Can anyone out there explain the reason to me, or shall I assume the obvious?
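For anyone who wants to check counts like these for themselves, here is a minimal sketch, mine rather than anything official, of how one might query the NASA/ADS API. It assumes you have a personal ADS API token, and that OJAp and SerAJ are the ADS bibstems for the two journals:

```python
# Rough sketch: count papers and total citations for a journal via NASA/ADS.
# Requires a personal API token from
# https://ui.adsabs.harvard.edu/user/settings/token
import requests

ADS_URL = "https://api.adsabs.harvard.edu/v1/search/query"
TOKEN = "your-ads-api-token"  # placeholder: substitute your own token

def journal_counts(bibstem, start_year, end_year):
    """Return (number of papers, total citations) for a journal bibstem."""
    params = {
        "q": f"bibstem:{bibstem} year:{start_year}-{end_year}",
        "fl": "citation_count",
        "rows": 2000,  # ample for small journals
    }
    headers = {"Authorization": f"Bearer {TOKEN}"}
    r = requests.get(ADS_URL, params=params, headers=headers)
    r.raise_for_status()
    docs = r.json()["response"]["docs"]
    return len(docs), sum(d.get("citation_count", 0) for d in docs)

# e.g. compare the two journals discussed above:
# journal_counts("OJAp", 2019, 2023) versus journal_counts("SerAJ", 2019, 2023)
```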

The Gaming of Citation and Authorship

Posted in Open Access on February 22, 2023 by telescoper

About ten days ago I wrote a piece about authorship of scientific papers in which I pointed out that in astrophysics and cosmology many “authors” (i.e. people listed in the author list) of papers, largely those emanating from large consortia, often haven’t even read the paper they are claiming to have written.

I now draw your attention to a paper by Stuart Macdonald on the same theme.

You can find the full paper here, but unfortunately it requires a subscription. Open Access hasn’t reached sociology yet.

The paper focuses on practices in medicine, but it would be very wrong to assume that the issues are confined to that discipline; others have already fallen into the mire. I draw your attention in particular to the sentence:

Many authors in medicine have made no meaningful contribution to the article that bears their names, and those who have contributed most are often not named as authors. 

The first bit certainly also applies to astronomy, for example.

The paper does not just discuss authorship, but also citations. I won’t discuss the Journal Impact Factor further, as any sane person knows that it is daft. Citations are not used only to determine the JIF, however. Article-level citations make more sense, but they are also not immune from gaming, and although they undoubtedly contain some information, they do not tell the whole story. Nor will I discuss the alleged ineffectiveness of peer review in medicine (about which I know nothing). I will, however, end with one further quote from the abstract:

The problem is magnified by the academic publishing industry and by academic institutions….

So many problems are…

The underlying cause of all this is that the people in charge of academic institutions nowadays have no concept of the intrinsic value of research and scholarship. The only things that are meaningful in their world are metrics. Everything we do is now reduced to key performance indicators, such as publication and citation counts. This mindset is a corrupting influence that encourages perverse behaviour among researchers as well as managers.

The (unofficial) 2021 Journal Impact Factor for the Open Journal of Astrophysics

Posted in Open Access, The Universe and Stuff on July 16, 2022 by telescoper

Since a few people have been asking about the Journal Impact Factor (JIF) for the Open Journal of Astrophysics, I thought I’d do a quick post in response.

When asked about this, my usual reply is (a) to repeat the arguments why the impact factor is daft and (b) to point out that the official JIF is calculated by Clarivate, so it’s up to them – us plebs don’t get a say.

On the latter point Clarivate takes its bibliometric data from the Web of Science (which it owns). I have applied on behalf of the Open Journal of Astrophysics to be listed in the Web of Science but it has not yet been listed.

Anyway, the fact that it’s out of my hands doesn’t stop people from asking, so I thought I’d do my own calculation using not the Web of Science but NASA/ADS (which probably underestimates citation numbers but is freely available, so you can check the numbers using the interface here); the official NASA/ADS abbreviation for the Open Journal of Astrophysics is OJAp.

For those of you who can’t be bothered to look up the definition: the impact factor for a given year is the number of citations received in that year by all the papers the journal published over the previous two years, divided by the total number of papers it published over the same period. It is therefore the average number of citations per paper, measured in a two-year window. Since our first full year of publication was 2019, the first year for which we can calculate a JIF is 2021 (i.e. last year), using papers published in 2019 and 2020 and citations received in 2021.
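In symbols (the notation here is mine, not Clarivate’s), the impact factor of a journal for year $N$ is

$$\mathrm{JIF}_N = \frac{C_N(N-1) + C_N(N-2)}{P_{N-1} + P_{N-2}},$$

where $C_N(Y)$ is the number of citations received in year $N$ by the papers the journal published in year $Y$, and $P_Y$ is the number of papers it published in year $Y$.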

I stress again that we don’t have an official Journal Impact Factor for the Open Journal of Astrophysics, but one can calculate its value easily. In 2019 and 2020 we published 12 and 15 papers respectively, a total of 27. These papers were cited a total of 193 times in 2021. The journal impact factor for 2021 is therefore … roll on the drums … 193/27, which gives 7.15.

If you don’t believe me, you can check the numbers yourself. For comparison, the latest available Impact Factor (2020) for Monthly Notices of the Royal Astronomical Society is 5.29, while that of Astronomy & Astrophysics is 5.80. OJAp’s first full year of publication was 2019 (in which we published 12 papers) but we did publish one paper in 2018. Based on the 134 citations received in 2020 by these 13 papers, our 2020 Journal Impact Factor was 10.31, much higher than that of MNRAS or A&A.

Furthermore, we published 32 papers in 2020 and 2021, and these have so far received 125 citations in 2022. Our Journal Impact Factor for 2022 will therefore be at least 125/32 = 3.91, and if those 32 papers are cited at the same rate for the rest of this year the 2022 JIF will be about 7.5.
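Just to make the arithmetic explicit, here is a trivial Python sketch (mine, nothing official) that reproduces the numbers quoted above:

```python
def impact_factor(citations_in_year, n_papers_prev_two_years):
    """JIF for year N: citations received in year N by papers published
    in years N-1 and N-2, divided by the number of such papers."""
    return citations_in_year / n_papers_prev_two_years

# Figures quoted above, taken from NASA/ADS:
print(round(impact_factor(193, 12 + 15), 2))  # 2021 JIF for OJAp: 7.15
print(round(impact_factor(134, 13), 2))       # 2020 JIF for OJAp: 10.31
print(round(impact_factor(125, 32), 2))       # lower bound on the 2022 JIF: 3.91
```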

Who knows, perhaps these numbers will shame Clarivate into giving us an official figure?

With so much bibliometric information available at the article level there is no reason whatsoever to pay any attention to such a crudely aggregated statistic at the journal level as the JIF. One should judge the contents, not the packaging. I am, however, fully aware that many people who hold the purse strings for research insist on publications in journals with a high JIF. If there were any fairness in the system they would be mandating astronomy publications in OJAp rather than MNRAS or A&A.

Anyway, it might annoy all the right people if I add a subtitle to the Open Journal of Astrophysics: “The World’s Leading Astrophysics Journal”…

Open Journal of Astrophysics Impact Factor Poll

Posted in Open Access on February 5, 2021 by telescoper

A few people ask from time to time whether the Open Journal of Astrophysics has a Journal Impact Factor.

For those of you in the dark about this, the impact factor for year N, which is usually published in year N+1, is based on the average number of citations obtained in year N by papers published in years N-1 and N-2, so it requires two complete years of publishing.

For the OJA, therefore, the first year for which an official IF can be constructed is 2021; it would be published in 2022 and would be based on the citations gained in 2021 (this year) by papers published in 2019 and 2020. Earlier years were incomplete, so no IF can be defined for them.

It is my personal view that article-level bibliometric data are far more useful than journal-level descriptors such as the Journal Impact Factor (JIF). I think the Impact Factor is very silly, actually. Unfortunately, however, there are some bureaucrats who seem to think that the Journal Impact Factor is important, and some of our authors think we should apply to have an official one.
What do you think? If you have an opinion you can vote on the Twitter poll here.

I should add that my criticisms of the Journal Impact Factor are not about the Open Journal’s own citation performance. We have every reason to believe our impact factor would be pretty high.

Comments welcome.

Measuring the lack of impact of journal papers

Posted in Open Access on February 4, 2016 by telescoper

I’ve been involved in a depressing discussion on the Astronomers Facebook page, part of which was about the widespread use of Journal Impact Factors by appointments panels, grant agencies, promotion committees, and so on. It is argued (by some) that younger researchers should be discouraged from publishing in, e.g., the Open Journal of Astrophysics, because it doesn’t have an impact factor and they would therefore be jeopardising their research careers. In fact it takes two years for a new journal to acquire an impact factor, so if you take this advice seriously nobody should ever publish in any new journal.

For the record, I will state that no promotion committee, grant panel or appointment process I’ve ever been involved in has even mentioned impact factors. However, it appears that some do, despite the fact that they are demonstrably worse than useless at measuring the quality of publications. You can find comprehensive debunking of impact factors and exposure of their flaws all over the internet if you care to look: a good place to start is Stephen Curry’s article here. I’d make an additional point, which is that the impact factor uses citation information for the journal as a whole as a sort of proxy measure of the research quality of the papers published in it. But why on Earth should one do this when citation information for each paper is freely available? Why use a proxy when it’s trivial to measure the real thing?

The basic statistical flaw behind impact factors is that they are based on the arithmetic mean number of citations per paper. Since the distribution of citations in all journals is very skewed, this number is dragged upwards by a few papers with extremely large numbers of citations. In fact, most papers published have many fewer citations than the impact factor of the journal. It’s all very misleading, especially when used as a marketing tool by cynical academic publishers.
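The point is easy to demonstrate with a toy simulation (mine; the lognormal is just a convenient stand-in for a skewed citation distribution, not a claim about any real journal):

```python
import random

random.seed(42)
# Simulate a skewed "citation" distribution: heavy-tailed, like real journals.
cites = [int(random.lognormvariate(1.0, 1.2)) for _ in range(10000)]

mean = sum(cites) / len(cites)
below = sum(c < mean for c in cites) / len(cites)
print(f"mean (the 'impact factor'): {mean:.1f}")
print(f"fraction of papers below the mean: {below:.0%}")  # typically ~70-80%
```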

Thinking about this on the bus on my way into work this morning I decided to suggest a couple of bibliometric indices that should help put impact factors into context. I urge relevant people to calculate these for their favourite journals:

  • The Dead Paper Fraction (DPF). This is defined to be the fraction of papers published in the journal that receive no citations at all in the census period.  For journals with an impact factor of a few, this is probably a majority of the papers published.
  • The Unreliability of Impact Factor Factor (UIFF). This is defined to be the fraction of papers with fewer citations than the Impact Factor. For many journals this is most of their papers, and the larger this fraction is the more unreliable their Impact Factor is.

Another useful measure for individual papers is

  • The Corrected Impact Factor. If a paper with a number N of actual citations is published in a journal with impact factor I then the corrected impact factor is C=N-I. For a deeply uninteresting paper published in a flashily hyped journal this will be large and negative, and should be viewed accordingly by relevant panels.
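To make these definitions concrete, here is a minimal sketch (mine, with made-up citation counts) of how the three quantities could be computed from article-level citation data:

```python
from statistics import mean

def dead_paper_fraction(citations):
    """DPF: fraction of papers with zero citations in the census period."""
    return sum(c == 0 for c in citations) / len(citations)

def uiff(citations):
    """UIFF: fraction of papers cited fewer times than the journal's mean
    citation rate (used here as a stand-in for its impact factor)."""
    jif = mean(citations)
    return sum(c < jif for c in citations) / len(citations)

def corrected_impact_factor(n_citations, journal_if):
    """CIF: C = N - I for an individual paper."""
    return n_citations - journal_if

# Made-up, but typically skewed, citation counts for one journal:
cites = [0, 0, 0, 1, 1, 2, 3, 5, 8, 40]
print(dead_paper_fraction(cites))               # 0.3
print(uiff(cites))                              # 0.8: most papers sit below the mean
print(corrected_impact_factor(1, mean(cites)))  # a paper with 1 citation: -5.0
```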

Other suggestions for citation metrics less stupid than the impact factor are welcome through the comments box…