After a busy morning correcting examination scripts, I have now reached the lunch interval and thought I’d use the opportunity to share a paper, which I found via Stephen Curry on Twitter, entitled In which fields do higher impact journals publish higher quality articles?. It’s quite telling that anyone should have to ask the question. It’s also telling that the paper, in a Springer journal called Scientometrics, is behind a paywall. I can at least share the abstract:
The Journal Impact Factor and other indicators that assess the average citation rate of articles in a journal are consulted by many academics and research evaluators, despite initiatives against overreliance on them. Undermining both practices, there is limited evidence about the extent to which journal impact indicators in any field relate to human judgements about the quality of the articles published in the field’s journals. In response, we compared average citation rates of journals against expert judgements of their articles in all fields of science. We used preliminary quality scores for 96,031 articles published 2014–18 from the UK Research Excellence Framework 2021. Unexpectedly, there was a positive correlation between expert judgements of article quality and average journal citation impact in all fields of science, although very weak in many fields and never strong. The strength of the correlation varied from 0.11 to 0.43 for the 27 broad fields of Scopus. The highest correlation for the 94 Scopus narrow fields with at least 750 articles was only 0.54, for Infectious Diseases, and there was only one negative correlation, for the mixed category Computer Science (all), probably due to the mixing. The average citation impact of a Scopus-indexed journal is therefore never completely irrelevant to the quality of an article but is also never a strong indicator of article quality. Since journal citation impact can at best moderately suggest article quality it should never be relied on for this, supporting the San Francisco Declaration on Research Assessment.
There is some follow-up discussion on this paper and its conclusions here.
The big problem, of course, is how you define “high-quality papers” and “high-quality journals”. As in the discussion above, this usually resolves itself into something to do with citation impact, which is problematic to start with; but if that’s the route you want to go down, there is now sufficient readily available article-level information for each paper that you don’t need any journal metrics at all. The academic journal industry won’t agree, of course, as it’s in their interest to perpetuate the falsehood that such rankings matter. The fact that the correlation between article “quality” measures and journal “quality” measures is weak does not surprise me. I think there are many weak papers that have passed peer review and appeared in high-profile journals. This is another reason for disregarding the journal entirely. Don’t judge the quality of an item by the wrapping, but by what’s inside it!
There is quite a lot of discussion in my own field of astrophysics about what the “leading journals” are. Different ranking methods produce different lists, not surprisingly given the arbitrariness of the methods used. According to this site, The Open Journal of Astrophysics ranks 4th out of 48 journals, but it doesn’t appear on some other lists because the academic publication industry, which acts as gatekeeper via Clarivate, does not seem to like its unconventional approach. According to Exaly, Monthly Notices of the Royal Astronomical Society (MNRAS) is ranked in 13th place, while according to this list, it is 14th. No disrespect to MNRAS, but I don’t see any objective justification for calling it “the leading journal in the field”.
The top-ranked journals in astronomy and astrophysics are generally review journals, which have always attracted lots of citations through references like “see Bloggs 2015 and references therein”. Many of these review articles are really excellent and contribute a great deal to their discipline, but it’s not obvious they can be compared with actual research papers. At OJAp we decided to allow review articles of sufficiently high quality because we see the journal primarily as a service to the community rather than a service to the bean-counters who make the rankings.
Now, back to the exams…