Citation Metrics and “Judging People’s Careers”

There’s a paper on the arXiv by John Kormendy entitled Metrics of research impact in astronomy: Predicting later impact from metrics measured 10-15 years after the PhD. The abstract is as follows.

This paper calibrates how metrics derivable from the SAO/NASA Astrophysics Data System can be used to estimate the future impact of astronomy research careers and thereby to inform decisions on resource allocation such as job hires and tenure decisions. Three metrics are used, citations of refereed papers, citations of all publications normalized by the numbers of co-authors, and citations of all first-author papers. Each is individually calibrated as an impact predictor in the book Kormendy (2020), “Metrics of Research Impact in Astronomy” (Publ Astron Soc Pac, San Francisco). How this is done is reviewed in the first half of this paper. Then, I show that averaging results from three metrics produces more accurate predictions. Average prediction machines are constructed for different cohorts of 1990-2007 PhDs and used to postdict 2017 impact from metrics measured 10, 12, and 15 years after the PhD. The time span over which prediction is made ranges from 0 years for 2007 PhDs to 17 years for 1990 PhDs using metrics measured 10 years after the PhD. Calibration is based on perceived 2017 impact as voted by 22 experienced astronomers for 510 faculty members at 17 highly-ranked university astronomy departments world-wide. Prediction machinery reproduces voted impact estimates with an RMS uncertainty of 1/8 of the dynamic range for people in the study sample. The aim of this work is to lend some of the rigor that is normally used in scientific research to the difficult and subjective job of judging people’s careers.
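In outline the machinery is simple: each metric is calibrated separately against the experts’ votes, and the three single-metric predictions are then averaged. Here is a minimal sketch of that idea, assuming linear calibrations and entirely fabricated data; the actual calibrations are in Kormendy’s 2020 book and nothing below is taken from it.

    import numpy as np

    # Sketch of an "average prediction machine": calibrate each metric
    # separately against voted impact, then average the three predictions.
    # All data and functional forms here are invented for illustration.

    def calibrate(metric, voted_impact):
        """Least-squares linear calibration: metric value -> predicted impact."""
        slope, intercept = np.polyfit(metric, voted_impact, 1)
        return lambda m: slope * m + intercept

    # Fabricated stand-in for the calibration sample of faculty members.
    rng = np.random.default_rng(0)
    voted = rng.uniform(0.0, 10.0, 100)           # voted 2017 impact
    m1 = 50 * voted + rng.normal(0, 40, 100)      # citations of refereed papers
    m2 = 20 * voted + rng.normal(0, 20, 100)      # co-author-normalized citations
    m3 = 30 * voted + rng.normal(0, 30, 100)      # citations of first-author papers

    predictors = [calibrate(m, voted) for m in (m1, m2, m3)]

    def predict(metrics):
        """Average the three single-metric impact predictions."""
        return float(np.mean([p(m) for p, m in zip(predictors, metrics)]))

    print(predict((250.0, 100.0, 150.0)))         # one person's three metrics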

This paper has understandably generated a considerable reaction on social media, especially from early-career researchers dismayed at how senior astronomers apparently think they should be judged. Presumably “judging people’s careers” means deciding whether or not they should get tenure (or the equivalent), although the phrase is not a pleasant one to use.

My own opinion is that while citations and other bibliometric indicators do contain some information, they are extremely difficult to apply in the modern era, in which so many high-impact results are generated by large international teams. Note also the extreme selectivity of this exercise: just 22 “experienced astronomers” provide the “calibration”, which covers faculty in just 17 “highly-ranked” university astronomy departments. No possibility of any bias there, obviously. Subjectivity doesn’t turn into objectivity just because you make it quantitative.
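To see the team-size problem concretely, here is a toy example with invented numbers: raw citation totals and co-author-normalized totals can rank the same two researchers in opposite orders.

    # Invented example: raw vs co-author-normalized citation counts
    # can rank the same two researchers in opposite orders.

    papers_A = [(5000, 1000)]          # (citations, n_authors): one big-team paper
    papers_B = [(150, 2), (120, 3)]    # two small-team papers

    def raw_citations(papers):
        return sum(c for c, _ in papers)

    def normalized_citations(papers):
        return sum(c / n for c, n in papers)

    print(raw_citations(papers_A), raw_citations(papers_B))                # 5000 vs 270
    print(normalized_citations(papers_A), normalized_citations(papers_B))  # 5.0 vs 115.0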

If you’re interested, here are the names of the 22:

Note that the author of the paper is himself on the list. I find that deeply inappropriate.

Anyway, the overall level of statistical gibberish in this paper is such that I am amazed it has been accepted for publication, but then it is in the Proceedings of the National Academy of Sciences, a journal that has form when it comes to dodgy statistics. If I understand correctly, PNAS has a route that allows “senior” authors to publish papers without passing through peer review. That’s the only explanation I can think of for this.

As a rejoinder I’d like to mention this paper by Adler et al. from 12 years ago, which has the following abstract:

This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using “simple and objective” methods is increasingly prevalent today. The “simple and objective” methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.

O brave new world that has such metrics in it.

Update: John Kormendy has now withdrawn the paper; you can see his statement here.

7 Responses to “Citation Metrics and “Judging People’s Careers””

  1. It’s good to see that the whole topic of solar physics has one representative on that list.

    Disciplines have widely different citation rates, and so do sub-disciplines and even sub-sub-disciplines. And, as you note, many papers are now produced by large teams. I am particularly disturbed by the comment –

    …The aim of this work is to lend some of the rigor that is normally used in scientific research to the difficult and subjective job of judging people’s careers….

    Makes advertising and appointing new staff easy – just run the applications through an algorithm….

  2. I believe even the “contributed” papers by NAS members go through some kind of review at PNAS, but if I remember correctly, in that case the reviewers can be chosen by the author(s).

    I’m not surprised that Kormendy was able to find reviewers who accepted his paper. Unfortunately, astrophysics seems to suffer from pointless citation counting even worse than many other fields of science.

    • Though I’m surprised by the list of “experienced astronomers” who agreed to take part in this exercise. I would have thought some of them would have known better than to do that.

    • Yes, NAS members can submit papers to PNAS, which are then reviewed by an NAS member. I’m not sure the reviewer is chosen by the author – as I recall, the author could suggest some suitable names, but that was all. (I have published in PNAS with an NAS co-author.) Authors can also suggest possible reviewers for other journals.

  3. John Peacock Says:

    The Adler et al. paper implies that it is somehow immoral to reduce the complicated record of an individual to a number. But in a recruitment exercise the relevant number is the ranking of the applicants, and whether or not you end up ranked as no. 1 matters. So we always reduce people to numbers, and the question is whether citation data can at least speed up this process – and ideally do so in a way that reduces errors from the bias and/or ignorance of a single reviewer.

    I’ve never understood the strongly negative reaction that some people (including you, Peter, it seems?) have to citation data. Perhaps if, like me, you had sat on a REF panel spending much time reading and grading physics papers in a rather noisy way, you would have wondered if a more reliable assessment of the quality of the researchers concerned could be derived from citations. I am in no doubt that something better than REF could be produced in this way with <1% of the effort. There is undoubtedly information in citation statistics, and although those numbers can be used in stupid and naive ways, we should be clever enough to devise methods to compensate for that.

    To this extent, I applaud what Kormendy has tried to do. It's a bit like machine learning: the judgements of Blandford et al. are used as training data for an algorithm, and the algorithm seems to work. You might question whether the initial data are appropriate, but unfortunately people have always got jobs or not according to whether or not senior people like them. So if you don't like the detail of Kormendy's approach, it would help to know how you think he should have gone about it, rather than implying that no such exercise could ever have value.
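    For concreteness, the “training data” reading sketched above amounts to regressing voted impact on citation metrics. A minimal sketch, assuming ordinary least squares; none of the features, numbers, or model choices come from Kormendy’s paper.

        import numpy as np

        # Fit expert-voted impact as a linear function of citation metrics,
        # then judge the fit by its RMS residual (the analogue of the paper's
        # quoted "1/8 of the dynamic range"). Everything here is invented.

        rng = np.random.default_rng(1)
        n = 510                                   # size of the calibration sample
        X = rng.uniform(0.0, 1.0, (n, 3))         # three rescaled citation metrics
        votes = X @ np.array([4.0, 3.0, 3.0]) + rng.normal(0, 0.8, n)

        A = np.column_stack([X, np.ones(n)])      # add an intercept column
        coeffs, *_ = np.linalg.lstsq(A, votes, rcond=None)

        residuals = votes - A @ coeffs
        print("RMS residual:", residuals.std())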

    • telescoper Says:

      I would say (and did say in the post) that citations contain some information, but the information they contain concerns a paper, not an individual. In the REF you have to grade “outputs”, not individuals. What is wrong is trying to condense everything about an individual into a metric built in this way. When I’ve been involved in recruitment we have always used citation information, but only alongside other criteria.

  4. Carole Mundell Says:

    Some related reading:

    ‘Gaming the Metrics’
    https://mitpress.mit.edu/books/gaming-metrics

    ‘Is publishing in the chemical sciences gender biased?’
    https://www.rsc.org/new-perspectives/talent/gender-bias-in-publishing/

    ‘Growing Citation Gender Gap’
    https://www.nature.com/articles/s42254-020-0207-3

    ‘Study suggests 5 ways to increase your citation counts’
    https://www.natureindex.com/news-blog/studies-research-five-ways-increase-citation-counts
