Archive for the Bad Statistics Category

Biosignature Hype

Posted in Astrohype, Bad Statistics, The Universe and Stuff on April 17, 2025 by telescoper

I was thinking just the other day that I haven’t posted much in either the Astrohype or the Bad Statistics folders on this blog. Well today I found an item that belongs in both categories. Many people will have seen the widespread press coverage of a misleading claim of the discovery of alien life; see, e.g., here. This misleading press coverage is based on a misleading press release from the University of Cambridge which you can find here.

The story is based on a paper in the pay-to-publish Astrophysical Journal Letters with the title “New Constraints on DMS and DMDS in the Atmosphere of K2-18 b from JWST MIRI”. The DMS and DMDS in the title refer to Dimethyl Sulphide and Dimethyl Disulphide respectively. These are interpreted by the authors as biosignatures.

There are two main problems with this claim. One is that DMS and DMDS are not necessarily biosignatures in the first place; see here for the reasons. The other is that there isn’t even any evidence for the detection of DMS or DMDS anyway. Here is the spectrum about which the lead author of the paper, Prof. Nikku Madhusudhan, claimed that “the signal came through loud and clear”:

Yeah, right. In statistical terms this is a non-detection. The Bayes Factor used in the paper to quantify the evidence for a model with DMS and/or DMDS over one without is just 2.62 in the logarithm. That’s not a detection by any stretch of the imagination; to be anywhere near convincing a Bayes Factor has to be at least 100. The subsequent cherry-picking of the data to improve the apparent probability of a detection is just statistical flummery.
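For those unfamiliar with the jargon: the Bayes Factor is the ratio of the probabilities of the data under the two competing models. Assuming (as the phrasing suggests) that the quoted 2.62 is a natural logarithm, the implied odds are modest:

$$
K = \frac{P(D \mid \text{DMS/DMDS})}{P(D \mid \text{no DMS/DMDS})}, \qquad \ln K = 2.62 \;\Rightarrow\; K = e^{2.62} \approx 14.
$$

On the usual Jeffreys scale this counts as no better than “moderate” evidence; the “decisive” threshold of $K > 100$ corresponds to $\ln K \gtrsim 4.6$, well beyond what is reported.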

Notice that the use of the phrase “Constraints on” in the title of the paper does not indicate that the article presents evidence that a detection has been made. That the claim has somehow morphed into “the strongest evidence for life beyond our solar system” is absurd. The most charitable thing I can say is that Prof. Madhusudhan must have been carried away by enthusiasm. This doesn’t reflect very well on Cambridge University either.

This episode worries me greatly. This is a time of increasing hostility towards science and this sort of thing can only make matters worse. Scientists need to be much more careful in communicating the uncertainties in their results.

UPDATE: There’s now a paper on arXiv here that argues that a straight line is a better fit to the data; in other words, there is no strong statistical evidence for spectral features at all.

Big Things in the Universe

Posted in Bad Statistics, The Universe and Stuff on February 7, 2025 by telescoper

About a year ago I wrote a couple of articles (here and here) in response to the discovery of a very large structure (“The Big Ring“) and claims that this structure and others – such as a Giant Arc – were inconsistent with the standard model of cosmology; the work concerned was later submitted as a preprint to arXiv. In my first post on the Big Ring I wrote

To assess the significance of the Big Ring or other structures in a proper scientific fashion, one has to calculate how probable that structure is given a model. We have a standard model that can be used for this purpose, but to simulate very large structures is not straightforward because it requires a lot of computing power even to simulate just the mass distribution. In this case one also has to understand how to embed Magnesium absorption too, something which may turn out to trace the mass in a very biased way. Moreover, one has to simulate the observational selection process too, so that one is making a fair comparison between observations and predictions.

Well, on today’s arXiv there is a preprint by Sawala et al. that aims to assess the significance of structures comparable to the Giant Arc. The title of the paper is The Emperor’s New Arc: gigaparsec patterns abound in a ΛCDM universe, from which you can guess the conclusions. The abstract is:

Recent discoveries of apparent large-scale features in the structure of the universe, extending over many hundreds of megaparsecs, have been claimed to contradict the large-scale isotropy and homogeneity foundational to the standard (ΛCDM) cosmological model. We explicitly test and refute this conjecture using FLAMINGO-10K, a new and very large cosmological simulation of the growth of structure in a ΛCDM context. Applying the same methods used in the observations, we show that patterns like the “Giant Arc”, supposedly in tension with the standard model, are, in fact, common and expected in a ΛCDM universe. We also show that their reported significant overdensities are an algorithmic artefact and unlikely to reflect any underlying structure.

arXiv:2502.03515

Here’s a picture of a large structure (a “Giant Arc”) taken from a gallery of such objects found in the simulations:


I quote from the conclusions:

We hope that our results will dispel the misconception that no inhomogeneity can be found in the standard model Universe beyond some finite size. Instead, any given realisation of the isotropic universe comprises a time- and scale-dependent population of structures from which patterns can be identified on any scale.

I have nothing to add.

Leaving Certificate Results

Posted in Bad Statistics, Covid-19, Education, Maynooth on August 23, 2024 by telescoper

Today’s the day that over 60,000 school students across Ireland are receiving their Leaving Certificate Results. As always there will be joy for some, and disappointment for others. The headline news relating to these results is that a majority (68%) of grades have been scaled up so that the distribution matches last year’s outcomes. This has meant an uplift of marks by about 7.5% on average, with the biggest changes happening at the lower grades.

This artificial boost is a consequence of the generous adjustments made during the pandemic and an apparent wish by the Education Minister, Norma Foley, to ensure that this year’s students are treated “fairly” compared to last year’s. Of course this argument could be made for continuing to inflate grades next year too, and the year after that. Perhaps the Minister’s plan is to keep the grades high until after the next General Election, after which it will be someone else’s job to treat students “unfairly”. Anyway, you might say that marks have been scaled to maintain a Norma Distribution…

One can’t blame the students, of course, but one of the effects of this scaling is that students will be coming into third-level education with grades that imply a greater level of achievement than they have actually reached. This is a particular problem with a subject like physics, where we really need students to be comfortable with certain aspects of mathematics before they start their course. It has been clear that even students with very good grades at Higher level have considerable gaps in their knowledge. This looks set to continue, and we will just have to deal with it. This issue was compounded for a while because Leaving Certificate grades were produced so late that first-year students had to start university a week late, giving less time for the remedial teaching that many of them needed. At least this year we won’t have that problem, so we can plan some activities early on in the new Semester.

Anyway, out of interest – probably mine rather than yours – I delved into the statistics of Leaving Certificate results going back six years for Mathematics (at both Higher and Ordinary level), Physics and Applied Mathematics, which I fished out of the general numbers given here.

Here are the results in a table, with the columns denoting the grade (1 = high) and the entries being percentages:

You can see that the percentage of students getting H1 in Mathematics has increased a bit to 12.6%, after falling considerably from 18.1% in 2022 to 11.2% last year (2023); note the huge increase in H1 from 2020 to 2021 (8.6% to 15.1%). Another thing worth noting is that both Physics and Applied Mathematics have declined significantly in popularity since 2019 from 7210.

Now that the results are out there will be a busy time until next Wednesday (28th) when the CAO first round offers go out. That is when those students wanting to go to university find out if they made the grades and university departments find out how many new students (if any) they will have to teach in September.

P.S. When I was a little kid we used to call a “Certificate” a “Stiff Ticket”. I just thought you would like to know that.

An Exercise in Bayesian Probability

Posted in Bad Statistics on August 20, 2024 by telescoper

A businessman is on a luxury yacht, celebrating his recent acquittal in a high-profile fraud trial, when the yacht sinks in mysterious circumstances off the coast of Sicily. The businessman is one of six people on board who are missing, presumed dead. Just last week, the businessman’s co-defendant in the aforementioned fraud trial died in a mysterious road accident while out running in Cambridgeshire.

Using Bayesian methods, calculate the probability of these two events being a coincidence. Show your working. To the police.

Update: An investigation into possible manslaughter has been opened by the authorities in Italy.

Irrationalism and Deductivism in Science

Posted in Bad Statistics, The Universe and Stuff on March 11, 2024 by telescoper

I thought I would use today’s post to share the above reading list, which was posted on the wall at the meeting I was at this weekend; it was only two days long and has now finished. Seeing the first book on the list, however, it seems a good idea to follow this up with a brief discussion – largely inspired by David Stove’s book – of some of the philosophical issues raised at the workshop.

It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of their probabilities. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making any such comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
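In symbols, writing $\theta$ for the parameters of interest and $\alpha$ for the nuisance parameters that encode the auxiliary assumptions, the recipe is just

$$
P(\theta \mid D) = \int P(\theta, \alpha \mid D) \, \mathrm{d}\alpha ,
$$

so the auxiliary assumptions are carried through the inference and integrated out at the end, rather than being ignored.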

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these thinkers have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed, on the one hand, to accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different, and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
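A standard textbook illustration of how a proper posterior emerges from an improper prior: take $n$ independent Gaussian measurements $x_i$ with known variance $\sigma^2$ and an unnormalizable uniform prior $\pi(\mu) \propto 1$ on the unknown mean. The posterior is

$$
p(\mu \mid x) \propto \exp\left[ -\frac{n(\mu - \bar{x})^2}{2\sigma^2} \right],
$$

a perfectly proper $\mathcal{N}(\bar{x}, \sigma^2/n)$ distribution; the data rescue the inference even though the prior assigned formally zero probability everywhere.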

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn is undoubtedly a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. But one can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules….

Broken Science Initiative

Posted in Bad Statistics on March 10, 2024 by telescoper

This weekend I find myself at an invitation-only event in Phoenix, Arizona, organized by the Broken Science Initiative and called The Broken Science Epistemology Camp. I flew here on Thursday and will be returning on Tuesday, so it’s a flying visit to the USA. I thank the organizers Greg Glassman and Emily Kaplan for inviting me. I wasn’t sure what to expect when I accepted the invitation to come, but I welcomed the chance to attend an event that’s a bit different from the usual academic conference. There are some suggestions here for background reading which you may find interesting.

Yesterday we had a series of wide-ranging talks about subjects such as probability and statistics, the philosophy of science, the problems besetting academic research, and so on. One of the speakers was the eminent psychologist Gerd Gigerenzer, whose talk addressed the use of p-values in statistics, the effects of bad statistical reasoning on the reporting of research results, and the wider issues this generates. You can find a paper covering many of the points raised by Gigerenzer here (PDF).

I’ve written about this before on this blog – see here for example – and I thought it might be useful to re-iterate some of the points here.

The p-value is a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under a “null hypothesis”. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05.
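To make the correlation example concrete, here is a minimal sketch (invented data, using only numpy) of how the null distribution of r, and hence the p-value, could be obtained by Monte Carlo: shuffling one variate destroys any real association, so repeated shuffles show how often a correlation at least as strong arises by chance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented bivariate sample, purely for illustration
n = 20
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)  # weak correlation built in

r_obs = np.corrcoef(x, y)[0, 1]   # sample correlation coefficient

# Null distribution of r: shuffling y breaks any association between
# the variates, simulating repeated sampling under the null hypothesis
r_null = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                   for _ in range(10_000)])

# Two-sided p-value: fraction of null samples at least as extreme as r_obs
p_value = np.mean(np.abs(r_null) >= abs(r_obs))
print(f"r = {r_obs:.3f}, p = {p_value:.3f}")
```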

Whatever the null hypothesis happens to be, the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that big under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.

But the p-value merely specifies the probability that you would reject the null hypothesis if it were correct. This is what you would call making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data or that some other hypothesis is needed. To make that sort of statement you would need to specify an alternative hypothesis, calculate the distribution based on it, and determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when the alternative hypothesis, rather than the null, is correct. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.
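In the conventional notation, with significance level $\alpha$ and power $1 - \beta$, the two error types above are:

$$
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}) \quad \text{(Type I)}, \qquad
\beta = P(\text{fail to reject } H_0 \mid H_1 \text{ true}) \quad \text{(Type II)}.
$$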

If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. It’s so bizarre, in fact, that I think most people who quote p-values have absolutely no idea what they really mean. Gerd Gigerenzer gave plenty of examples of this in his talk.

A Nature piece published some time ago argues that results quoted with a p-value of 0.05 turn out to be wrong about 25% of the time. There are a number of reasons why this could be the case, including that the p-value is being calculated incorrectly, perhaps because some assumption or other turns out not to be true. For instance, a widespread example is assuming that the variates concerned are normally distributed. Unquestioning application of off-the-shelf statistical methods in inappropriate situations is a serious problem in many disciplines, but is particularly prevalent in the social sciences, where samples are typically rather small.

The suggestion that this issue can be resolved by simply choosing stricter criteria, i.e. a p-value of 0.005 rather than 0.05, does not help, because the p-value is an answer to a question about what the hypothesis says about the probability of the data, which is quite different from the question a scientist would really want to ask, namely what the data have to say about a given hypothesis. Frequentist hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach, which does focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis. If I had my way I’d ban p-values altogether.
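The contrast is easy to state formally. A p-value and a posterior probability answer different questions:

$$
p = P(T \geq t_{\mathrm{obs}} \mid H_0)
\qquad \text{versus} \qquad
P(H_0 \mid D) = \frac{P(D \mid H_0)\, P(H_0)}{P(D \mid H_0)\, P(H_0) + P(D \mid H_1)\, P(H_1)} .
$$

The quantity on the left says nothing about the probability of the hypothesis without a prior and an explicit alternative; the quantity on the right is what a scientist usually actually wants.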

The p-value is just one example of a statistical device that is too often applied mechanically, without real understanding, as a black box, and which can be manipulated through data dredging (or “p-hacking”). Gerd Gigerenzer went on to bemoan the general use of “mindless statistics”, the prevalence of “statistical rituals”, and referred to much statistical reasoning as “a meaningless ordeal of pedantic computations”.

Bad statistics isn’t the only thing wrong with academic research, but it is a significant factor.

The Big Ring Circus

Posted in Astrohype, Bad Statistics, The Universe and Stuff on January 15, 2024 by telescoper

At the annual AAS Meeting in New Orleans last week there was an announcement of a result that made headlines in the media (see, e.g., here and here). There is also a press release from the University of Central Lancashire.

Here is a video of the press conference:

I was busy last week, so I didn’t have time to read the details and refrained from commenting on this issue at the time of the announcement. Now that I am back in circulation I have tried to find out more, but unfortunately I was unable to locate even a preprint describing this “discovery”. The press conference doesn’t contain much detail either, so it’s impossible to say anything much about the significance of the result, which is claimed (without explanation) to be 5.2σ (after “doing some statistics”). I see the “Big Ring” now has its own Wikipedia page, the only references on which are to press reports, not peer-reviewed scientific papers or even preprints.

So is this structure “so big it challenges our understanding of the universe”?

Based on the available information it is impossible to say. The large-scale structure of the Universe comprises a complex network of walls and filaments known as the cosmic web which I have written about numerous times on this blog. This structure is so vast and complicated that it is very easy to find strange shapes in it but very hard to determine whether or not they indicate anything other than an over-active imagination.

To assess the significance of the Big Ring or other structures in a proper scientific fashion, one has to calculate how probable that structure is given a model. We have a standard model that can be used for this purpose, but to simulate very large structures is not straightforward because it requires a lot of computing power even to simulate just the mass distribution. In this case one also has to understand how to embed Magnesium absorption too, something which may turn out to trace the mass in a very biased way. Moreover, one has to simulate the observational selection process too, so that one is making a fair comparison between observations and predictions.

I have seen no evidence that this has been done in this case. When it is, I’ll comment on the details. I’m not optimistic however, as the description given in the media accounts contains numerous falsehoods. For example, quoting the lead author:

The Cosmological Principle assumes that the part of the universe we can see is viewed as a ‘fair sample’ of what we expect the rest of the universe to be like. We expect matter to be evenly distributed everywhere in space when we view the universe on a large scale, so there should be no noticeable irregularities above a certain size.

https://www.uclan.ac.uk/news/big-ring-in-the-sky

This just isn’t correct. The standard cosmology has fluctuations on all scales. Although the fluctuation amplitude decreases with scale, there is no scale at which the Universe is completely smooth. See the discussion, for example, here. We can see correlations on very large angular scales in the cosmic microwave background which would be absent if the Universe were completely smooth on those scales. The observed structure is about 400 Mpc in size, which does not seem to me to be particularly impressive.
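One way to see this in the standard notation: the variance of mass fluctuations smoothed on a scale $R$ is

$$
\sigma^2(R) = \frac{1}{2\pi^2} \int_0^\infty P(k)\, \tilde{W}^2(kR)\, k^2 \, \mathrm{d}k ,
$$

where $P(k)$ is the matter power spectrum and $\tilde{W}$ a window function. This declines as $R$ increases, but it is positive for every finite $R$, so occasional large apparent structures are expected on all scales.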

I suspect that the 5.2σ figure mentioned above comes from some sort of comparison between the observed structure and a completely uniform background, in which case it is meaningless.

My main comment on this episode is that I think it’s very poor practice to go hunting headlines when there isn’t even a preprint describing the results. That’s not the sort of thing PhD supervisors should be allowing their PhD students to do. As I have mentioned before on this blog, there is an increasing tendency for university press offices to see themselves entirely as marketing agencies instead of informing and/or educating the public. Press releases about scientific research nowadays rarely make any attempt at accuracy – they are just designed to get the institution concerned into the headlines. In other words, research is just a marketing tool.

In the long run, this kind of media circus, driven by hype rather than science, does nobody any good.

P.S. I was going to joke that ring-like structures can be easily explained by circular reasoning, but decided not to.

How not to do data visualisation…

Posted in Bad Statistics on January 9, 2024 by telescoper

How many things are wrong about this graphic?

An Open Letter to the Times Higher World University Rankers

Posted in Bad Statistics, Education on September 20, 2023 by telescoper

Dear Rankers,

I note with interest that you have announced significant changes to the methodology deployed in the construction of this year’s forthcoming league tables. I would like to ask what steps you will take to make it clear that any changes in institutional “performance” (whatever that is supposed to mean) could well be explained simply by changes in the metrics and how they are combined?

I assume, as intelligent and responsible people, that you did the obvious test for this effect, i.e. constructed and published a parallel set of league tables, with this year’s input data but last year’s methodology, which would make it easy to isolate changes in methodology from changes in the performance indicators. This is a simple test that anyone with any scientific training would perform.

You have not done this on any of the previous occasions on which you have introduced changes in methodology. Perhaps this lamentable failure of process was the result of multiple oversights. Had you deliberately withheld evidence of the unreliability of your conclusions you would have left yourselves open to an accusation of gross dishonesty, which I am sure would be unfair.

Happily, however, there is a very easy way to allay the fears of the global university community that the world rankings are being manipulated. All you need to do is publish a set of league tables using the 2022 methodology and the 2023 data. Any difference between this table and the one you publish would then simply be an artefact of the change in methodology, and could be discounted accordingly.
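As a toy illustration of why this calibration matters (entirely invented scores and weights, just to show the mechanics), re-ranking identical input data under two different weighting schemes produces rank churn with no change in “performance” at all:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scores for 100 institutions on three metrics
scores = rng.uniform(0, 100, size=(100, 3))

w_old = np.array([0.5, 0.3, 0.2])  # stand-in for last year's weights
w_new = np.array([0.4, 0.2, 0.4])  # stand-in for this year's weights

def ranks(overall):
    # rank 0 = best; the double argsort converts scores into rank positions
    return np.argsort(np.argsort(-overall))

churn = np.abs(ranks(scores @ w_old) - ranks(scores @ w_new))
print("mean |rank change| from the weight change alone:", churn.mean())
```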

I’m sure you are as anxious as anyone else to prove that the changes this year are not simply artificially-induced “churn”, and I look forward to seeing the results of this straightforward calculation published in the Times Higher as soon as possible, preferably next week when you announce this year’s league tables.

I look forward to seeing your response to the above through the comments box, or elsewhere. As long as you fail to provide a calibration of the sort I have described, this year’s league tables will be even more meaningless than usual. Still, at least the Times Higher provides you with a platform from which you can apologize to the global academic community for wasting their time and that of others.

Never mind the points, look at the line!

Posted in Bad Statistics, Open Access, The Universe and Stuff on June 14, 2023 by telescoper

I was just thinking this morning that it’s been a while since I posted anything in my Bad Statistics folder, when suddenly I came across this gem from a paper in Nature Astronomy entitled Could quantum gravity slow down neutrinos?

The paper itself is behind a paywall (though a preprint version is on the arXiv here). The results in the paper were deemed so important that Nature Astronomy tweeted about them, including this remarkable graph:

Understandably there has been quite a lot of reaction from scientists on Twitter to this plot: questioning how the blue line is obtained from the dots (only the single point on the right appears to be responsible for the trend), remarking on the complete absence of error bars on either axis for any of the points, and above all wondering how this managed to get past a referee, never mind one for a “prestigious” journal such as Nature Astronomy. It wouldn’t have passed muster as an undergraduate exercise.
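The statistical point is easy to demonstrate with a sketch. The numbers below are invented to mimic the situation described: a cluster of points plus one point at much larger x, whose presence or absence almost entirely determines the fitted slope.

```python
import numpy as np

# Invented data: five clustered points and one high-leverage point
x = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 10.0])
y = np.array([0.50, 0.48, 0.52, 0.49, 0.51, 1.50])

slope_all, _ = np.polyfit(x, y, 1)             # fit including the outlier
slope_trim, _ = np.polyfit(x[:-1], y[:-1], 1)  # fit without it

print(f"slope with the high-leverage point:    {slope_all:+.4f}")
print(f"slope without the high-leverage point: {slope_trim:+.4f}")
```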

Of course this is how a proper astronomer would do it:

Joking aside, if you look at the paper (or the preprint if you can’t afford it) you will see another graph, which shows two other points at higher energy (red triangles):

The extra two points don’t have any error bars either, and according to the preprint they appear to be unconfirmed candidate GRB events.

The abstract of the paper is:

In addition to its implications for astrophysics, the hunt for neutrinos originating from gamma-ray bursts could also be significant in quantum-gravity research, as they are excellent probes of the microscopic fabric of spacetime. Some previous studies based on neutrinos observed by the IceCube observatory found intriguing preliminary evidence that some of them might be gamma-ray burst neutrinos whose travel times are affected by quantum properties of spacetime that would slow down some of the neutrinos while speeding up others. The IceCube collaboration recently significantly revised the estimates of the direction of observation of their neutrinos, and we here investigate how the corrected directional information affects the results of the previous quantum-spacetime-inspired analyses. We find that there is now little evidence for neutrinos being sped up by quantum spacetime properties, whereas the evidence for neutrinos being slowed down by quantum spacetime is even stronger than previously determined. Our most conservative estimates find a false-alarm probability of less than 1% for these ‘slow neutrinos’, providing motivation for future studies on larger data samples.

I agree with the last sentence, where it says larger data samples are needed in future, but I’d suggest that higher standards of data analysis are also called for. Not to mention refereeing. After all, it’s the quality of the reviewing that you pay for, isn’t it?

P.S. For those of you wondering, this paper would not have been published by the Open Journal of Astrophysics even if it had passed review, as it is not on the astro-ph section of arXiv (it’s on gr-qc).