Archive for the Astrohype Category

Biosignature Hype

Posted in Astrohype, Bad Statistics, The Universe and Stuff with tags , , , , on April 17, 2025 by telescoper

I was thinking just the other day that I haven’t posted much in either the Astrohype or the Bad Statistics folders on this blog. Well today I found an item that belongs in both categories. Many people will have seen the widespread press coverage of a misleading claim of the discovery of alien life; see, e.g., here. This misleading press coverage is based on a misleading press release from the University of Cambridge which you can find here.

The story is based on a paper in the pay-to-publish Astrophysical Journal Letters with the title “New Constraints on DMS and DMDS in the Atmosphere of K2-18 b from JWST MIRI”. The DMS and DMDS in the title refer to Dimethyl Sulphide and Dimethyl Disulphide respectively. These are interpreted by the authors as biosignatures.

There are two main problems with this claim. One is that DMS and DMDS are not necessarily biosignatures in the first place; see here for the reasons. The other is that there isn’t even any evidence for the detection of DMS or DMDS anyway. Here is the spectrum about which the lead author of the paper, Prof. Nikku Madhusudhan, claimed that “the signal came through loud and clear”.

Yeah, right. In statistical terms this is a non-detection. The Bayes Factor used in the paper to quantify the evidence for a model with DMS and/or DMDS over one without is just 2.62 in the logarithm. That’s not a detection by any stretch of the imagination; to be anywhere near convincing a Bayes Factor has to be at least 100. The subsequent cherry-picking of the data to improve the apparent probability of a detection is just statistical flummery.
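
For anyone who wants to check the arithmetic, here is a little Python sketch (my own illustration, assuming the quoted 2.62 is a natural logarithm, as is conventional for the nested-sampling codes used in this kind of analysis):

```python
import math

# Quoted log-evidence ratio for the model with DMS/DMDS over one without
ln_B = 2.62                 # assuming this is the natural logarithm
B = math.exp(ln_B)
print(f"Bayes factor B = {B:.1f}")             # ~13.7

# A convincing detection would want a Bayes factor of at least ~100
print("Anywhere near convincing?", B >= 100)   # False
```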

Notice that the use of the phrase “Constraints on” in the title of the paper does not indicate that the article presents evidence that a detection has been made. That the claim has somehow morphed into “the strongest evidence for life beyond our solar system” is absurd. The most charitable thing I can say is that Prof. Madhusudhan must have been carried away by enthusiasm. This doesn’t reflect very well on Cambridge University either.

This episode worries me greatly. This is a time of increasing hostility towards science and this sort of thing can only make matters worse. Scientists need to be much more careful in communicating the uncertainties in their results.

UPDATE: There’s now a paper on arXiv here that argues that a straight line is a better fit to the data; in other words, there is no strong statistical evidence for spectral features at all.

Cosmology Results from DESI

Posted in Astrohype, The Universe and Stuff with tags , , , , , , on March 20, 2025 by telescoper

Yesterday evening (10pm Irish Time) saw the release of new results from the Dark Energy Spectroscopic Instrument (DESI), completing a trio of major announcements of cosmological results in the space of two days (the Atacama Cosmology Telescope and the Euclid Q1 release being the others). I didn’t see the DESI press conference but you can read the press release here.

There were no fewer than eight DESI papers on the astro-ph section of the arXiv this morning. Here are the titles with links:

You can see from the titles that the first seven of these relate to the second data release (DR2; three years of data) from DESI; the last one listed here is a description of the first data release (DR1), which is now publicly available.

Obviously there is a lot of information to digest in these papers so here are two members of the DESI collaboration talking with Shaun Hotchkiss on Cosmology Talks about the key messages from the analysis of Baryon Acoustic Oscillations (the BAO in the titles of the new papers):

A lot has been made in the press coverage of these results about the evidence that the standard cosmological model is incomplete; see, e.g., here. Here are a few comments.

As I see it, taken on their own, the DESI BAO results are broadly consistent with the ΛCDM model as specified by the parameters determined by the Cosmic Microwave Background (CMB) inferred from Planck. Issues do emerge, however, when these results are combined with other data sets. The most intriguing of these arises with the dark energy contribution. The simplest interpretation of dark energy is that it is a cosmological constant (usually called Λ) which – as explained here – corresponds to a perfect fluid with an equation of state p = wρc² with w = −1. In this case the effective mass density of the dark energy ρ remains constant as the universe expands. To parametrise departures from this constant behaviour, cosmologists have replaced this form with the form w(a) = w₀ + wₐ(1 − a), where a(t) is the cosmic scale factor. A cosmological constant Λ would correspond to a point (w₀ = −1, wₐ = 0) in the plane defined by these parameters, but the only requirement for dark energy to result in cosmic acceleration is that w < −1/3, not that w = −1.
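
Solving the continuity equation with this form of w(a) gives the dark energy density as a function of scale factor, ρ(a) ∝ a^(−3(1 + w₀ + wₐ)) exp(−3wₐ(1 − a)). Here is a minimal sketch in Python (the function name is mine; this is an illustration, not the DESI pipeline):

```python
import numpy as np

def rho_de(a, w0=-1.0, wa=0.0):
    """Dark energy density relative to today (a = 1) for
    w(a) = w0 + wa*(1 - a), from the continuity equation."""
    return a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

a = np.array([0.5, 1.0, 2.0])            # past, today, future
print(rho_de(a))                         # cosmological constant: 1, 1, 1
print(rho_de(a, w0=-0.7, wa=-1.0))       # roughly the region the fits prefer
```

For the second set of parameters the density decreases with time, which is the qualitative behaviour referred to below.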

The DESI team allow (w₀, wₐ) to act as free parameters and let the DESI data constrain them, either alone or in combination with other data sets, finding evidence for departures from the “standard values”. Here’s an example plot:

The DESI data don’t include the standard point (at the intersection of the two dashed lines) but the discrepancy gets worse when other data (such as supernovae and CMB) are folded in, as in this picture. The weight of evidence suggests a dark energy contribution which is decreasing with time.

These results are certainly intriguing, and a lot of credit is due to the DESI collaboration for working so hard to identify and remove possible systematics in the analysis (see the papers above) but what do they tell us about ΛCDM?

My view is that we’ve never known what the dark energy actually is or why it is so large that it represents 70% of the overall energy density of the Universe. The Λ in ΛCDM is really just a place-holder, not there for any compelling physical reason but because it is the simplest way of accounting for the observations. In other words, it’s what it is because of Occam’s Razor and nothing more. As with any working hypothesis, the standard cosmological model will get updated whenever new information comes to light (as it is doing now) and/or if we get new physical insights into the origin of dark energy.

Do the latest observations cast doubt on the standard model? I’d say no. We’re seeing an evolutionary change from “We have no idea what the dark energy is but we think it might be a cosmological constant” to “We still have no idea what the dark energy is but we think it might not be a cosmological constant”.

Timescape versus Dark Energy?

Posted in Astrohype, Open Access, The Universe and Stuff with tags , , , , , , , on January 2, 2025 by telescoper

Just before the Christmas break I noticed a considerable amount of press coverage claiming that Dark Energy doesn’t exist. Much of the media discussion is closely based on a press release produced by the Royal Astronomical Society. Despite the excessive hype, and consequent initial scepticism, I think the paper has some merit and raises some interesting issues.

The main focus of the discussion is a paper (available on arXiv here) by Seifert et al. with the title Supernovae evidence for foundational change to cosmological models. This paper is accompanied by a longer article called Cosmological foundations revisited with Pantheon+ (also available on arXiv) by a permutation of the same authors, which goes into more detail about the analysis of supernova observations. If you want some background, the “standard” Pantheon+ supernova analysis is described in this paper. The reanalysis presented in the recent papers is motivated by an idea called the Timescape model, which is not new. It was discussed by David Wiltshire (one of the authors of the recent papers) in 2007 here and in a number of subsequent papers; there’s also a long review article by Wiltshire here (dated 2013).

So what’s all the fuss about?

Simulation of the Cosmic Web

In the standard cosmological model we assume that, when sufficiently coarse-grained, the Universe obeys the Cosmological Principle, i.e. that it is homogeneous and isotropic. This implies that the space-time is described by a Friedmann–Lemaître–Robertson–Walker (FLRW) metric. Of course we know that the Universe is not exactly smooth. There is a complex cosmic web of galaxies, filaments, clusters, and giant voids which comprise the large-scale structure of the Universe. In the standard cosmological model these fluctuations are treated as small perturbations on a smooth background which evolve linearly on large scales and don’t have a significant effect on the global evolution of the Universe.
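
For reference (my own summary, not taken from the papers under discussion), the FLRW line element takes the form

ds² = −c²dt² + a²(t)[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)],

where a(t) is the cosmic scale factor and k the spatial curvature. The entire dynamics of such a universe is encoded in the single function a(t); it is precisely this simplification that the Timescape model gives up.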

This standard model is very successful in accounting for many things but only at the expense of introducing dark energy whose origin is uncertain but which accounts for about 70% of the energy density of the Universe. Among other things, this accounts for the apparent acceleration of the Universe inferred from supernovae measurements.

The standard cosmology’s energy budget

The approach taken in the Timescape model is to dispense with the FLRW metric, and the idea of separating the global evolution from the inhomogeneities. The idea instead is that the cosmic structure is essentially non-linear, so there is no “background metric”. In this model, cosmological observations cannot be analysed within the standard framework, which relies on the FLRW assumption; hence the need to reanalyse the supernova data. The name Timescape refers to the presence of significant gravitational time-dilation effects in this model, as distinct from the standard model.

I wrote before in the context of a different paper:

….the supernovae measurements do not directly measure cosmic acceleration. If one tries to account for them with a model based on Einstein’s general relativity and the assumption that the Universe is, on large scales, homogeneous and isotropic, with certain kinds of matter and energy, then the observations do imply a universe that accelerates. Any or all of those assumptions may be violated (though some possibilities are quite heavily constrained). In short we could, at least in principle, simply be interpreting these measurements within the wrong framework…

So what to make of the latest papers? I have to admit that I didn’t follow all the steps of the supernova reanalysis. I hope an expert can comment on this! I will therefore restrict myself to some general comments.

  • My attitude to the standard cosmological model is that it is simply a working hypothesis and we should not elevate it to a status any higher than that. It is based not only on the Cosmological Principle (which could be false), but on the universal applicability of general relativity (which might not be true), and on a number of other assumptions that might not be true either.
  • It is important to recognize that one of the reasons that the standard cosmology is the front-runner is that it provides a framework that enables relatively straightforward prediction and interpretation of cosmological measurements. That goes not only for supernova measurements but also for the cosmic microwave background, galaxy clustering, gravitational lensing, and so on. This is much harder to do accurately in the Timescape model simply because the equations involved are much more complex; there are few exact solutions of Einstein’s equations that can help. It is important that people work on alternatives such as this.
  • The idea that inhomogeneities might be much more important than assumed in the standard model has been discussed extensively in the literature over the last twenty years or so under the heading “backreaction”. My interpretation of the current state of play is that there are many unresolved questions, largely because of technical difficulties. See, for example, work by Thomas Buchert (here and, with many other collaborators, here) and papers by Green & Wald (here and here). Nick Kaiser also wrote about it here.
  • The new papers under discussion focus entirely on supernovae measurements. It must be recognized that these provide just one of the pillars supporting the standard cosmology. Over the years, many alternative models have been suggested that claim to “fix” some alleged problem with cosmology only to find that they make other issues worse. That’s not a reason to ignore departures from the standard framework, but it is an indication that we have a huge amount of data and we’re not allowed to cherry-pick what we want. We have to fit it all. The strongest evidence in favour of the FLRW framework actually comes from the cosmic microwave background (CMB), with the supernovae providing corroboration. I would need to see a detailed prediction of the anisotropy of the CMB before being convinced.
  • The Timescape model is largely based on the non-linear expansion of cosmic voids. These are undoubtedly important, and there has been considerable observational and theoretical activity in understanding them and their evolution in the standard model. It is not at all obvious to me that the voids invoked to explain the apparent acceleration of the Universe are consistent with what we actually see in our surveys. That is something else to test.
  • Finally, the standard cosmology includes a prescription for the initial conditions from which the present inhomogeneities grew. Where does the cosmic web come from in the Timescape model?

Anyway, I’m sure there’ll be a lot of discussion of this in the next few weeks as cosmologists return to the Universe from their Christmas holidays!

Comments are welcome through the box below, especially from people who have managed to understand the papers in more detail than I have!

Big Ring Questions and Answers

Posted in Astrohype, The Universe and Stuff with tags , , , , , on February 14, 2024 by telescoper

A month ago I wrote a piece about observations of an apparent “Big Ring” of absorption systems that was claimed to be inconsistent with the Cosmological Principle and hence with the standard cosmological model. At the time there was no paper describing the results, but a preprint has now appeared on arXiv. I haven’t read it carefully yet, but at a cursory reading it confirms my prior expectation that it does not contain a comparison of the observations with predictions of the standard model. I’ll say more after I’ve had a chance to digest the paper.

One of the things that irked me at the time of the announcement of this “discovery” was that there was no way to scrutinize the claims because they hadn’t been written up. Another was that the media covering the Big Ring did not appear to want to present balancing opinions.

An exception was the Danish journalist Peter Harmsen, who writes for the weekly broadsheet Weekendavisen and who asked me for an interview after seeing my sceptical blog post. The results appeared in an article that came out yesterday (13th February). It’s behind a paywall but here’s a screengrab to give you an idea (if you can read Danish):

The word “store” in Danish means “big” or “large”; it comes up quite often if you want to buy a beer in Denmark. The key quote of mine is

Det er meget dårlig stil at fremsætte resultater i offentlige fora, uden at de er nedfældet skriftligt

Weekendavisen, 13th February 2024

I actually kept a transcript of the interview, which I thought might be useful to share here in the form of questions and answers. You will find the original English version of the above quote in my response to the last question.

Fundamentally, do you think that the cosmological principle still stands or is in need of adjustment or even replacement?

The Cosmological Principle, in the form used in the standard cosmological model, requires the Universe to be sufficiently homogeneous and isotropic on large scales that its behaviour can be described by relatively simple solutions of Einstein’s equations called the Friedmann equations. We know the Universe is not exactly homogeneous and isotropic, and the standard model actually predicts fluctuations on rather large scales that do not violate it. Of course the part of the Universe we have actually observed directly is relatively small, but as I see it there is no compelling evidence that the Cosmological Principle is violated.
Specifically regarding the research on the so-called Big Ring, is the jury still out on whether the people behind the research are on to something, pending publication of a peer-reviewed paper, or is it your assessment, based on what has been made public so far, that it is probably not the breakthrough that it has been made out to be in some reports?

I am sceptical of the claims made about the Big Ring because there is no scientific paper describing the result. Based on what I have seen, however, just like other claims of arcs and filaments, the structure described does not seem to be on a sufficiently large scale to violate the cosmological principle. A careful comparison of the results with simulations would be required to draw more definite conclusions. I am not aware that the authors have done that.
The PhD student credited with the research is quoted in the Financial Times as making the following remark: “Lots of people are excited but, having said that, you do get this [resistant] attitude in cosmology that you don’t generally find elsewhere in science… Good science should be about pushing back and testing our fundamental assumptions but there are clearly people who want to protect the Standard Model.” What is your comment on this? Is cosmology stifled by a scientific community resistant to change?

Science is – or should be – based on evidence. In my view the weight of evidence supporting the standard model is substantial, but that does not mean that it is proven to be true; it is a working hypothesis. If anyone does come up with evidence that shows it to be wrong then that would be the most exciting thing possible. I don’t see such evidence here. There are of course many people working on alternative theories, for example involving different forms of gravitational theory. I’d say cosmologists are very open to such ideas. Indeed we know that the standard model is incomplete and will eventually be replaced by a more complete theory. That has to be driven by evidence.
You describe in your blog “an increasing tendency for university press offices to see themselves entirely as marketing agencies.” Have there been other recent examples of universities being a little too eager to sell their scientific advances to the public?

There’s quite a lot of this about, and I have to say that scientists, sadly, are often willing participants. A famous example from some years ago was the BICEP2 “discovery” concerning the cosmic microwave background, which made headlines around the world but was later shown to be false. More recently there have been many claims that very distant galaxies observed with JWST are incompatible with the standard cosmology. In that case some of the observations turned out to be incorrect and the theoretical interpretation misleading. Very high redshift galaxies would indeed be difficult to account for in the standard model, but we haven’t seen enough evidence yet.
The narrative of a young scholar proposing revolutionary new ideas despite resistance from established science seems to resonate with the public and has echoes of Galilei and Darwin. Are we, the lay public, too easy victims of such dramatic story-telling, and does it give us a wrong idea about how science actually works?

I think the public don’t really understand how science works, for a number of reasons. I think many people expect scientists to be certain about things, when really it’s about dealing with statistical evidence in as careful and rational a way as possible. Earlier you asked me about the Cosmological Principle. If you asked me if the Cosmological Principle is valid I would answer “I don’t know, but as a working hypothesis it accounts very well for the reliable data”. That sort of statement, however, does not make headlines. A significant problem is that extravagant unsubstantiated claims make headlines, but subsequent retractions don’t. This presents a very misleading picture to the public.
In your blog, you write that headline-hunting without the presence of even a pre-print is “not the sort of thing PhD supervisors should be allowing their PhD students to do.” Is it because it is harmful to science as a whole, or because there is a risk of derailing a young scientist’s career before it has even begun due to an early debacle?

My objection is more that I think it is very bad form to present in public results which have not even been written up, let alone subjected to proper peer review. It’s essential for science that this happens, so that the claims can be properly evaluated by experts in the field. Bypassing this is potentially extremely damaging to the proper public understanding of this subject.
Q&A about the Big Ring

The Big Ring Circus

Posted in Astrohype, Bad Statistics, The Universe and Stuff with tags , , , , on January 15, 2024 by telescoper

At the annual AAS Meeting in New Orleans last week there was an announcement of a result that made headlines in the media (see, e.g., here and here). There is also a press release from the University of Central Lancashire.

Here is a video of the press conference:

I was busy last week and didn’t have time to read the details, so I refrained from commenting on this issue at the time of the announcement. Now that I am back in circulation I have time to read the details, but unfortunately I was unable to find even a preprint describing this “discovery”. The press conference doesn’t contain much detail either, so it’s impossible to say anything much about the significance of the result, which is claimed (without explanation) to be 5.2σ (after “doing some statistics”). I see the “Big Ring” now has its own Wikipedia page, the only references on which are to press reports, not peer-reviewed scientific papers or even preprints.

So is this structure “so big it challenges our understanding of the universe”?

Based on the available information it is impossible to say. The large-scale structure of the Universe comprises a complex network of walls and filaments known as the cosmic web which I have written about numerous times on this blog. This structure is so vast and complicated that it is very easy to find strange shapes in it but very hard to determine whether or not they indicate anything other than an over-active imagination.

To assess the significance of the Big Ring or other structures in a proper scientific fashion, one has to calculate how probable that structure is given a model. We have a standard model that can be used for this purpose, but simulating very large structures is not straightforward because it requires a lot of computing power even to simulate just the mass distribution. In this case one also has to understand how to embed the Magnesium absorption, something which may turn out to trace the mass in a very biased way. Moreover, one has to simulate the observational selection process too, so that one is making a fair comparison between observations and predictions.

I have seen no evidence that this has been done in this case. When it is, I’ll comment on the details. I’m not optimistic however, as the description given in the media accounts contains numerous falsehoods. For example, quoting the lead author:

The Cosmological Principle assumes that the part of the universe we can see is viewed as a ‘fair sample’ of what we expect the rest of the universe to be like. We expect matter to be evenly distributed everywhere in space when we view the universe on a large scale, so there should be no noticeable irregularities above a certain size.

https://www.uclan.ac.uk/news/big-ring-in-the-sky

This just isn’t correct. The standard cosmology has fluctuations on all scales. Although the fluctuation amplitude decreases with scale, there is no scale at which the Universe is completely smooth. See the discussion, for example, here. We can see correlations on very large angular scales in the cosmic microwave background which would be absent if the Universe were completely smooth on those scales. The observed structure is about 400 Mpc in size, which does not seem to me to be particularly impressive.
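
To illustrate the point that the amplitude falls with scale without ever vanishing, here is a toy calculation (a power-law spectrum chosen by me for simplicity, not the real ΛCDM one):

```python
import numpy as np

def sigma(R, n=1.0, sigma8=0.8):
    """rms mass fluctuation in spheres of radius R (Mpc/h) for a toy
    power-law spectrum P(k) ~ k^n, normalised to sigma8 at 8 Mpc/h.
    Scales as R**(-(n+3)/2): decreasing with R, but never zero."""
    return sigma8 * (R / 8.0) ** (-(n + 3.0) / 2.0)

for R in [8, 100, 400]:
    print(f"R = {R:3d} Mpc/h  ->  sigma ~ {sigma(R):.5f}")
```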

I suspect that the 5.2σ figure mentioned above comes from some sort of comparison between the observed structure and a completely uniform background, in which case it is meaningless.
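
For scale, here is what a 5.2σ Gaussian significance corresponds to as a tail probability (the conversion is mine; no derivation at all was given in the announcement):

```python
from scipy.stats import norm

# One-sided tail probability of a 5.2 sigma Gaussian fluctuation
p = norm.sf(5.2)
print(f"p ~ {p:.1e}")      # ~1.0e-07
```

The point, of course, is that this number is only meaningful relative to the null hypothesis under which it was computed.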

My main comment on this episode is that I think it’s very poor practice to go hunting headlines when there isn’t even a preprint describing the results. That’s not the sort of thing PhD supervisors should be allowing their PhD students to do. As I have mentioned before on this blog, there is an increasing tendency for university press offices to see themselves entirely as marketing agencies instead of informing and/or educating the public. Press releases about scientific research nowadays rarely make any attempt at accuracy – they are just designed to get the institution concerned into the headlines. In other words, research is just a marketing tool.

In the long run, this kind of media circus, driven by hype rather than science, does nobody any good.

P.S. I was going to joke that ring-like structures can be easily explained by circular reasoning, but decided not to.

NANOGrav Newsflash!

Posted in Astrohype, The Universe and Stuff with tags , , , , , on June 29, 2023 by telescoper

In a post earlier this week I wrote that

There is a big announcement scheduled for Thursday by the NANOGrav collaboration. I don’t know what is on the agenda, but I suspect it may be the detection of a stochastic gravitational wave background using pulsar timing measurements. I may of course be quite wrong about that, but will blog about it anyway.

The press conference is not until 1pm EDT (6pm Irish Time) but the papers have already arrived and it appears I was correct in my inference. The papers can be found here, along with a summary. The main results paper is entitled The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background.

In a nutshell, this evidence differs from the direct detection of gravitational waves by interferometric experiments, such as Advanced LIGO, in that: (a) it does not detect individual sources but an integrated background produced by many sources; (b) it is sensitive to much longer gravitational waves (with wavelengths measured in light-years rather than kilometres); and (c) the statistical evidence for the detection is far less clear-cut.
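
On point (b), the arithmetic is simple enough (my own illustration):

```python
# A gravitational wave with a period of decades has a nanohertz frequency
# and a wavelength (lambda = c/f) measured in light-years.
SECONDS_PER_YEAR = 3.156e7

f = 1e-9                                               # 1 nHz
wavelength_light_years = (1.0 / f) / SECONDS_PER_YEAR  # since lambda/c = 1/f
print(f"wavelength ~ {wavelength_light_years:.0f} light-years")   # ~32
```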

While Advanced LIGO can – and does – detect gravitational waves from mergers of stellar mass black holes, the NANOGrav signal would correspond to similar events involving much more massive objects – supermassive black holes (SMBHs) – with masses exceeding a million times the mass of the Sun, such as the one found in the Galactic Centre. If this is the right interpretation, the signal will provide important information about how many such mergers are happening across the Universe and hence about the formation of such objects and their host galaxies.

SMBH mergers are not the only possible source of the NANOGrav signal, however, and you can bet your bottom dollar that there will now be an avalanche of theory papers on the arXiv purporting to explain the results in terms of more exotic models.

Incidentally, for a nice explanation of the Hellings–Downs correlation, see here; the figure in the paper shows the measured inter-pulsar correlations tracing out this curve.
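
The curve itself is easy to evaluate. Here is a short Python sketch (normalisation conventions vary; I use the one in which the correlation tends to 1/2 at zero separation):

```python
import numpy as np

def hellings_downs(zeta_deg):
    """Hellings-Downs correlation for pulsar pairs separated by an
    angle zeta on the sky, normalised so Gamma -> 1/2 as zeta -> 0."""
    x = (1.0 - np.cos(np.radians(zeta_deg))) / 2.0
    x = np.clip(x, 1e-12, None)     # avoid log(0) at zero separation
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

for zeta in [0, 49, 90, 180]:
    print(f"zeta = {zeta:3d} deg -> Gamma = {hellings_downs(zeta):+.3f}")
# crosses zero near 49 degrees and recovers to +0.25 for antipodal
# pairs -- the distinctive quadrupolar signature of a GW background
```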

I haven’t had time to go through the papers in detail so won’t comment on the results, at least partly because I find the presentation of the statistical results in the abstract a very confusing jumble of Bayesian and frequentist language which I find hard to penetrate. Hopefully it will make more sense when I have time to read the papers and/or when I watch the announcement later.

Can Black Holes really create Dark Energy?

Posted in Astrohype, The Universe and Stuff with tags , , , , on February 25, 2023 by telescoper
Gratuitous Black Hole Graphic

A couple of papers were published recently that attracted quite a lot of media interest so I thought I’d mention the work here.

The researchers detail the theory in two papers, published in The Astrophysical Journal and The Astrophysical Journal Letters, with both laying out different aspects of the cosmological connection and providing the first “astrophysical explanation of dark energy”. The lead author of both papers is Duncan Farrah of the University of Hawaii. Both are available on the arXiv, where all papers worth reading in astrophysics can be found.

The first paper, available on the arXiv here, is entitled Preferential Growth Channel for Supermassive Black Holes in Elliptical Galaxies at z<2, and makes the argument that observations imply that supermassive black holes grow preferentially in elliptical galaxies:

The assembly of stellar and supermassive black hole (SMBH) mass in elliptical galaxies since z∼1 can help to diagnose the origins of locally-observed correlations between SMBH mass and stellar mass. We therefore construct three samples of elliptical galaxies, one at z∼0 and two at 0.7≲z≲2.5, and quantify their relative positions in the M_BH–M∗ plane. Using a Bayesian analysis framework, we find evidence for translational offsets in both stellar mass and SMBH mass between the local sample and both higher redshift samples. The offsets in stellar mass are small, and consistent with measurement bias, but the offsets in SMBH mass are much larger, reaching a factor of seven between z∼1 and z∼0. The magnitude of the SMBH offset may also depend on redshift, reaching a factor of ∼20 at z∼2. The result is robust against variation in the high and low redshift samples and changes in the analysis approach. The magnitude and redshift evolution of the offset are challenging to explain in terms of selection and measurement biases. We conclude that either there is a physical mechanism that preferentially grows SMBHs in elliptical galaxies at z≲2, or that selection and measurement biases are both underestimated, and depend on redshift.

arXiv: 2212.06854

Note the important caveats at the end. I gather from people who work on this topic that it’s a rather controversial claim.

The second paper, entitled Observational evidence for cosmological coupling of black holes and its implications for an astrophysical source of dark energy and available on the arXiv here, discusses a mechanism by which it is claimed that the formation of black holes actually creates dark energy:

Observations have found black holes spanning ten orders of magnitude in mass across most of cosmic history. The Kerr black hole solution is however provisional as its behavior at infinity is incompatible with an expanding universe. Black hole models with realistic behavior at infinity predict that the gravitating mass of a black hole can increase with the expansion of the universe independently of accretion or mergers, in a manner that depends on the black hole’s interior solution. We test this prediction by considering the growth of supermassive black holes in elliptical galaxies over 0<z≲2.5. We find evidence for cosmologically coupled mass growth among these black holes, with zero cosmological coupling excluded at 99.98% confidence. The redshift dependence of the mass growth implies that, at z≲7, black holes contribute an effectively constant cosmological energy density to Friedmann’s equations. The continuity equation then requires that black holes contribute cosmologically as vacuum energy. We further show that black hole production from the cosmic star formation history gives the value of ΩΛ measured by Planck while being consistent with constraints from massive compact halo objects. We thus propose that stellar remnant black holes are the astrophysical origin of dark energy, explaining the onset of accelerating expansion at z∼0.7.

arXiv:2302.07878
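
To unpack the claimed mechanism in back-of-envelope form (my paraphrase, not the authors’ own presentation): if individual black hole masses grow with the expansion as M ∝ aᵏ while their comoving number density dilutes as n ∝ a⁻³, then their mean energy density scales as ρ_BH ∝ nM ∝ a^(k−3). For k = 3, the value the paper claims is favoured by the elliptical-galaxy data, ρ_BH is constant as the Universe expands, which is exactly the behaviour of vacuum energy with w = −1.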


The first I saw of these papers was in a shockingly poor write-up in the Guardian, which was so garbled that I dismissed the story out of hand. I recently saw it taken up in Physics World, though, so maybe there is something in it. Having scanned the paper quickly, it doesn’t look as trivially wrong as I had feared it would be.

I haven’t had much time to read papers over the last few weeks but I’ve decided to present the second paper – the more theoretical one – next time I do our cosmology journal club at Maynooth, which means I’ll have to read it! I’ll add my summary after I’ve done the Journal club on Monday afternoon.

In the meantime I was wondering what the general reaction in the cosmological community is to these papers, especially the second one. If anyone has strong views please feel free to put them in the comments box!

UPDATE: There is a counter-argument on the arXiv today.

That Wormhole Garbage

Posted in Astrohype, The Universe and Stuff with tags , , on December 2, 2022 by telescoper

I’m glad I was too busy today to respond earlier to a junk science story that has been doing the rounds, in the Guardian, in Quanta and even in Physics World to name but a few. Had I had time to write something as soon as I’d seen these pieces of tripe I would probably have responded with more expletives than would be seemly even for this blog. This sort of crap makes me rather angry, you see.

Meaningless Illustration

The story is basically that a group of scientists have created a “wormhole in space-time” that enables quantum teleportation.

Of course they have done no such thing. The paper, like so many stories hyped beyond the bounds of reason, is published in Nature. There are some interesting things in this publication, but nothing to justify the absurd claims that have propagated into the media. The authors must take some of the blame for allowing such tosh to be spread about in their names. I don’t think it will do them any good in the long run.

At least I hope it doesn’t.

You can read it for yourself and make up your own mind, but my take is the following:

  • Did the authors create a wormhole (even a baby one) in a laboratory? Definitely not.
  • Did they discover anything whatsoever to do with quantum gravity? No way.
  • Did they even simulate a wormhole in a lab? Not even close.
  • Did they even make progress towards simulating a wormhole in a lab? Still no.

Apart from all that it’s fine.

The author of the Quanta article, Natalie Wolchover, writes:

Researchers were able to send a signal through the open wormhole, though it’s not clear in what sense the wormhole can be said to exist.

Au contraire, it’s absolutely clear that no wormhole can be said to exist in any sense whatsoever.

I hope this clarifies the situation.

UPDATE: I see that Peter Woit has gone to town on this on his blog here.

Cosmological Dipole Controversy

Posted in Astrohype, Bad Statistics, The Universe and Stuff with tags , , on October 11, 2022 by telescoper

I’ve just finished reading an interesting paper by Secrest et al. which has attracted some attention recently. It’s published in the Astrophysical Journal Letters but is also available on the arXiv here. I blogged about earlier work by some of these authors here.

The abstract of the current paper is:

We present the first joint analysis of catalogs of radio galaxies and quasars to determine if their sky distribution is consistent with the standard ΛCDM model of cosmology. This model is based on the cosmological principle, which asserts that the universe is statistically isotropic and homogeneous on large scales, so the observed dipole anisotropy in the cosmic microwave background (CMB) must be attributed to our local peculiar motion. We test the null hypothesis that there is a dipole anisotropy in the sky distribution of radio galaxies and quasars consistent with the motion inferred from the CMB, as is expected for cosmologically distant sources. Our two samples, constructed respectively from the NRAO VLA Sky Survey and the Wide-field Infrared Survey Explorer, are systematically independent and have no shared objects. Using a completely general statistic that accounts for correlation between the found dipole amplitude and its directional offset from the CMB dipole, the null hypothesis is independently rejected by the radio galaxy and quasar samples with p-values of 8.9×10⁻³ and 1.2×10⁻⁵, respectively, corresponding to 2.6σ and 4.4σ significance. The joint significance, using sample size-weighted Z-scores, is 5.1σ. We show that the radio galaxy and quasar dipoles are consistent with each other and find no evidence for any frequency dependence of the amplitude. The consistency of the two dipoles improves if we boost to the CMB frame assuming its dipole to be fully kinematic, suggesting that cosmologically distant radio galaxies and quasars may have an intrinsic anisotropy in this frame.

I can summarize the paper in the form of this well-worn meme:

My main reaction to the paper – apart from finding it interesting – is that if I were doing this I wouldn’t take the frequentist approach used by the authors as this doesn’t address the real question of whether the data prefer some alternative model over the standard cosmological model.

As was the case with a Nature piece I blogged about some time ago, this article focuses on the p-value, a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under a particular null hypothesis. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05. This is usually called a ‘2σ’ result because for Gaussian statistics a variable has a probability of 95% of lying within 2σ of the mean value.
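
To make the correlation example concrete, here is a toy demonstration (mine, not from the paper under discussion):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = rng.normal(size=100)    # independent of x: the null is true here

r, p = pearsonr(x, y)
print(f"r = {r:+.3f}, p-value = {p:.3f}")
# Under the null the p-value is uniform on [0, 1], so 5% of such
# experiments yield p < 0.05 even though no correlation exists at all.
```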

Anyway, whatever the null hypothesis happens to be, you can see that the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that large under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.

But the p-value merely specifies the probability that you would reject the null-hypothesis if it were correct. This is what you would call making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data. To make that sort of statement you would need to specify an alternative distribution, calculate the distribution based on it, and hence determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when it is incorrect. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.

If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. In fact I feel so strongly about this that if I had my way I’d ban p-values altogether…

This is not an objection to the particular threshold chosen, whether that is 0.005 rather than 0.05 or, say, a 5σ standard (which translates to about 0.000001!). While it is true that a stricter threshold would throw out a lot of flaky ‘two-sigma’ results, it doesn’t alter the basic problem, which is that the frequentist approach to hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach. In particular, most of the time the p-value is an answer to a question which is quite different from that which a scientist would actually want to ask, namely what the data have to say about the probability of a specific hypothesis being true, or sometimes whether the data imply one hypothesis more strongly than another. I’ve banged on about Bayesian methods quite enough on this blog so I won’t repeat the arguments here, except to say that such approaches focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis.

Not that it’s always easy to implement the (better) Bayesian approach. It’s especially difficult when the data are affected by complicated noise statistics and selection effects, and/or when it is difficult to formulate a hypothesis test rigorously because one does not have a clear alternative hypothesis in mind. That’s probably why many scientists prefer to accept the limitations of the frequentist approach than tackle the admittedly very challenging problems of going Bayesian.

But having indulged in that methodological rant, I certainly have an open mind about departures from isotropy on large scales. The correct scientific approach is now to reanalyze the data used in this paper to see if the result presented stands up, which it very well might.

Recalibration of Ultra-High-Redshift Galaxies

Posted in Astrohype, The Universe and Stuff with tags , , , , on August 10, 2022 by telescoper

Remember all the recent excitement about the extremely high redshift galaxies (such as this and this; the two examples shown above) “identified” in early-release JWST observations? Well, a new paper on the arXiv by Adams et al. using post-launch calibration of the JWST photometry suggests that we should be cautious about the interpretation of these objects. The key message of this study is that the preliminary calibration that has been in widespread use for these studies is wrong by up to 30%, and that can have a huge impact on inferred redshifts.
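
To put that 30% figure in more familiar units (a rough translation of my own, not a calculation from the paper):

```python
import math

# A 30% error in flux calibration corresponds to a magnitude offset of
# about 0.28 mag, easily enough to change which photometric bands a
# candidate "drops out" of, and hence its photometric redshift.
delta_m = 2.5 * math.log10(1.30)
print(f"{delta_m:.2f} mag")      # 0.28
```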

The new study does indeed identify some good candidates for ultra-high-redshift galaxies, but it also casts doubt on many of the previous claims. Here is a table of some previous estimates alongside those using the newly recalibrated data:

You will see that in most – but not all – cases the recalibration results in a substantial lowering of the estimated redshift; one example decreases from z>20 to 0.7! The two candidates mentioned at the start of this post are not included in this table but one should probably reserve judgement on them too.

The conclusive measurements for these objects will, however, involve spectroscopy and the identification of spectral lines, rather than photometry and model fits to the spectral energy distribution. Only with such data will we really know how many of these sources are actually at very high redshift. As the philosopher Hegel famously remarked

The Owl of Minerva only spreads its wings with the coming of spectroscopy.