To much media interest, the Dark Energy Survey team yesterday released 11 new papers based on the analysis of their 3-year data. You can find the papers, together with short descriptions, here. There’s even a little video about the Dark Energy Survey here:
Scientists measured that the way matter is distributed throughout the universe is consistent with predictions in the standard cosmological model, the best current model of the universe.
The results are a surprise because they show that it is slightly smoother and more spread out than the current best theories predict.
The observation appears to stray from Einstein’s theory of general relativity – posing a conundrum for researchers.
The reason for this framing appears to be that the BBC story focusses on the weak lensing paper (found here; I’ll add a link to the arXiv version if and when it appears there). The abstract is here:
The parameter S8 is a (slightly) rescaled version of the more familiar parameter σ8 – which quantifies the matter-density fluctuations on a scale of 8 h⁻¹ Mpc – as defined in the abstract; cosmic shear is particularly sensitive to this parameter.
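For what it’s worth, the convention usually adopted (and which I assume is the one spelled out in the abstract) is

$$ S_8 \equiv \sigma_8 \left(\frac{\Omega_{\rm m}}{0.3}\right)^{1/2}, $$

chosen because cosmic shear constrains essentially this combination of the fluctuation amplitude σ8 and the matter density Ωm.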
The key figure showing the alleged “tension” with Planck is here:
The companion paper referred to in the above abstract (found here) has an abstract that concludes with the words (my emphasis):
We find a 2.3σ difference between our S8 result and that of Planck (2018), indicating no statistically significant tension, and additionally find our results to be in qualitative agreement with current weak lensing surveys (KiDS-1000 and HSC).
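For what it’s worth, a “2.3σ difference” of this sort is usually arrived at by treating the two constraints as independent Gaussians, i.e. something like

$$ N_\sigma = \frac{\left|S_8^{\rm DES} - S_8^{\rm Planck}\right|}{\sqrt{\sigma_{\rm DES}^2 + \sigma_{\rm Planck}^2}}, $$

though I’m describing the generic arithmetic here rather than the exact procedure in the paper. A difference of 2.3 combined standard deviations is hardly compelling evidence of a discrepancy, which is presumably why the authors describe it the way they do.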
So, although certain people have decided to hype up a statistically insignificant discrepancy, everything basically fits the standard model…
It’s time I shared another one of those interesting cosmology talks on the Youtube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions so it won’t be everyone’s cup of tea but for those seriously interested in cosmology at a research level they should prove interesting.
In this talk from a couple of months ago Volker Springel discusses GADGET-4, which is a parallel computational code combining cosmological N-body and SPH methods and is intended for simulations of cosmic structure formation and calculations relevant for galaxy evolution and galactic dynamics.
Its predecessor, GADGET-2, is probably the most used computational code in cosmology; this talk discusses what new ideas are implemented in GADGET-4 to improve on the earlier version and what new features it has. Volker also explains what happened to GADGET-3!
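For anyone who hasn’t looked inside such a code, here is a minimal (and purely illustrative) Python sketch of the kick-drift-kick leapfrog integration, with softened direct-summation gravity, that lies at the heart of N-body methods. I should stress that this bears no relation to GADGET’s actual TreePM force solver or its SPH machinery, and all the parameter values are made up.

```python
import numpy as np

# Toy direct-summation N-body sketch with kick-drift-kick (leapfrog) integration.
# Purely illustrative: real codes like GADGET use tree and particle-mesh forces,
# comoving coordinates, periodic boundaries and massive parallelism.
G = 1.0       # gravitational constant in code units
eps = 0.05    # softening length, to avoid singular forces in close encounters
dt = 0.01     # fixed timestep (real codes use adaptive, individual timesteps)

rng = np.random.default_rng(1)
n = 64
pos = rng.uniform(-1.0, 1.0, size=(n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

def acceleration(pos):
    """Softened pairwise gravitational acceleration via an O(N^2) direct sum."""
    dx = pos[None, :, :] - pos[:, None, :]      # dx[i, j] = x_j - x_i
    r2 = np.sum(dx**2, axis=-1) + eps**2
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)               # no self-force
    return G * np.sum(mass[None, :, None] * dx * inv_r3[:, :, None], axis=1)

acc = acceleration(pos)
for step in range(100):
    vel += 0.5 * dt * acc    # kick (half step in velocity)
    pos += dt * vel          # drift (full step in position)
    acc = acceleration(pos)
    vel += 0.5 * dt * acc    # kick (second half step)
```

Everything that makes GADGET-4 interesting is in how it avoids the O(N²) cost and handles the gas dynamics, which is exactly what the talk covers.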
The other day, via Twitter, I came across an interesting blog post about the relatively recent resurgence of Bayesian reasoning in science. That piece had triggered a discussion about why cosmologists seem to be largely Bayesian in outlook, so I thought I’d share a few thoughts about that. You can find a lot of posts about various aspects of Bayesian reasoning on this blog, e.g. here.
When I was an undergraduate student I didn’t think very much about statistics at all, so when I started my DPhil studies I realized I had a great deal to learn. However, at least to start with, I mainly used frequentist methods. Looking back I think that’s probably because I was working on cosmic microwave background statistics and we didn’t really have any data back in 1985. Or actually we had data, but no firm detections. I was therefore taking models and calculating things in what I would call the forward direction, indicated by the up arrow. What I was trying to do was find statistical descriptors that looked likely to be able to discriminate between different models but I didn’t have the data.
Once measurements started to become available the inverse-reasoning part of the diagram indicated by the downward arrow came to the fore. It was only then that it started to become necessary to make firm statements about which models were favoured by the data and which weren’t. That is what Bayesian methods do best, especially when you have to combine different data sets.
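To make that “inverse” direction a little more concrete, here is a toy Python sketch (with entirely made-up numbers, nothing to do with the CMB) of the Bayesian way of asking which of two models is favoured by a data set, namely by comparing their evidences.

```python
import numpy as np

# Toy model comparison: is the mean of some Gaussian data zero (model A),
# or a free parameter mu with a broad prior (model B)?  All numbers invented.
rng = np.random.default_rng(42)
sigma = 1.0
data = rng.normal(loc=0.3, scale=sigma, size=20)   # synthetic measurements

def log_likelihood(mu):
    """Log-likelihood of the data for a given mean mu."""
    return (-0.5 * np.sum((data - mu)**2) / sigma**2
            - data.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

# Evidence for model A (no free parameters): just the likelihood at mu = 0.
log_ZA = log_likelihood(0.0)

# Evidence for model B: the likelihood marginalised over the prior on mu,
# here a Gaussian prior of width 2 centred on zero, done on a simple grid.
mu_grid = np.linspace(-5.0, 5.0, 2001)
prior = np.exp(-0.5 * (mu_grid / 2.0)**2) / (2.0 * np.sqrt(2.0 * np.pi))
like = np.exp([log_likelihood(m) for m in mu_grid])
ZB = np.sum(like * prior) * (mu_grid[1] - mu_grid[0])
log_ZB = np.log(ZB)

print("log Bayes factor (B relative to A):", log_ZB - log_ZA)
```

The point is that the question being answered is explicitly about the relative probability of the models given the data, which is what you actually want to know when combining data sets, rather than about hypothetical repetitions of the data given a model.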
By the early 1990s I was pretty much a confirmed Bayesian – as were quite a few fellow theorists – but I noticed that most observational cosmologists I knew were confirmed frequentists. I put that down to the fact that they preferred to think in “measurement space” rather than “theory space”, the latter requiring the inductive step furnished by Bayesian reasoning indicated by the downward arrow. As cosmology has evolved the separation between theorists and observers in some areas – especially CMB studies – has all but vanished and there’s huge activity at the interface between theory and measurement.
But my first exposure to Bayesian reasoning came long before that change. I wasn’t aware of its usefulness until 1987, when I returned to Cambridge for a conference called The Post-Recombination Universe organized by Nick Kaiser and Anthony Lasenby. There was an interesting discussion in one session about how to properly state the upper limit on CMB fluctuations arising from a particular experiment, which had been given incorrectly in a paper using a frequentist argument. During the discussion, Nick described Anthony as a “Born-again Bayesian”, a phrase that stuck in my memory though I’m still not sure whether or not it was meant as an insult.
It may be the case for many people that a relatively simple example convinces them of the superiority of a particular method or approach. I had previously found statistical methods – especially frequentist hypothesis-testing – muddled and confusing, but once I’d figured out what Bayesian reasoning was I found it logically compelling. It’s not always easy to do a Bayesian analysis for reasons discussed in the paper to which I linked above, but at least you have a clear idea in your mind what question it is that you are trying to answer!
Anyway, it was only later that I became aware that there were many researchers who had been at Cambridge while I was there as a student who knew all about Bayesian methods: people such as Steve Gull, John Skilling, Mike Hobson, Anthony Lasenby and, of course, one Anthony Garrett. It was only later in my career that I actually got to talk to any of them about any of it!
So I think the resurgence of Bayesian ideas in cosmology owes a very great deal to the Cambridge group, which had the expertise necessary to exploit the wave of high-quality data that started to come in during the 1990s, together with the availability of the computing resources needed to handle it.
But looking a bit further back I think there’s an important Cambridge (but not cosmological) figure who preceded them: Sir Harold Jeffreys, whose book Theory of Probability was first published in 1939. I think that book began to turn the tide, and it still makes for interesting reading.
P.S. I have to say I’ve come across more than one scientist who has argued that you can’t apply statistical reasoning in cosmology because there is only one Universe and you can’t use probability theory for unique events. That erroneous point of view has led to many otherwise sensible people embracing the idea of a multiverse, but that’s the subject for another rant.
The point is that if the Universe is described by a space-time with the Robertson-Walker metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances can differ only by a factor of (1+z)² where z is the redshift: D_L = D_A(1+z)².
I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:
The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem and it does not depend on a particular energy-momentum tensor; the redshift of a source just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between.
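To make the bookkeeping explicit, here is a minimal Python sketch computing the two distances in a flat ΛCDM model with illustrative parameter values. In such a model the reciprocity relation holds by construction, so the printed ratio is exactly one, but it shows where the two factors of (1+z) come from.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat LambdaCDM parameters (not a fit to any particular data set)
H0 = 70.0        # Hubble constant in km/s/Mpc
Om = 0.3         # matter density parameter
c = 299792.458   # speed of light in km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LambdaCDM."""
    return np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def comoving_distance(z):
    """Line-of-sight comoving distance in Mpc."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * integral

def angular_diameter_distance(z):
    return comoving_distance(z) / (1.0 + z)

def luminosity_distance(z):
    return (1.0 + z) * comoving_distance(z)

for z in (0.5, 1.0, 2.0):
    DA = angular_diameter_distance(z)
    DL = luminosity_distance(z)
    # Etherington: D_L = D_A (1+z)^2, so this ratio should be exactly 1
    print(f"z = {z}: D_L / [D_A (1+z)^2] = {DL / (DA * (1.0 + z)**2):.6f}")
```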
Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons would violate the theorem. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility, as is having a space-time with torsion, i.e. a non-Riemannian space-time.
Another possibility you might think of is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness to provide a departure from the simple relationship given above. In fact an inhomogeneous cosmological model based on GR does not in itself violate Etherington’s theorem, but it means that the relation D_L = D_A(1+z)² is no longer global. In such models there is no way of defining a global scale factor a(t) so the reciprocity relation applies only locally, in a different form for each source and observer. In order to test this idea one would have to have luminosity distances and angular diameter distances for each source. The most distant objects for which we have luminosity distance measures are supernovae, and we don’t usually have angular-diameter distances for them.
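Tests of this sort are usually phrased in terms of a duality parameter, something like

$$ \eta(z) \equiv \frac{D_L(z)}{(1+z)^2\, D_A(z)}, $$

which is unity whenever the reciprocity relation holds; departures are then parametrised in some simple way, e.g. η(z) = 1 + η₀z, and the data used to constrain η₀. I’m describing the generic approach here rather than the specific parametrisation used in any particular paper.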
Anyway, these thoughts popped back into my head when I saw a new paper on the arXiv by Holanda et al, the abstract of which is here:
Here we have an example of a set of sources (galaxy clusters) for which we can estimate both luminosity and angular-diameter distances (the latter using gravitational lensing) and thus test the reciprocity relation (called the cosmic distance duality relation in the paper). The statistics aren’t great but the result is consistent with the standard theory, as are previous studies mentioned in the paper. So there’s no need yet to turn the Hubble tension into torsion!
It’s time I shared another one of those interesting cosmology talks on the Youtube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions so it won’t be everyone’s cup of tea but for those seriously interested in cosmology at a research level they should prove interesting. This one was published just yesterday.
In the talk Dan Thomas discusses his recent work first creating a framework for describing modified gravity (i.e. extensions of general relativity) in a model-independent way on non-linear scales and then running N-body simulations in that framework. The framework involves finding a correspondence between large-scale linear theory, where everything is under control, and small-scale non-linear post-Newtonian dynamics. After a lot of care and rigour it boils down to a modified Poisson equation – on both large and small scales (in a particular gauge). The full generality of the modification to the Poisson equation allows, essentially, for a time and space dependent value for Newton’s constant. For most modified gravity models, the first level of deviation from general relativity can be parametrised in this way. This approach allows modified gravity to be constrained using observations without needing to run a new simulation for every step of a Monte Carlo parameter fit.
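Schematically (and this is my shorthand rather than the precise expression used in the talk or the papers below), the upshot is a Poisson equation of the form

$$ \nabla^2 \Phi = 4\pi\, G_{\rm eff}(\mathbf{x}, t)\, a^2\, \delta\rho, $$

with general relativity recovered when G_eff reduces to Newton’s constant; the time and space dependence of G_eff is what encodes the departure from GR.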
P. S. A couple of papers to go with this talk can be found here and here.
I’m a bit late getting round to writing something on the blog today because it has been yet another hectic day. Between my usual lecture this morning and a Computational Physics Laboratory session this afternoon, we also had our long-awaited Astrophysics & Cosmology Masterclass (held via Zoom).
This event had been delayed twice because of Covid-19 so we were glad that it went ahead today at last!
We were a little nervous about how well it would go but as it happened I think it was a success. We had approaching a hundred schools tuning in, from Wicklow to Tralee, Longford to Monaghan, Donegal to Cork and many places between. The level of engagement was excellent. We held a question-and-answer session but were a little nervous in advance about whether we would actually get any questions. As it turned out we got a lot of questions with some very good ones among them. Reaction from students and teachers was very good.
For those who couldn’t make it to this morning’s session we did record the presentations and I’ll make the video available via YouTube in due course.
Now, I’ve been Zooming and Teaming (with a bit of Panopto thrown in) all day so if you don’t mind I’ll now go and vegetate.
A rather pugnacious paper by George Efstathiou appeared on the arXiv earlier this week. Here is the abstract:
This paper investigates whether changes to late time physics can resolve the ‘Hubble tension’. It is argued that many of the claims in the literature favouring such solutions are caused by a misunderstanding of how distance ladder measurements actually work and, in particular, by the inappropriate use of distance ladder H0 priors. A dynamics-free inverse distance ladder shows that changes to late time physics are strongly constrained observationally and cannot resolve the discrepancy between the SH0ES data and the base LCDM cosmology inferred from Planck.
For a more detailed discussion of this paper, see Sunny Vagnozzi’s blog post. I’ll just make some general comments on the context.
One of the reactions to the alleged “tension” between the two measurements of H0 is to alter the standard model in such a way that the equation of state changes significantly at late cosmological times. This is because the two allegedly discrepant sets of measures of the cosmological distance scale (seen, for example, in the diagram below, taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types).
That is basically true. There is, however, another difference in the two types of distance determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are based on luminosity distances) while the lower values are generally based on trigonometry (specifically they are angular diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.
Before going on let me point out that the global (cosmological) determinations of the Hubble constant are indirect, in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. One does not therefore derive the global estimates in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.
With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?
Well, if the Universe is described by a space-time with the Robertson-Walker metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)² where z is the redshift: D_L = D_A(1+z)².
I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:
The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?
Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedmann model. The redshift just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between. And because the evolution of the scale factor is determined by the Friedmann equation that relates it to the energy contents of the Universe, changing the latter won’t help either, no matter how exotic the stuff you introduce (as long as it only interacts with light rays via gravity). In the light of this, the fact that there are significant numbers of theorists pushing for such things as interacting dark-energy models to engineer late-time changes in expansion history is indeed a bit perplexing.
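To spell that out: in a Friedmann model the observed redshift depends only on the ratio of the scale factor at reception to that at emission,

$$ 1 + z = \frac{a(t_{\rm obs})}{a(t_{\rm em})}, $$

so any expansion history passing through the same two values of a at emission and reception gives exactly the same redshift.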
In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region in a way that reconciles the cosmological determination of the Hubble constant with local values. I am just talking about a hypothetical simpler case.
In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have far from perfect understanding of globally inhomogeneous cosmological models.
Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is one way out. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick; adding some exotic field that interacts with light in a peculiar way is another possibility.
Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity based distances then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form. The reciprocity theorem applies to any GR-based metric theory, i.e. just about anything without torsion in the metric, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t) so the reciprocity relation applies only locally, in a different form for each source and observer.
All of this begs the question of whether or not there is real tension in the H0 measures. I certainly have better things to get tense about. That gives me an excuse to include my long-running poll on the issue:
Regular readers of the blog – both of them – may remember that we planned to present a Masterclass in Astrophysics & Cosmology on January 14th 2021 but this had to be postponed due to Covid-19 restrictions. After today’s announcements by the Government of a phased return to school starting on March 1st we have now decided to proceed with a new date of March 25th 2021.
This will be a half-day virtual event via Zoom. It’s meant for school students in their 5th or 6th year of the Irish system, who should be returning to classrooms on March 15th, but there might be a few of them or their teachers who see this blog so I thought I’d share the news here. You can find more information, including instructions on how to book a place, here.
Here is the updated official poster and the programme:
I’ll be talking about cosmology early on, while John Regan will talk about black holes. After the coffee break one of our PhD students will talk about why they wanted to study astrophysics. Then I’ll say something about our degree programmes for those students who might be interested in studying astrophysics and/or cosmology as part of a science course. We’ll finish with questions either about the science or the study!
It’s time I shared another one of those interesting cosmology talks on the Youtube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions so it won’t be everyone’s cup of tea but for those seriously interested in cosmology at a research level they should prove interesting. This is quite a recent one, from about a week ago.
In the talk, Alvaro Pozo tells us about a recent paper where he and collaborators detect the transition between a core (flat density profile) and halo (power-law density profile) in dwarf galaxies. The full core + halo profile matches very closely what is expected in simulations of wave dark matter (sometimes called “fuzzy” dark matter), by which is meant dark matter consisting of a particle so light that its de Broglie wavelength is long enough to be astrophysically relevant. That is, there is a very flat core, which then drops off suddenly and then flattens off to a decaying power-law profile. The core matches the soliton expected in wave dark matter and the halo matches an outer NFW profile expected outside the soliton. They also detect evidence for tidal stripping of the matter in the galaxies. The galaxies closer to the centre of the Milky Way have their transition point between core and halo happen at smaller densities (despite the core density itself not being systematically smaller). The transition also appears to happen closer to the centre of the galaxy, which matches simulations. Of course the core+halo pattern they have clearly observed might be due to something else, but the match between wave dark matter simulations and observations is impressive. An important caveat is that the mass for the dark matter that they use is very small and in significant tension with Lyman-alpha constraints for wave-like dark matter. This might indicate that the source of this universal core+halo pattern they’re observing comes from something else, or it might indicate that the wave dark matter is more complicated than represented in the simplest models.
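For reference, the fitting forms commonly used in the wave dark matter literature (and which I assume are at least close to what the paper adopts) are a solitonic core,

$$ \rho_{\rm sol}(r) \simeq \rho_0\left[1 + 0.091\,(r/r_c)^2\right]^{-8}, $$

matched onto an NFW profile at larger radii,

$$ \rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^2}, $$

with the core radius r_c set by the de Broglie wavelength of the particle, which is why a sufficiently small particle mass gives cores of astrophysically interesting (kpc) size.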
P. S. The papers that accompany this talk can be found here.
P.P.S. If you’re interested in wave dark matter there is a nice recent review article by Lam Hui here.
Time to announce another publication in the Open Journal of Astrophysics. This one was published yesterday, actually, but I didn’t get time to post about it until just now. It is the second paper in Volume 4 (2021).
The latest publication is entitled Characterizing the Sample Selection for Supernova Cosmology and is written by Alex G. Kim on behalf of the LSST Dark Energy Science Collaboration. It’s nice to be getting papers from large collaborations like this!
Here is a screen grab of the overlay which includes the abstract:
You can click on the image to make it larger should you wish to do so. You can find the arXiv version of the paper here. This is one for the Cosmology and Nongalactic Astrophysics folder.