I bookmarked this paper on arXiv a week or so ago with the intention of sharing it here, but evidently forgot about it. Anyway, as its title suggests, it’s a review by Brent Tully of measurements of the Hubble Constant from a historical perspective. I’m not sure whether it is intended for publication as a book chapter – it opens with the heading “Chapter 1” – but it’s well worth reading whatever its purpose. Here is the abstract:
For 100 years since galaxies were found to be flying apart from each other, astronomers have been trying to determine how fast. The expansion, characterized by the Hubble constant, H0, is confused locally by peculiar velocities caused by gravitational interactions, so observers must obtain accurate distances at significant redshifts. Very nearby in our Galaxy, accurate distances can be determined through stellar parallaxes. There is no good method for obtaining galaxy distances that is applicable from the near domain of stellar parallaxes to the far domain free from velocity anomalies. The recourse is the distance ladder involving multiple methods with overlapping domains. Good progress is being made on this project, with satisfactory procedures and linkages identified and tested across the necessary distance range. Best values of H0 from the distance ladder lie in the range 73 – 75 km/s/Mpc. On the other hand, from detailed information available from the power spectrum of fluctuations in the cosmic microwave background, coupled with constraints favoring the existence of dark energy from distant supernova measurements, there is the precise prediction that H0 = 67.4 to 1%. If it is conclusively determined that the Hubble constant is well above 70 km/s/Mpc as indicated by distance ladder results then the current preferred LambdaCDM cosmological model based on the Standard Model of particle physics may be incomplete. There is reason for optimism that the value of the Hubble constant from distance ladder observations will be rigorously defined with 1% accuracy in the near future.
Brent Tully, arXiv:2305.11950
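The abstract’s point about peculiar velocities is easy to quantify with a back-of-the-envelope estimate. Here is a little sketch of my own (the fiducial value of H0 and the peculiar velocity are purely illustrative assumptions, not numbers from the paper) showing why the expansion rate is “confused locally” but clean further out:

```python
# Toy estimate (not from Tully's paper): how much a typical peculiar velocity
# contaminates a single-object estimate of H0 at various distances.
H0 = 70.0        # km/s/Mpc, fiducial value assumed for illustration
v_pec = 300.0    # km/s, a representative peculiar velocity (assumption)

for d in [10, 50, 100, 200]:          # distances in Mpc
    v_hubble = H0 * d                 # pure expansion (Hubble-flow) velocity
    frac_err = v_pec / v_hubble       # fractional error on H0 from one object
    print(f"d = {d:3d} Mpc: Hubble flow {v_hubble:7.0f} km/s, "
          f"peculiar-velocity error ~ {100 * frac_err:.1f}%")
```

At 10 Mpc a typical peculiar velocity corrupts the inferred expansion rate at the tens-of-percent level, whereas by a couple of hundred Mpc it is down to a couple of percent – hence the need for a ladder that reaches well into the Hubble flow.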
Here is the concluding paragraph:
As the 20th century came to an end, ladder measurements of the Hubble constant were at odds with the favored cosmological model of the time of cold dark matter with Λ =0. The new favorite became the ΛCDM model with dark energy giving rise to acceleration of space in a topologically flat universe. Yet ladder measurements, continuously improving, create doubts that this currently favorite model is complete. Yes, there is a Hubble tension.
The latest contribution to the ongoing debate about the Hubble constant is a new paper by Adam Riess and collaborators which you can find on the arXiv here. The abstract reads:
As you can see, this group is doubling down on a high value for the Hubble constant. This longstanding discrepancy gives me an excuse to post my long-running opinion polls on the topic.
First, would you go for a “high” (73-ish) or “low” (68-ish) value:
Second, do you think the discrepancy or tension is anything to get excited or even tense about?
One topic on this blog seems to be as perennial as the weeds in my garden: the so-called Hubble Tension. I just saw a review paper by Wendy Freedman, one of the acknowledged experts in this area, on arXiv here. I have abstracted the abstract here:
What’s particularly interesting about this discussion is that stellar distance indicators have typically produced higher values than the 69.8 ± 0.6 (stat) ± 1.6 (sys) km s⁻¹ Mpc⁻¹ quoted here, which is consistent with the lower value favoured by Planck. See the above graphic discussed here. So perhaps there’s no tension at all. Maybe.
Anyway, here’s that poll again! I wonder if this paper might change the voting.
I recently came across a comprehensive review article on the arXiv and thought some of my regular readers might find it interesting as a description of the current state of play in cosmology. The paper is called Challenges for ΛCDM: An update and is written by Leandros Perivolaropoulos and Foteini Skara.
Here is the abstract:
A number of challenges of the standard ΛCDM model has been emerging during the past few years as the accuracy of cosmological observations improves. In this review we discuss in a unified manner many existing signals in cosmological and astrophysical data that appear to be in some tension (2σ or larger) with the standard ΛCDM model as defined by the Planck18 parameter values. In addition to the major well studied 5σ challenge of ΛCDM (the Hubble H0 crisis) and other well known tensions (the growth tension and the lensing amplitude AL anomaly), we discuss a wide range of other less discussed less-standard signals which appear at a lower statistical significance level than the H0 tension (also known as ‘curiosities’ in the data) which may also constitute hints towards new physics. For example such signals include cosmic dipoles (the fine structure constant α, velocity and quasar dipoles), CMB asymmetries, BAO Lyα tension, age of the Universe issues, the Lithium problem, small scale curiosities like the core-cusp and missing satellite problems, quasars Hubble diagram, oscillating short range gravity signals etc. The goal of this pedagogical review is to collectively present the current status of these signals and their level of significance, with emphasis to the Hubble crisis and refer to recent resources where more details can be found for each signal. We also briefly discuss possible theoretical approaches that can potentially explain the non-standard nature of some of these signals.
Among the useful things in it you will find this summary of the current ‘tension’ over the Hubble constant that I’ve posted about numerous times (e.g. here):
I was idly wondering earlier this week when the annual list of new Fellows elected to the Royal Society would be published, as it is normally around this time of year. Today it finally emerged and can be found here.
I am particularly delighted to see that my erstwhile Cardiff colleague Bernard Schutz (with whom I worked in the Data Innovation Research Institute and the School of Physics & Astronomy) is now an FRS! In fact I have known Bernard for quite a long time – he chaired the Panel that awarded me an SERC Advanced Fellowship in the days before STFC, and even before PPARC, way back in 1993. It just goes to show that even the most eminent scientists do occasionally make mistakes…
Anyway, hearty congratulations to Bernard, whose elevation to the Royal Society follows the award, a couple of years ago, of the Eddington Medal of the Royal Astronomical Society about which I blogged here. The announcement from the Royal Society is rather brief:
Bernard Schutz is honoured for his work driving the field of gravitational wave searches, leading to their direct detection in 2015.
I report here how gravitational wave observations can be used to determine the Hubble constant, H0. The nearly monochromatic gravitational waves emitted by the decaying orbit of an ultra-compact, two-neutron-star binary system just before the stars coalesce are very likely to be detected by the kilometre-sized interferometric gravitational wave antennas now being designed [1–4]. The signal is easily identified and contains enough information to determine the absolute distance to the binary, independently of any assumptions about the masses of the stars. Ten events out to 100 Mpc may suffice to measure the Hubble constant to 3% accuracy.
In this paper, Bernard points out that a binary coalescence – such as the merger of two neutron stars – is a self-calibrating ‘standard candle’, which means that it is possible to infer the distance directly, without using the cosmic distance ladder. The key insight is that the rate at which the binary’s frequency changes determines the intrinsic amplitude of the gravitational waves it produces, i.e. how ‘loud’ the source really is. Just as the observed brightness of a star depends on both its intrinsic luminosity and how far away it is, the strength of the gravitational waves received at LIGO depends on both the intrinsic loudness of the source and how far away it is. By observing the waves with detectors like LIGO and Virgo, we can determine both the intrinsic loudness of the gravitational waves and their loudness at the Earth, which allows us to determine the distance to the source directly.
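To make that a little more concrete, here is a very rough sketch of my own of how the inference runs; the numbers are purely illustrative (loosely inspired by a GW170817-like event) and the formulae are only the leading-order quadrupole expressions, not the full waveform models used in the real analyses:

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
Mpc = 3.086e22       # m
Msun = 1.989e30      # kg

# "Observed" quantities - all assumed, for illustration only:
f = 100.0            # gravitational-wave frequency in Hz
fdot = 16.0          # chirp rate df/dt in Hz/s
h = 8.6e-23          # strain amplitude (order of magnitude only)
z = 0.01             # redshift of the host galaxy

# Step 1: the chirp rate fixes the chirp mass Mc via
#   df/dt = (96/5) pi^(8/3) (G Mc / c^3)^(5/3) f^(11/3)
Mc = (c**3 / G) * ((5.0 / 96.0) * np.pi**(-8.0 / 3.0) * fdot * f**(-11.0 / 3.0))**(3.0 / 5.0)

# Step 2: the chirp mass plus the observed strain amplitude fixes the distance
# (optimal orientation assumed):  h ~ (4/D) (G Mc / c^2)^(5/3) (pi f / c)^(2/3)
D = (4.0 / h) * (G * Mc / c**2)**(5.0 / 3.0) * (np.pi * f / c)**(2.0 / 3.0)

# Step 3: the low-redshift Hubble law v = c z = H0 D then gives H0
# (ignoring peculiar velocities, which dominate the error for a single event)
H0 = c * z / D * Mpc / 1e3   # convert from s^-1 to km/s/Mpc

print(f"Chirp mass ~ {Mc / Msun:.2f} Msun, distance ~ {D / Mpc:.0f} Mpc, H0 ~ {H0:.0f} km/s/Mpc")
```

The point is that nothing in this chain of reasoning involves a Cepheid, a supernova or any other rung of the distance ladder: the waveform calibrates itself.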
It may have taken 31 years to get a measurement, but hopefully it won’t be long before there are enough detections to provide greater precision – and hopefully accuracy! – than the current methods can manage!
Here is a short video of Bernard himself talking about his work:
Once again, congratulations to Bernard on a very well deserved election to a Fellowship of the Royal Society.
The point is that if the Universe is described by a space-time with the Robertson-Walker Metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances can differ only by a factor of (1+z)², where z is the redshift: D_L = D_A(1+z)².
I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:
The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem and it does not depend on a particular energy-momentum tensor; the redshift of a source just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between.
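If you want to see the theorem ‘in action’, here is a quick numerical check of my own (the values of H0, Ωm and ΩΛ are assumptions chosen purely for illustration): however you set the expansion history, the ratio D_L/[D_A(1+z)²] comes out as exactly one in a flat Robertson-Walker model:

```python
import numpy as np
from scipy.integrate import quad

c = 2.998e5           # speed of light in km/s
H0 = 70.0             # km/s/Mpc, assumed fiducial value
Om, Ol = 0.3, 0.7     # assumed matter and Lambda densities (flat model)

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat LambdaCDM."""
    return np.sqrt(Om * (1.0 + z)**3 + Ol)

def comoving_distance(z):
    """Line-of-sight comoving distance in Mpc."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (c / H0) * integral

for z in [0.1, 0.5, 1.0, 2.0]:
    DC = comoving_distance(z)
    DA = DC / (1.0 + z)       # angular-diameter distance (flat universe)
    DL = DC * (1.0 + z)       # luminosity distance
    print(f"z = {z:3.1f}: D_A = {DA:7.1f} Mpc, D_L = {DL:8.1f} Mpc, "
          f"D_L / [D_A (1+z)^2] = {DL / (DA * (1.0 + z)**2):.6f}")
```

The ratio is unity by construction, whatever values of Ωm and ΩΛ you plug in, which is precisely the point: the relation is a property of the metric, not of the energy content.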
Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons would violate the theorem. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick: adding an exotic field that interacts with light in a peculiar way is one possibility, as is a space-time with torsion, i.e. a non-Riemannian space-time.
Another possibility you might think of is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness to provide a departure from the simple relationship given above. In fact an inhomogeneous cosmological model based on GR does not in itself violate Etherington’s theorem, but it does mean that the relation D_L = D_A(1+z)² is no longer global. In such models there is no way of defining a global scale factor a(t), so the reciprocity relation applies only locally, in a different form for each source and observer. In order to test this idea one would have to have luminosity distances and angular-diameter distances for each source. The most distant objects for which we have luminosity distance measures are supernovae, and we don’t usually have angular-diameter distances for them.
Anyway, these thoughts popped back into my head when I saw a new paper on the arXiv by Holanda et al, the abstract of which is here:
Here we have an example of a set of sources (galaxy clusters) for which we can estimate both luminosity and angular-diameter distances (the latter using gravitational lensing) and thus test the reciprocity relation (called the cosmic distance duality relation in the paper). The statistics aren’t great but the result is consistent with the standard theory, as are previous studies mentioned in the paper. So there’s no need yet to turn the Hubble tension into torsion!
The full paper (i.e. author list plus a small amount of text) can be found here. Here are two plots from that work.
The first shows the constraints from the six loudest gravitational wave events selected for the latest work, together with the two competing measurements from Planck and SH0ES:
As you can see, the individual measurements do not constrain very much. The second plot shows the effect of combining all relevant data, including a binary neutron star merger with an electromagnetic counterpart. The results are much stronger when the latter is included.
Obviously this measurement isn’t yet able to resolve the alleged tension between “high” and “low” values described on this blog passim, but it’s early days. If LIGO reaches its planned sensitivity the next observing run should provide many more events. A few hundred should get the width of the posterior distribution shown in the second figure down to a few percent, which would be very interesting indeed!
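To illustrate the expected scaling, here is a toy calculation of my own – emphatically not the collaboration’s forecast – in which each event is assumed to deliver an independent Gaussian constraint of roughly 30% width (a crude placeholder for a typical ‘dark siren’ posterior); combining N of them narrows the result roughly as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(42)
true_H0 = 70.0                    # km/s/Mpc, assumed for the toy model
sigma_single = 0.30 * true_H0     # ~30% width per event (crude assumption)
H0_grid = np.linspace(40.0, 100.0, 2001)
dH = H0_grid[1] - H0_grid[0]

for N in [6, 50, 200, 500]:
    # Each event yields a Gaussian likelihood centred on a noisy H0 estimate;
    # the combined (log-)posterior is just the sum of the individual ones.
    estimates = rng.normal(true_H0, sigma_single, size=N)
    log_post = np.zeros_like(H0_grid)
    for est in estimates:
        log_post += -0.5 * ((H0_grid - est) / sigma_single)**2
    post = np.exp(log_post - log_post.max())
    post /= post.sum() * dH
    mean = np.sum(H0_grid * post) * dH
    std = np.sqrt(np.sum((H0_grid - mean)**2 * post) * dH)
    print(f"N = {N:3d} events: combined posterior width ~ {100 * std / mean:.1f}%")
```

On that crude reckoning a few hundred events does indeed bring the combined width down to the couple-of-percent level, though in reality the per-event widths, selection effects and peculiar-velocity corrections all complicate matters.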
A rather pugnacious paper by George Efstathiou appeared on the arXiv earlier this week. Here is the abstract:
This paper investigates whether changes to late time physics can resolve the `Hubble tension’. It is argued that many of the claims in the literature favouring such solutions are caused by a misunderstanding of how distance ladder measurements actually work and, in particular, by the inappropriate use of distance ladder H0 priors. A dynamics-free inverse distance ladder shows that changes to late time physics are strongly constrained observationally and cannot resolve the discrepancy between the SH0ES data and the base LCDM cosmology inferred from Planck.
For a more detailed discussion of this paper, see Sunny Vagnozzi’s blog post. I’ll just make some general comments on the context.
One of the reactions to the alleged “tension” between the two measurements of H0 is to alter the standard model in such a way that the equation of state changes significantly at late cosmological times. This is because the two allegedly discrepant sets of measures of the cosmological distance scale (seen, for example, in the diagram below taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types).
That is basically true. There is, however, another difference in the two types of distance determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are based on luminosity distances) while the lower values are generally based on trigonometry (specifically they are angular diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.
Before going on, let me point out that the global (cosmological) determinations of the Hubble constant are indirect, in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. The global estimates are therefore not derived in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.
With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?
Well, if the Universe is described by a space-time with the Robertson-Walker Metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)², where z is the redshift: D_L = D_A(1+z)².
I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:
The result D_L = D_A(1+z)² is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?
Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedmann model. The redshift just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between. And because the evolution of the scale factor is determined by the Friedmann equation that relates it to the energy contents of the Universe, changing the latter won’t help either, no matter how exotic the stuff you introduce (as long as it only interacts with light rays via gravity). In the light of this, the fact that there are significant numbers of theorists pushing for such things as interacting dark-energy models to engineer late-time changes in the expansion history is indeed a bit perplexing.
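For completeness, the standard textbook argument behind that statement takes only a line. Successive wave crests emitted at times t_e and t_e + δt_e travel the same comoving distance to the observer, who receives them at t_0 and t_0 + δt_0:

```latex
\int_{t_e}^{t_0} \frac{c\,\mathrm{d}t}{a(t)}
  = \int_{t_e+\delta t_e}^{t_0+\delta t_0} \frac{c\,\mathrm{d}t}{a(t)}
\quad\Longrightarrow\quad
\frac{\delta t_e}{a(t_e)} = \frac{\delta t_0}{a(t_0)}
\quad\Longrightarrow\quad
1 + z \equiv \frac{\lambda_0}{\lambda_e} = \frac{a(t_0)}{a(t_e)} .
```

Only the scale factors at the two ends appear; the entire history of a(t) in between cancels out, which is why tinkering with the expansion history alone cannot break the link between redshift and scale factor.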
In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed region of parameter space in a way that reconciles the cosmological determination of the Hubble constant with the local values. I am just talking about a hypothetical, simpler case.
In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because our understanding of globally inhomogeneous cosmological models is far from perfect.
Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is one way out. It also requires photon numbers to be conserved, so some mysterious way of making photons disappear might do the trick: adding an exotic field that interacts with light in a peculiar way is another possibility.
Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity-based distances, then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form. The reciprocity theorem applies to any GR-based metric theory – just about anything without torsion – so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t), so the reciprocity relation applies only locally, in a different form for each source and observer.
All of this raises the question of whether or not there is real tension in the H0 measures. I certainly have better things to get tense about. That gives me an excuse to include my long-running poll on the issue:
It is of course interesting in itself to see the cut and thrust of scientific debate on a live topic such as this, but to my mind at least it raises interesting questions about the nature of scientific publication. To repeat something I wrote a while ago, it seems to me that the scientific paper published in an academic journal is an anachronism. Digital technology enables us to communicate ideas far more rapidly than in the past and allows much greater levels of interaction between researchers. I agree with Daniel Shanahan that the future for many fields will be defined not in terms of “papers” which purport to represent “final” research outcomes, but by living documents continuously updated in response to open scrutiny by the research community.
The Open Journal of Astrophysics is innovative in some ways but remains wedded to the paper as its fundamental object, and the platform is not able to facilitate interaction with readers. Of course one of the worries is that the comment facilities on many websites tend to get clogged up with mindless abuse, but I think that is manageable. I have some ideas on this, but for the time being I’m afraid all my energies are taken up with other things so this is for the future.
I’ve long argued that the modern academic publishing industry is not facilitating but hindering the communication of research. The arXiv has already made academic journals virtually redundant in many branches of physics and astronomy; other disciplines will inevitably follow. The age of the academic journal is drawing to a close, and it is consequently time to rethink the concept of a paper.