Today’s announcement of a new measurement of the anomalous magnetic dipole moment – known to its friends as (g-2) of the muon – has been greeted with excitement by the scientific community, as it seems to provide evidence of a departure from the standard model of particle physics (by 4.2σ in frequentist parlance).
My own view is that the measurement of g-2, which seems to be a bit higher than theorists expected, can be straightforwardly reconciled with the predictions of the standard model of particle physics by simply adopting a slightly lower value for 2 in the theoretical calculations.
P.S. According to my own (unpublished) calculations, the value of g-2 ≈ 7.81 m s⁻².
It’s time I shared another one of those interesting cosmology talks from the YouTube channel curated by Shaun Hotchkiss. This channel features technical talks rather than popular expositions, so it won’t be everyone’s cup of tea, but those seriously interested in cosmology at a research level should find the talks worthwhile. This one was published just yesterday.
In the talk Dan Thomas discusses his recent work, first creating a framework for describing modified gravity (i.e. extensions of general relativity) in a model-independent way on non-linear scales, and then running N-body simulations in that framework. The framework involves finding a correspondence between large-scale linear theory, where everything is under control, and small-scale non-linear post-Newtonian dynamics. After a lot of care and rigour it boils down to a modified Poisson equation – on both large and small scales (in a particular gauge). The full generality of the modification to the Poisson equation allows, essentially, for a time- and space-dependent value for Newton’s constant. For most modified gravity models, the first level of deviation from general relativity can be parametrised in this way. This approach makes it possible to constrain modified gravity using observations without needing to run a new simulation for every step of a Monte Carlo parameter fit.
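Schematically – and I stress this is my own shorthand rather than the precise parametrisation used in the talk or the papers – the idea of promoting Newton’s constant to an effective time- and space-dependent coupling looks like this:

```latex
% Standard Poisson equation for the potential \Phi sourced by density
% perturbations \delta\rho in an expanding background with scale factor a:
\nabla^2 \Phi = 4\pi G\, a^2\, \delta\rho
% Model-independent modification: Newton's constant becomes an effective
% coupling that may depend on position and time (schematic notation):
\nabla^2 \Phi = 4\pi\, G_{\mathrm{eff}}(\mathbf{x}, t)\, a^2\, \delta\rho
```

The point is that a single function G_eff, constrained once by simulations, can then stand in for a whole family of modified-gravity models.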
P. S. A couple of papers to go with this talk can be found here and here.
Just a quick word to let you know that my obituary of John Barrow (partly based on my blog post here) has now been published in The Observatory Vol. 141 No. 1281 (2021 April) pp. 93-96. The Observatory Magazine isn’t available online so, with the permission of the Editors, I’ve included a link to a PDF of the published version here:
As a service to the public I thought I’d share one of my lectures. This is one I did yesterday, Lecture 16 in my module MP465 Advanced Electromagnetism:
I don’t know how I managed to pad this out to a whole hour.
As an Astronomist I am often asked “How do they calculate the date of Easter?”, to which my answer is usually “Look it up on Wikipedia!”.
The simple answer is that Easter Sunday is the first Sunday after the first full Moon on or after the vernal equinox. The vernal equinox took place this year on March 20th, and the more observant among you will have noticed that yesterday was (a) a Sunday and (b) a full Moon. Yesterday was not Easter Sunday, however, because the rule requires the first Sunday strictly after the full Moon: a full Moon that itself falls on a Sunday does not count. Accordingly, Easter 2021 is next Sunday, 4th April. If the full Moon had happened on Saturday, yesterday would have been Easter Sunday.
That is just as well really because next weekend is when the holidays and sporting events have been arranged.
I say “simple” answer above because it isn’t quite how the date of Easter is reckoned for purposes of the liturgical calendar.
For a start the ecclesiastical calculation of the date for Easter – the computus – assumes that the Vernal Equinox is always on March 21st, while in reality it can be a day or two either side of that. This year it was on March 20th.
On top of that there’s the issue of what reference time and date to use. The equinox is a precisely timed astronomical event but it occurs at different times and possibly on different days in different time zones. Likewise the full Moon. In the ecclesiastical calculation the “full moon” does not currently correspond directly to any astronomical event, but is instead the 14th day of a lunar month, as determined from tables (see below). It may differ from the date of the actual full moon by up to two days.
There have been years (1974, for example) where the official date of Easter does not coincide with the date determined by the simple rule given above. The actual rule is a complicated business involving Golden Numbers and Metonic cycles and whatnot.
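For the curious, the Gregorian computus can be written down as pure integer arithmetic. This is the standard “Anonymous Gregorian” (Meeus/Jones/Butcher) algorithm, which encodes the Golden Numbers, epacts and Metonic-cycle corrections mentioned above; it reproduces the official ecclesiastical date, not the simple astronomical rule:

```python
def easter_date(year):
    """Gregorian (Western) Easter via the Anonymous Gregorian algorithm.

    Returns (month, day), where month 3 = March and month 4 = April.
    """
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)            # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact: age of the ecclesiastical moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter_date(2021))  # (4, 4) -> Sunday 4th April, as above
```

Applied to 1974 it gives April 14th, the official (ecclesiastical) date, which is exactly the sort of year where the simple astronomical rule and the computus part company.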
I’m grateful to Graham Pointer on Twitter for sending this excerpt from the Book of Common Prayer that sheweth how to determine the date of Easter for any year up to 2199:
I don’t care what happens after that as I’ll be retired by then. If you apply this method to 2021 you will find it is an 8C. Next year will be a 9B. Further calculations are left as an exercise to the reader.
I’m a bit late getting round to writing something on the blog today because it has been yet another hectic day. Between my usual lecture this morning and Computational Physics Laboratory session this afternoon we also had our long-awaited Astrophysics & Cosmology Masterclass (held via Zoom).
This event had been delayed twice because of Covid-19 so we were glad that it went ahead today at last!
We were a little nervous about how well it would go but as it happened I think it was a success. We had approaching a hundred schools tuning in, from Wicklow to Tralee, Longford to Monaghan, Donegal to Cork and many places between. The level of engagement was excellent. We held a question-and-answer session but were a little nervous in advance about whether we would actually get any questions. As it turned out we got a lot of questions with some very good ones among them. Reaction from students and teachers was very good.
For those who couldn’t make it to this morning’s session we did record the presentations and I’ll make the video available via YouTube in due course.
Now, I’ve been Zooming and Teaming (with a bit of Panopto thrown in) all day so if you don’t mind I’ll now go and vegetate.
Time to announce another publication in the Open Journal of Astrophysics. This one was published yesterday, actually, but I didn’t get time to post about it until just now. It is the third paper in Volume 4 (2021) and the 34th paper in all.
The latest publication is entitled Dwarfs from the Dark (Energy Survey): a machine learning approach to classify dwarf galaxies from multi-band images and is written by Oliver Müller of the Observatoire Astronomique de Strasbourg (France) and Eva Schnider of the University of Basel (Switzerland).
Here is a screen grab of the overlay which includes the abstract:
You can click on the image to make it larger should you wish to do so. You can find the arXiv version of the paper here. This one is in the Instrumentation and Methods for Astrophysics folder, though it overlaps with Astrophysics of Galaxies too.
It seems the authors were very happy with the publication process!
I am very happy with the experience of publishing with the open-access journal @OJ_Astro. Everything went smoothly, the editor (@telescoper) was very quick to respond to questions, no article fees, and a useful referee report. Would recommend 10/10. https://t.co/B3I6XCKUAO
Incidentally, the Scholastica platform we are using for the Open Journal of Astrophysics is continuing to develop additional facilities. The most recent one is that the Open Journal of Astrophysics now has the facility to include supplementary files (e.g. code or data sets) along with the papers we publish. If any existing authors (i.e. of papers we have already published) would like us to add supplementary files retrospectively then please contact us with a request!
The full paper (i.e. author list plus a small amount of text) can be found here. Here are two plots from that work.
The first shows the constraints from the six loudest gravitational wave events selected for the latest work, together with the two competing measurements from Planck and SH0ES:
As you can see, the individual measurements do not constrain H0 very much. The second plot shows the effect of combining all relevant data, including a binary neutron star merger with an electromagnetic counterpart. The results are much stronger when the latter is included.
Obviously this measurement isn’t yet able to resolve the alleged tension between “high” and “low” values described on this blog passim, but it’s early days. If LIGO reaches its planned sensitivity the next observing run should provide many more events. A few hundred should get the width of the posterior distribution shown in the second figure down to a few percent, which would be very interesting indeed!
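To illustrate why combining events helps so much – and this is a toy sketch, not the actual LIGO analysis, with entirely made-up numbers – note that for independent events under a common flat prior the posterior is just the normalised product of the individual likelihoods, so the combined distribution narrows roughly as one over the square root of the number of events:

```python
import math

def gaussian(x, mu, sigma):
    # Unnormalised Gaussian likelihood (normalisation drops out below).
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# H0 grid in km/s/Mpc; each "event" is (mean, width) -- illustrative only.
grid = [50 + 0.1 * i for i in range(400)]
events = [(68, 15), (75, 20), (70, 12), (72, 18)]

# Flat prior: combined posterior = normalised product of likelihoods.
post = [1.0] * len(grid)
for mu, sigma in events:
    post = [p * gaussian(x, mu, sigma) for p, x in zip(post, grid)]
norm = sum(post) * 0.1
post = [p / norm for p in post]

# Posterior mean and width; the width is smaller than any single event's.
mean = sum(x * p for x, p in zip(grid, post)) * 0.1
std = math.sqrt(sum((x - mean) ** 2 * p for x, p in zip(grid, post)) * 0.1)
print(mean, std)
```

With four broad fake events the combined width already beats the best single one; scale the list up to a few hundred events and the fractional width drops to the few-percent level mentioned above.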
Currently Ireland spends just 1.1% of its GDP on scientific research and development and SFI currently has a heavy focus on applied research (i.e. research aligned with industry that can be exploited for short-term commercial gain). This has made life difficult for basic or fundamental science and has driven many researchers in such areas abroad, to the detriment of Ireland’s standing in the international community.
The new strategy, which will cover the period from now to 2025, plans for 15% annual rises that will boost the agency’s grant spending — the greater part of the SFI budget — from €200 million in 2020 to €376 million by 2025. Much of this is focused in a top-down manner on specific programmes and research centres but there is at least an acknowledgement of the need to support basic research, including an allocation of €11 million in 2021 for early career researchers.
The overall aim is to increase the overall R&D spend from 1.1% of gross domestic product, well below the European average of 2.2%, to 2.5% by 2025.
One of the jobs I had to do last week was to write the Annual Research Report for the Department of Theoretical Physics at Maynooth University. I am very pleased that despite the Covid-19 pandemic, over the last year we managed to score some notable successes in securing new grant awards (amounting to €1.3M altogether) as well as doubling the number of refereed publications since the previous year. This is of course under the old SFI regime. Hopefully in the next few years covered by the new SFI strategic plan we’ll be able to build on that growth still further, especially in areas related to quantum computing and quantum technology generally.
Anyway, it seems that SFI listened to at least some of the submissions made to the consultation exercise I mentioned a few months ago.
A rather pugnacious paper by George Efstathiou appeared on the arXiv earlier this week. Here is the abstract:
This paper investigates whether changes to late time physics can resolve the ‘Hubble tension’. It is argued that many of the claims in the literature favouring such solutions are caused by a misunderstanding of how distance ladder measurements actually work and, in particular, by the inappropriate use of distance ladder H0 priors. A dynamics-free inverse distance ladder shows that changes to late time physics are strongly constrained observationally and cannot resolve the discrepancy between the SH0ES data and the base LCDM cosmology inferred from Planck.
For a more detailed discussion of this paper, see Sunny Vagnozzi’s blog post. I’ll just make some general comments on the context.
One of the reactions to the alleged “tension” between the two measurements of H0 is to alter the standard model in such a way that the equation of state changes significantly at late cosmological times. This is because the two allegedly discrepant sets of measures of the cosmological distance scale (seen, for example, in the diagram below taken from the paper I blogged about a while ago here) differ in that the low values are global measures (based on observations at high redshift) while the high values are local (based on direct determinations using local sources, specifically stars of various types).
That is basically true. There is, however, another difference in the two types of distance determination: the high values of the Hubble constant are generally related to interpretations of the measured brightness of observed sources (i.e. they are based on luminosity distances) while the lower values are generally based on trigonometry (specifically they are angular diameter distances). Observations of the cosmic microwave background temperature pattern, baryon acoustic oscillations in the matter power-spectrum, and gravitational lensing studies all involve angular-diameter distances rather than luminosity distances.
Before going on, let me point out that the global (cosmological) determinations of the Hubble constant are indirect, in that they involve the simultaneous determination of a set of parameters based on a detailed model. The Hubble constant is not one of the basic parameters inferred from cosmological observations; it is derived from the others. The global estimates are therefore not obtained in the same way as the local ones, so I’m simplifying things a lot in the following discussion, which I am not claiming to be a resolution of the alleged discrepancy. I’m just thinking out loud, so to speak.
With that caveat in mind, and setting aside the possibility (or indeed probability) of observational systematics in some or all of the measurements, let us suppose that we did find that there was a real discrepancy between distances inferred using angular diameters and distances using luminosities in the framework of the standard cosmological model. What could we infer?
Well, if the Universe is described by a space-time with the Robertson-Walker Metric (which is the case if the Cosmological Principle applies in the framework of General Relativity) then angular diameter distances and luminosity distances differ only by a factor of (1+z)², where z is the redshift: DL = DA(1+z)².
I’ve included here some slides from undergraduate course notes to add more detail to this if you’re interested:
The result DL = DA(1+z)² is an example of Etherington’s Reciprocity Theorem. If we did find that somehow this theorem were violated, how could we modify our cosmological theory to explain it?
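It is easy to see the relation at work numerically in a flat Friedman model. In such a model both distances are built from the same comoving distance, so the reciprocity relation holds by construction – which is precisely the point of the next paragraph. Here is a small sketch, with an illustrative (not authoritative) choice of cosmological parameters:

```python
import math

H0 = 70.0            # Hubble constant in km/s/Mpc (illustrative value)
Om, OL = 0.3, 0.7    # matter and dark-energy density parameters (flat model)
c = 299792.458       # speed of light in km/s

def E(z):
    # Dimensionless expansion rate H(z)/H0 for flat matter + Lambda.
    return math.sqrt(Om * (1 + z) ** 3 + OL)

def comoving_distance(z, n=10000):
    # Trapezoidal integration of (c/H0) * integral of dz'/E(z').
    dz = z / n
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, n))
    return (c / H0) * s * dz

z = 1.5
DC = comoving_distance(z)
DA = DC / (1 + z)    # angular-diameter distance (flat universe)
DL = DC * (1 + z)    # luminosity distance
print(DL / (DA * (1 + z) ** 2))  # ≈ 1.0, up to floating-point rounding
```

The interesting question is what happens when this ratio is measured observationally and does not come out to be unity – which is what the rest of the discussion is about.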
Well, one thing we couldn’t do is change the evolutionary history of the scale factor a(t) within a Friedman model. The redshift just depends on the scale factor when light is emitted and the scale factor when it is received, not how it evolves in between. And because the evolution of the scale factor is determined by the Friedman equation that relates it to the energy contents of the Universe, changing the latter won’t help either no matter how exotic the stuff you introduce (as long as it only interacts with light rays via gravity). In the light of this, the fact there are significant numbers of theorists pushing for such things as interacting dark-energy models to engineer late-time changes in expansion history is indeed a bit perplexing.
In the light of the caveat I introduced above, I should say that changing the energy contents of the Universe might well shift the allowed parameter region and thereby reconcile the cosmological determination of the Hubble constant with local values. I am just talking about a hypothetical simpler case.
In order to violate the reciprocity theorem one would have to tinker with something else. An obvious possibility is to abandon the Robertson-Walker metric. We know that the Universe is not exactly homogeneous and isotropic, so one could appeal to the gravitational lensing effect of lumpiness as the origin of the discrepancy. This must happen to some extent, but understanding it fully is very hard because we have far from perfect understanding of globally inhomogeneous cosmological models.
Etherington’s theorem requires light rays to be described by null geodesics, which would not be the case if photons had mass, so introducing massive photons is one way out. The theorem also requires photon numbers to be conserved, so some mysterious mechanism for making photons disappear might do the trick; adding an exotic field that interacts with light in a peculiar way is another possibility.
Anyway, my main point here is that if one could pin down the Hubble constant tension as a discrepancy between angular-diameter and luminosity based distances then the most obvious place to look for a resolution is in departures of the metric from the Robertson-Walker form. The reciprocity theorem applies to any GR-based metric theory, i.e. just about anything without torsion in the metric, so it applies to inhomogeneous cosmologies based on GR too. However, in such theories there is no way of defining a global scale factor a(t) so the reciprocity relation applies only locally, in a different form for each source and observer.
All of this begs the question of whether or not there is real tension in the H0 measures. I certainly have better things to get tense about. That gives me an excuse to include my long-running poll on the issue: