Archive for Cosmology

New Publication at the Open Journal of Astrophysics!

Posted in OJAp Papers, Uncategorized on July 19, 2019 by telescoper

I was a bit busy yesterday doing a number of things, including publishing a new paper at The Open Journal of Astrophysics, but I didn’t get time to write a post about it until now. Anyway, here is how the new paper looks on the site:

The authors are Tom Kitching, Paniez Paykari and Mark Cropper of the Mullard Space Sciences Laboratory (of University College London) and Henk Hoekstra of Leiden Observatory.

You can find the accepted version on the arXiv here. This version was accepted after modifications requested by the referee and editor. Because this is an overlay journal the authors have to submit the accepted version to the arXiv (which we then check against the copy submitted to us) before publishing. We actually have a bunch of papers that we have accepted but are awaiting the appearance of the final version on the arXiv so we can validate it.

Anyway, this is another one for the `Cosmology and Nongalactic Astrophysics’ folder. We would be happy to get more submissions from other areas of astrophysics. Hint! Hint!

P.S. Just a reminder that we now have an Open Journal of Astrophysics Facebook page where you can follow updates from the Journal should you wish.

The Hubble Constant from the Tip of the Red Giant Branch

Posted in The Universe and Stuff on July 16, 2019 by telescoper

At the risk of boring everyone again with Hubble constant news, there’s yet another paper on the arXiv about the Hubble constant. This one is another `local’ measurement, in that it uses properties of nearby stars, this time based on a new calibration of the Tip of the Red Giant Branch. It is by Wendy Freedman et al. and its abstract reads:

We present a new and independent determination of the local value of the Hubble constant based on a calibration of the Tip of the Red Giant Branch (TRGB) applied to Type Ia supernovae (SNeIa). We find a value of H0 = 69.8 +/- 0.8 (+/-1.1% stat) +/- 1.7 (+/-2.4% sys) km/sec/Mpc. The TRGB method is both precise and accurate, and is parallel to, but independent of the Cepheid distance scale. Our value sits midway in the range defined by the current Hubble tension. It agrees at the 1.2-sigma level with that of the Planck 2018 estimate, and at the 1.7-sigma level with the SHoES measurement of H0 based on the Cepheid distance scale. The TRGB distances have been measured using deep Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) imaging of galaxy halos. The zero point of the TRGB calibration is set with a distance modulus to the Large Magellanic Cloud of 18.477 +/- 0.004 (stat) +/- 0.020 (sys) mag, based on measurement of 20 late-type detached eclipsing binary (DEB) stars, combined with an HST parallax calibration of a 3.6 micron Cepheid Leavitt law based on Spitzer observations. We anchor the TRGB distances to galaxies that extend our measurement into the Hubble flow using the recently completed Carnegie Supernova Project I sample containing about 100 well-observed SNeIa. There are several advantages of halo TRGB distance measurements relative to Cepheid variables: these include low halo reddening, minimal effects of crowding or blending of the photometry, only a shallow (calibrated) sensitivity to metallicity in the I-band, and no need for multiple epochs of observations or concerns of different slopes with period. In addition, the host masses of our TRGB host-galaxy sample are higher on average than the Cepheid sample, better matching the range of host-galaxy masses in the CSP distant sample, and reducing potential systematic effects in the SNeIa measurements.

You can download a PDF of the paper here.

Note that the value obtained using the TRGB here lies in between the two determinations using the cosmic microwave background and the Cepheid distance scale that I discussed, for example, here. This is illustrated nicely by the following couple of Figures:

I know that this result – around 70 km s⁻¹ Mpc⁻¹ – has made some people a bit more relaxed about the apparent tension between the previous measurements, but what do you think? Here’s a poll so you can express your opinion.

My own opinion is that if there isn’t any tension at all at the one-sigma level then you should consider the possibility that you got sigma wrong!
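Out of curiosity, the quoted tensions are easy to reproduce. Here is a minimal sketch (assuming the Planck 2018 value of 67.4 ± 0.5 and the SH0ES value of 74.03 ± 1.42 km s⁻¹ Mpc⁻¹, which are not quoted in the abstract above, and adding the TRGB statistical and systematic errors in quadrature):

```python
import math

def tension(h1, s1, h2, s2):
    """Number of sigma separating two independent H0 measurements."""
    return abs(h1 - h2) / math.hypot(s1, s2)

# TRGB value 69.8 with stat and sys errors combined in quadrature
trgb_err = math.hypot(0.8, 1.7)

print(f"TRGB vs Planck: {tension(69.8, trgb_err, 67.4, 0.5):.1f} sigma")
print(f"TRGB vs SH0ES:  {tension(69.8, trgb_err, 74.03, 1.42):.1f} sigma")
```

This recovers roughly the 1.2σ and 1.7σ figures quoted in the abstract.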

Hubble’s Constant – A Postscript on w

Posted in The Universe and Stuff on July 15, 2019 by telescoper

Last week I posted about a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. You can download a PDF of the full paper here.

Reading the paper through over the weekend I was struck by Figure 6:

This shows the constraints on H0 and the parameter w which is used to describe the dark energy component. Bear in mind that these estimates of cosmological parameters actually involve the simultaneous estimation of several parameters, six in the case of the standard ΛCDM model. Incidentally, H0 is not one of the six basic parameters of the standard model – it is derived from the others – and some important cosmological observations are relatively insensitive to its value.

The parameter w is the equation of state parameter for the dark energy component, so that the pressure p is related to the energy density ρc² via p = wρc². The fixed value w=-1 applies if the dark energy is of the form of a cosmological constant (or vacuum energy). I explained why here. Non-relativistic matter (dominated by rest-mass energy) has w=0 while ultra-relativistic matter has w=1/3.

Applying the cosmological version of the thermodynamic relation for adiabatic expansion “dE=-pdV” one finds that ρ ∼ a^(-3(1+w)), where a is the cosmic scale factor. Note that w=-1 gives a constant energy density as the Universe expands (the cosmological constant); w=0 gives ρ ∼ a^(-3), as expected for `ordinary’ matter.
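This scaling law is simple enough to tabulate; a quick sketch of how the energy density responds to a doubling of the scale factor for the standard cases:

```python
def rho_scaling(a, w, rho0=1.0):
    """Energy density vs scale factor for equation of state p = w rho c^2."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# Doubling the scale factor:
for w, label in [(-1.0, "cosmological constant"),
                 (0.0,  "non-relativistic matter"),
                 (1/3,  "radiation")]:
    print(f"w = {w:+.2f}: rho(2)/rho(1) = {rho_scaling(2.0, w):.4f}  ({label})")
```

As expected, w=-1 leaves the density unchanged, matter dilutes as 1/8, and radiation as 1/16.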

As I already mentioned, in the standard cosmological model w is fixed at w=-1, but if it is treated as a free parameter then it can be added to the usual six to produce the Figure shown above. I should add for Bayesians that this plot shows the posterior probability assuming a uniform prior on w.

What is striking is that the data seem to prefer a very low value of w. Indeed the peak of the likelihood (which determines the peak of the posterior probability if the prior is flat) appears to be off the bottom of the plot. It must be said that the size of the black contour lines (at one sigma and two sigma for dashed and solid lines respectively) suggests that these data aren’t really very informative; the case w=-1 is well within the 2σ contour. In other words, one might get a slightly better fit by allowing the equation of state parameter to float, but the quality of the fit might not improve sufficiently to justify the introduction of another parameter.

Nevertheless it is worth mentioning that if it did turn out, for example, that w=-2, that would imply ρ ∼ a^(+3), i.e. an energy density that increases steeply as a increases (i.e. as the Universe expands). That would be pretty wild!

On the other hand, there isn’t really any physical justification for cases with w<-1 (in terms of a plausible model) which, in turn, makes me doubt the reasonableness of imposing a flat prior. My own opinion is that if dark energy turns out not to be of the simple form of a cosmological constant then it is likely to be too complicated to be expressed in terms of a single number anyway.

 

Postscript to this postscript: take a look at this paper from 2002!

Hubble’s Constant – The Tension Mounts!

Posted in The Universe and Stuff on July 12, 2019 by telescoper

There’s a new paper on the arXiv (by Wong et al.) that adds further evidence to the argument about whether or not the standard cosmological model is consistent with different determinations of the Hubble Constant. The abstract is here:

You can download a PDF of the full paper here.

You will see that these measurements, based on observations of time delays in multiply imaged quasars that have been gravitationally lensed, give higher values of the Hubble constant than determinations from, e.g., the Planck experiment.

Here’s a nice summary of the tension in pictorial form:

And here are some nice pictures of the lensed quasars involved in the latest paper:

 

It’s interesting that these determinations seem more consistent with local distance-scale approaches than with global cosmological measurements but the possibility remains of some unknown systematic.

Time, methinks, to resurrect my long-running poll on this!

Please feel free to vote. At the risk of inciting Mr Hine to clog up my filter with further gibberish, you may also comment through the box below.

 

Cosmology with the Minimal Spanning Tree

Posted in The Universe and Stuff on July 8, 2019 by telescoper

There’s a nice paper on the arXiv (by Naidoo et al) with the abstract:

The code mentioned at the end can be found here.

The appearance of this paper gives me an excuse to mention that I actually wrote a paper (with Russell Pearson) on the use of the Minimal (or Minimum) Spanning Tree (MST) to analyze galaxy clustering way back in 1995.

Here’s how we described the Minimal Spanning Tree in that old paper:

Strictly speaking, we used the Euclidean Minimum Spanning Tree, in which the total length of the lines connecting a set of points in a tree is minimized. In more general cases a weight can be assigned to each link that is not necessarily defined simply by the length. Here is a visual illustration (which I think we drew by hand!)

You can think of the MST as a sort of pre-processing technique which accentuates linear features in a point process that might otherwise get lost in shot noise. Once one has a tree (pruned and/or separated as necessary) one can then extract various statistical properties in order to quantify the pattern present.
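For anyone who wants to play along before the Python code mentioned above, here is a toy sketch of the Euclidean MST using Prim’s algorithm on random points (this is just an illustration of the construction, not the authors’ code):

```python
import math
import random

def euclidean_mst(points):
    """Prim's algorithm: edges (i, j, length) of the Euclidean minimum spanning tree."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n       # cheapest edge length connecting each point to the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # pick the cheapest point not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u, best[u]))
        # relax the connection costs of the remaining points
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v], parent[v] = d, u
    return edges

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(50)]
tree = euclidean_mst(pts)
print(f"{len(tree)} edges, total length {sum(e[2] for e in tree):.3f}")
```

Once the tree is built, statistics of the edge-length distribution (and of the pruned tree) are what carry the clustering information.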

Way back in 1995 there were far fewer datasets available to which to apply this method and it didn’t catch on at the time. Now, with the ever-increasing availability of spectroscopic redshift surveys, maybe its time has come at last! I look forward to playing with the Python code in due course!

 

New Publication at the Open Journal of Astrophysics!

Posted in OJAp Papers, Open Access, The Universe and Stuff on June 26, 2019 by telescoper

In a blog I posted just a couple of days ago I mentioned that there were a number of papers about to be published by the Open Journal of Astrophysics and, to show that I wasn’t making that up, the first of the latest batch has just appeared. Here is how it looks on the site!

There are thirteen authors altogether (from Oxford, Liverpool, Edinburgh, Leiden, British Columbia, Zurich and Munich); the lead author is Elisa

You can find the accepted version on the arXiv here. This version was accepted after modifications requested by the referee and editor.

This is another one for the `Cosmology and Nongalactic Astrophysics’ folder. We would be happy to get more submissions from other areas of astrophysics. Hint! Hint!

A few people have asked why the Open Journal of Astrophysics is not yet listed in the Directory of Open Access Journals. The answer to that is simple: to qualify for listing, a journal must publish a minimum of five papers in a calendar year. Since OJA underwent a fairly long hiatus after publishing its first batch of papers we haven’t yet qualified. However, this new one means that we have now published five papers so have reached the qualifying level. I’ll put in the application as soon as I can, but will probably wait a little because we have a bunch of other papers coming out very soon to add to that number.

P.S. Please note that we now have an Open Journal of Astrophysics Facebook page where you can follow updates from the Journal should you wish.

Dark Energy – Lectures by Varun Sahni

Posted in The Universe and Stuff on June 9, 2019 by telescoper

I thought I’d share this lecture course about Dark Energy here. It was delivered by Varun Sahni at an international school on cosmology earlier this year. The material is quite technical in places but I’m sure these lectures will prove a very helpful introduction to, for example, new PhD students in this area. Varun has been a very good friend and colleague of mine for many years, and he is an excellent lecturer!

Here are the three lectures:

The 2019 Gruber Prize for Cosmology: Nick Kaiser and Joe Silk

Posted in The Universe and Stuff on May 9, 2019 by telescoper

I’ve just heard that the Gruber Foundation has announced the winners of this year’s Gruber Prize for cosmology, namely Nick Kaiser and Joe Silk. Worthy winners the both of them! Congratulations!

Here’s some text taken from the press release:

The recipients of the 2019 prize are Nicholas Kaiser and Joseph Silk, both of whom have made seminal contributions to the theory of cosmological structure formation and to the creation of new probes of dark matter. Though they have worked mostly independently of each other, the two theorists’ results are complementary in these major areas, and have transformed modern cosmology — not once but twice.

The two recipients will share the $500,000 award, and each will be presented with a gold medal at a ceremony that will take place on 28 June at the CosmoGold conference at the Institut d’Astrophysique de Paris in France.

The physicists’ independent contributions to the theory of cosmological structure formation have been instrumental in building a more complete picture of how the early Universe evolved into the Universe as astronomers observe it today. In 1967 and 1968, Silk predicted that density fluctuations below a critical size in the Cosmic Microwave Background, the remnant radiation “echoing” the Big Bang, would have dissipated. This phenomenon, later verified by increasingly high precision measurements of the CMB, is now called “Silk Damping”.

In the meantime, ongoing observations of the large-scale structure of the Universe, which evolved from the larger CMB fluctuations, were subject to conflicting interpretations. In a series of papers beginning in 1984, Kaiser helped to resolve these debates by providing statistical tools that would allow astronomers to separate “noise” from data, reducing ambiguity in the observations.

Kaiser’s statistical methodology was also influential in dark matter research; the DEFW collaboration (Marc Davis, George Efstathiou, Carlos Frenk, and Simon D. M. White) utilised it to determine the distribution and velocity of dark matter in the Universe, and discovered its non-relativistic nature (moving at a velocity not approaching the speed of light). Furthermore, Kaiser devised an additional statistical methodology to detect dark matter distribution through weak lensing — an effect by which foreground matter distorts the light of background galaxies, providing a measure of the mass of both. Today weak lensing is among cosmology’s most prevalent tools.

Silk has also been impactful in dark matter research, having proposed in 1984 a method of investigating dark matter particles by exploring the possibilities of their self-annihilations into particles that we can identify (photons, positrons and antiprotons). This strategy continues to drive research worldwide.

Both Kaiser and Silk are currently affiliated with institutions in Paris, Kaiser as a professor at the École Normale Supérieure, and Silk as an emeritus professor and a research scientist at the Institut d’Astrophysique de Paris (in addition to a one-quarter appointment at The Johns Hopkins University). Among their numerous significant contributions to their field, their work on the CMB and dark matter has truly revolutionised our understanding of the Universe.

I haven’t worked directly with either Nick Kaiser or Joe Silk but both had an enormous influence on me, especially early on in my career. When I was doing my PhD, Nick was in Cambridge and Joe was in Berkeley. In fact I think Nick was the first person ever to ask me a question during a conference talk – which terrified the hell out of me because I didn’t know him except by scientific reputation and didn’t realize what a nice guy he is! Anyway his 1984 paper on cluster correlations was the direct motivation for my very first publication (in 1986).

I don’t suppose either will be reading this but heartiest congratulations to both, and if they follow my advice they won’t spend all the money in the same shop!

P.S. Both Nick and Joe are so distinguished that each has appeared in my Astronomy Lookalikes gallery (here and here).

Redshift and Distance in Cosmology

Posted in The Universe and Stuff on April 29, 2019 by telescoper

I was looking for a copy of this picture this morning and when I found it I thought I’d share it here. It was made by Andy Hamilton and appears in this paper. I used it (with permission) in the textbook I wrote with Francesco Lucchin, which was published in 2003.

I think this is a nice simple illustration of the effect of the density parameter Ω and the cosmological constant Λ on the relationship between redshift and (comoving) distance in the standard cosmological models based on the Friedmann Equations.

On the left there is the old standard model (from when I was a lad) in which space is Euclidean and there is a critical density of matter; this is called the Einstein de Sitter model in which Λ=0. On the right you can see something much closer to the current standard model of cosmology, with a lower density of matter but with the addition of a cosmological constant. Notice that in the latter case the distance to an object at a given redshift is far larger than in the former. This is, for example, why supernovae at high redshift look much fainter in the latter model than in the former, and why these measurements are so sensitive to the presence of a cosmological constant.

In the middle there is a model with no cosmological constant but a low density of matter; this is an open Universe. Because it decelerates much more slowly than in the Einstein de Sitter model, the distance out to a given redshift is larger (but not quite as large as the case on the right, which is an accelerating model), but the main property of interest in the open model is that the space is not Euclidean, but curved. The effect of this is that an object of fixed physical size at a given redshift subtends a much smaller angle than in the cases either side. That shows why observations of the pattern of variations in the temperature of the cosmic microwave background across the sky yield so much information about the spatial geometry.
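The ordering of the three cases is easy to reproduce numerically by integrating dz/E(z) for the comoving distance. Here is a sketch (assuming H0 = 70 km s⁻¹ Mpc⁻¹ and illustrative density parameters, not the exact values behind the figure):

```python
import math

def comoving_distance(z, om, ol, h0=70.0, n=10000):
    """Comoving distance in Mpc: (c/H0) * integral of dz'/E(z'), trapezoid rule."""
    c = 299792.458  # speed of light in km/s
    ok = 1.0 - om - ol  # curvature term
    E = lambda zz: math.sqrt(om * (1 + zz)**3 + ok * (1 + zz)**2 + ol)
    dz = z / n
    return (c / h0) * sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) / 2 * dz
                          for i in range(n))

for name, om, ol in [("Einstein-de Sitter", 1.0, 0.0),
                     ("open",               0.3, 0.0),
                     ("Lambda-CDM",         0.3, 0.7)]:
    print(f"{name:18s} D_C(z=3) = {comoving_distance(3.0, om, ol):6.0f} Mpc")
```

The Einstein-de Sitter model gives the smallest distance at fixed redshift, the open model a larger one, and the Λ model the largest, just as in the picture.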

It’s a very instructive picture, I think!

Poisson (d’Avril) Point Processes

Posted in Uncategorized on April 2, 2019 by telescoper

I was very unimpressed by yesterday’s batch of April Fool jokes. Some of them were just too obvious:

I’m glad I didn’t try to do one.

Anyway, I noticed that an old post of mine was getting some traffic and when I investigated I found that some of the links to pictures were dead. So I’ve decided to refresh it and post again.

–0–

I’ve got a thing about randomness. For a start I don’t like the word, because it covers such a multitude of sins. People talk about there being randomness in nature when what they really mean is that they don’t know how to predict outcomes perfectly. That’s not quite the same thing as things being inherently unpredictable; statements about the nature of reality are ontological, whereas I think randomness is only a useful concept in an epistemological sense. It describes our lack of knowledge: just because we don’t know how to predict doesn’t mean that it can’t be predicted.

Nevertheless there are useful mathematical definitions of randomness and it is also (sometimes) useful to make mathematical models that display random behaviour in a well-defined sense, especially in situations where one has to take into account the effects of noise.

I thought it would be fun to illustrate one such model. In a point process, the random element is a “dot” that occurs at some location in time or space. Such processes occur in a wide range of contexts: arrivals of buses at a bus stop, photons in a detector, darts on a dartboard, and so on.

Let us suppose that we think of such a process happening in time, although what follows can straightforwardly be generalised to things happening over an area (such as a dartboard) or within some higher-dimensional region. It is also possible to invest the points with some other attributes; processes like this are sometimes called marked point processes, but I won’t discuss them here.

The “most” random way of constructing a simple point process is to assume that each event happens independently of every other event, and that there is a constant probability per unit time of an event happening. This type of process is called a Poisson process, after the French mathematician Siméon-Denis Poisson, who was born in 1781. He was one of the most creative and original physicists of all time: besides fundamental work on electrostatics and the theory of magnetism for which he is famous, he also built greatly upon Laplace’s work in probability theory. His principal result was to derive a formula giving the probability of a given number of random events when the probability of each one is very low. The Poisson distribution, as it is now known and which I will come to shortly, is related to this original calculation; it was subsequently shown that this distribution amounts to a limiting form of the binomial distribution. Just to add to the connections between probability theory and astronomy, it is worth mentioning that in 1833 Poisson wrote an important paper on the motion of the Moon.

In a finite interval of duration T the mean (or expected) number of events for a Poisson process will obviously just be proportional to the product of the rate per unit time and T itself; call this product λ.

The full distribution is then of the form P(x) = λ^x e^(-λ)/x!

This gives the probability that a finite interval contains exactly x events. It can be neatly derived from the binomial distribution by dividing the interval into a very large number of very tiny pieces, each one of which becomes a Bernoulli trial. The probability of success (i.e. of an event occurring) in each trial is extremely small, but the number of trials becomes extremely large in such a way that the mean number of successes is λ. In this limit the binomial distribution takes the form of the above expression. The variance of this distribution is interesting: it is also λ. This means that the typical fluctuations within the interval are of order the square root of λ on a mean level of λ, so the fractional variation is of the famous “one over root n” form that is a useful estimate of the expected variation in point processes. Indeed, it’s a useful rule-of-thumb for estimating likely fluctuation levels in a host of statistical situations.
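The limiting process described above is easy to check numerically; a quick sketch comparing the Poisson distribution with a binomial built from many tiny Bernoulli trials:

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) for a Poisson distribution with mean lam."""
    return lam**x * math.exp(-lam) / math.factorial(x)

def binomial_pmf(k, n, p):
    """P(K = k) for n Bernoulli trials with success probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

lam = 4.0
n = 100000        # many tiny sub-intervals, i.e. many Bernoulli trials...
p = lam / n       # ...each with a very small success probability, mean fixed at lam
for x in range(6):
    print(x, round(poisson_pmf(x, lam), 6), round(binomial_pmf(x, n, p), 6))
```

The two columns agree to several decimal places, illustrating the binomial-to-Poisson limit.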

If football were a Poisson process with a mean number of goals per game of, say, 2 then we would expect most games to have 2 plus or minus 1.4 (the square root of 2) goals, i.e. between about 0.6 and 3.4. That is actually not far from what is observed, and the distribution of goals per game in football matches is actually quite close to a Poisson distribution.
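For what it’s worth, a quick sketch of the goals-per-game probabilities with a mean of 2:

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) for a Poisson distribution with mean lam."""
    return lam**x * math.exp(-lam) / math.factorial(x)

lam = 2.0   # mean goals per game
for g in range(6):
    print(f"{g} goals: {poisson_pmf(g, lam):.3f}")

# fraction of games within one standard deviation of the mean (roughly 1 to 3 goals)
within = sum(poisson_pmf(g, lam) for g in (1, 2, 3))
print(f"P(1-3 goals) = {within:.2f}")
```

About 72% of games fall in the 1-to-3-goal range, consistent with the plus-or-minus-root-lambda rule of thumb.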

This idea can be straightforwardly extended to higher-dimensional processes. If points are scattered over an area with a constant probability per unit area then the mean number in a finite area will also be some number λ and the same formula applies.

As a matter of fact I first learned about the Poisson distribution when I was at school, doing A-level mathematics (which in those days actually included some mathematics). The example used by the teacher to illustrate this particular bit of probability theory was a two-dimensional one from biology. The skin of a fish was divided into little squares of equal area, and the number of parasites found in each square was counted. A histogram of these numbers accurately follows the Poisson form. For years I laboured under the delusion that it was given this name because it was something to do with fish, but then I never was very quick on the uptake.
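A counts-in-cells check like the fish-parasite one is easy to simulate; a sketch in which points are scattered uniformly and counted in a grid of equal-area cells (the tell-tale Poisson signature is variance ≈ mean):

```python
import random

random.seed(0)
n_points, grid = 20000, 20      # points scattered over a grid x grid mesh of cells
counts = [[0] * grid for _ in range(grid)]
for _ in range(n_points):
    x, y = random.random(), random.random()
    counts[int(y * grid)][int(x * grid)] += 1

flat = [c for row in counts for c in row]
mean = sum(flat) / len(flat)
var = sum((c - mean)**2 for c in flat) / len(flat)
print(f"mean = {mean:.2f}, variance = {var:.2f}")   # Poisson: variance ~ mean
```

A histogram of `flat` would follow the Poisson form, just like the parasite counts.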

This is all very well, but point processes are not always of this Poisson form. Points can be clustered, so that having one point at a given position increases the conditional probability of having others nearby. For example, galaxies like those shown in the nice picture are distributed throughout space in a clustered pattern that is very far from the Poisson form. But it’s very difficult to tell from just looking at the picture. What is needed is a rigorous statistical analysis.

 

The statistical description of clustered point patterns is a fascinating subject, because it makes contact with the way in which our eyes and brain perceive pattern. I’ve spent a large part of my research career trying to figure out efficient ways of quantifying pattern in an objective way and I can tell you it’s not easy, especially when the data are prone to systematic errors and glitches. I can only touch on the subject here, but to see what I am talking about look at the two patterns below:


You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process and the other contains correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the top one is random and the bottom one is the one with structure to it. It is not hard to see why. The top pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the bottom one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the bottom picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The top process is also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process should be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms, which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern.
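A toy version of such a zone-of-avoidance process is easy to simulate by rejection sampling (the exclusion radius and point count here are arbitrary choices, not those used for the figure):

```python
import math
import random

random.seed(2)
r_min = 0.08                     # exclusion radius around each accepted point
points = []
attempts = 0
while len(points) < 60 and attempts < 100000:
    attempts += 1
    cand = (random.random(), random.random())
    # reject the candidate if it falls inside any existing zone of avoidance
    if all(math.dist(cand, p) >= r_min for p in points):
        points.append(cand)

print(f"placed {len(points)} points in {attempts} attempts")
```

Plotting `points` gives the characteristically smooth, anticorrelated pattern; dropping the rejection test recovers the clumpier-looking Poisson case.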

Incidentally, I got both pictures from Stephen Jay Gould’s collection of essays Bully for Brontosaurus and used them, with appropriate credit and copyright permission, in my own book From Cosmos to Chaos. I forgot to say this in earlier versions of this post.

The tendency to find things that are not there is quite well known to astronomers. The constellations which we all recognize so easily are not physical associations of stars, but are just chance alignments on the sky of things at vastly different distances in space. That is not to say that they are random, but the pattern they form is not caused by direct correlations between the stars. Galaxies form real three-dimensional physical associations through their direct gravitational effect on one another.

People are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this.  The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose.

I suppose there is an evolutionary reason why our brains like to impose order on things in a general way. More specifically scientists often use perceived patterns in order to construct hypotheses. However these hypotheses must be tested objectively and often the initial impressions turn out to be figments of the imagination, like the canals on Mars.

Now, I think I’ll complain to wordpress about the widget that links pages to a “random blog post”. I’m sure it’s not really random….