Archive for Cosmology

Cosmology: Galileo to Gravitational Waves – with Hiranya Peiris

Posted in The Universe and Stuff on September 9, 2016 by telescoper

Here’s another thing I was planning to post earlier in the summer, but for some reason forgot. It’s a video of a talk given at the Royal Institution earlier this year by eminent cosmologist Prof. Hiranya Peiris of University College London. The introduction to the talk goes like this:

Modern fundamental physics contains ideas just as revolutionary as those of Copernicus or Newton; ideas that may radically change our understanding of the world; ideas such as extra dimensions of space, or the possible existence of other universes.

Testing these concepts requires enormous energies, far higher than what is achievable by the Large Hadron Collider at CERN, and in fact, beyond any conceivable Earth-bound experiments. However, at the Big Bang, the Universe itself performed the ultimate experiment and left clues and evidence about what was behind the origin of the cosmos as we know it, and how it is evolving. And the biggest clue is the afterglow of the Big Bang itself.

In the past decade we have been able to answer age-old questions accurately, such as how old the Universe is, what it contains, and its destiny. Along with these answers have also come many exciting new questions. Join Hiranya Peiris to unravel the detective story, explaining what we have uncovered, and how we know what we know.

Hiranya Peiris is Professor of Astrophysics in the Astrophysics Group in the Department of Physics and Astronomy at University College London. She is also the Principal Investigator of the CosmicDawn project, funded by the European Research Council.

She is also a member of the Planck Collaboration and of the ongoing Dark Energy Survey, the Dark Energy Spectroscopic Instrument and the Large Synoptic Survey Telescope. Her work both delves into the Cosmic Microwave Background and contributes towards the next generation of galaxy surveys that will yield deep insights into the evolution of the Universe.

I’ve heard a lot of people talk about “Cosmic Dawn” but I’ve never met her…

Anyway, here is the video. It’s quite long (almost an hour) but very interesting and well-presented for experts and non-experts alike!

Update: I’ve just heard the news that Hiranya is shortly to take up a new job in Sweden as Director of the Oscar Klein Centre for Cosmoparticle Physics. Hearty congratulations and good luck to her!

 

George Ellis – Are there multiple universes?

Posted in The Universe and Stuff on July 18, 2016 by telescoper

So, back to Brighton and a sweltering office on Sussex University Campus. I made it back to pick up the list of names I’ll be reading out at tomorrow afternoon’s graduation ceremony in time to give me a few hours’ practice tonight. On the train back from Cardiff I remembered a discussion I had at the conference last week about the various views on cosmology, especially the idea that we might live in a multiverse. I did a bit of a dig around and found this nice video of esteemed cosmologist (and erstwhile co-author of mine) George Ellis talking about this, and also about his favourite kind of universe (namely one with a compact topology).

 

Cosmology: A Bayesian Perspective

Posted in Talks and Reviews, The Universe and Stuff on July 14, 2016 by telescoper

For those of you who are interested, here are the slides I used in my invited talk at MaxEnt 2016 (Maximum Entropy and Bayesian Methods in Science and Engineering) yesterday (13th July 2016) in Ghent (Belgium).

MaxEnt 2016: Norton’s Dome and the Cosmological Density Parameter

Posted in The Universe and Stuff on July 11, 2016 by telescoper

The second in my sequence of posts tangentially related to talks at this meeting on Maximum Entropy and Bayesian Methods in Science and Engineering is inspired by a presentation this morning by Sylvia Wenmackers. The talk featured an example which was quite new to me called Norton’s Dome. There’s a full discussion of the implications of this example at John D. Norton’s own website, from which I have taken the following picture:

[Figure: Norton’s dome, with the equation defining its shape, from John D. Norton’s website]

This is basically a problem in Newtonian mechanics, in which a particle rolls down from the apex of a dome with a particular shape in response to a vertical gravitational field. The solution is well-determined and shown in the diagram.

An issue arises, however, when you consider the case in which the particle starts at the apex of the dome with zero velocity. One solution in this case is that the particle stays put forever. However, it can be shown that there are other solutions in which the particle sits at the top for an arbitrary (finite) time before rolling down. An example would be a particle launched up the dome from some point with just enough kinetic energy to reach the top, where it is momentarily at rest before rolling down again.

Norton argues that this problem demonstrates a certain kind of indeterminism in Newtonian Mechanics. The mathematical problem with the specified initial conditions clearly has a solution in which the ball stays at the top forever. This solution is unstable, which is a familiar situation in mechanics, but this equilibrium has an unusual property related to the absence of Lipschitz continuity. One might expect that an infinitesimal asymmetric perturbation of the particle or the shape of the surface would be needed to send the particle rolling down the slope, but in this case it isn’t. This is because there isn’t just one solution that has zero velocity at the equilibrium, but an entire family, as described above. This is both curious and interesting, and it does raise the question of how to define a probability measure that describes these solutions.
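For concreteness, here is the mathematics behind that family of solutions, as set out on Norton’s page (in units chosen so that the constants drop out, with r the arc length along the surface measured from the apex and the dome height given by h = (2/(3g))r^{3/2}). Newton’s second law along the surface then reduces to

\ddot{r} = \sqrt{r},

and with initial conditions r(0)=0 and \dot{r}(0)=0 this is satisfied not only by r(t)=0 for all t but also by the one-parameter family

r(t) = \frac{1}{144}(t-T)^{4} \mbox{ for } t \geq T, \qquad r(t) = 0 \mbox{ for } t \leq T,

for any T \geq 0, as you can verify by differentiating twice. The culprit is that \sqrt{r} fails to be Lipschitz continuous at r=0, so the usual uniqueness theorem for ordinary differential equations doesn’t apply there.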

I don’t really want to go into the philosophical implications of this cute example, but it did strike me that there’s a similarity with an interesting issue in cosmology that I’ve blogged about before (in different terms).

This probably seems to have very little to do with physical cosmology, but now forget about domes and think instead about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of matter in the Universe to the value it would need to have for the expansion of the Universe eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like the man on the tightrope.
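(In the standard notation, for those who like their definitions explicit,

\Omega_0 = \frac{\rho_0}{\rho_{\rm crit}}, \qquad \rho_{\rm crit} = \frac{3H_0^{2}}{8\pi G},

where ρ0 is the present mean matter density, H0 is the Hubble constant and ρcrit is the critical density that just suffices to halt the expansion in these matter-only models.)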

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result of this is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value that it takes now, at cosmic time t0, but it changes with time.

At the beginning, i.e. at the Big Bang itself, all the Friedmann models have Ω arbitrarily close to unity at arbitrarily early times; in other words, the limit of Ω as t tends to zero is Ω=1.

If the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops. During this process Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly, one part in 10^60 will do – the Universe evolves to a state where Ω is very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor ten either way – the Universe has to be finely tuned.

The evolution of Ω is neatly illustrated by the following phase-plane diagram (taken from an old paper by Madsen & Ellis) describing a cosmological model involving a perfect fluid with an equation of state p=(γ-1)ρc². This is what happens for γ>2/3 (which includes dust, relativistic particles, etc.):

[Figure: phase-plane portrait of the evolution of Ω with scale factor S for γ>2/3, from Madsen & Ellis]

The top panel shows how the density parameter evolves with scale factor S; the bottom panel shows a completion of this portrait obtained using a transformation that allows the point at infinity to be plotted on a finite piece of paper (or computer screen).

As discussed above this picture shows that all these Friedmann models begin at S=0 with Ω arbitrarily close to unity and that the value of Ω=1 is an unstable fixed point, just like the situation of the particle at the top of the dome. If the universe has Ω=1 exactly at some time then it will stay that way forever. If it is perturbed, however, then it will eventually diverge and end up collapsing (Ω>1) or going into free expansion (Ω<1). The smaller the initial perturbation, the longer the system stays close to Ω=1.
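If you want to see this instability without redoing the full phase-plane analysis, here is a rough numerical sketch of my own (not taken from Madsen & Ellis). For a single fluid with the equation of state above, a standard manipulation of the Friedmann equations gives dΩ/dN = (3γ-2)Ω(Ω-1), where N = ln S, and that can be integrated directly:

```python
from scipy.integrate import solve_ivp

# dOmega/dN = (3*gamma - 2) * Omega * (Omega - 1), with N = ln(S).
# For gamma > 2/3 the coefficient is positive, so Omega = 1 is an unstable fixed point.
def domega_dN(N, y, gamma):
    omega = y[0]
    return [(3.0 * gamma - 2.0) * omega * (omega - 1.0)]

gamma = 1.0  # pressureless matter ("dust")
for omega0 in (1.0 - 1e-6, 1.0, 1.0 + 1e-6):   # a whisker either side of flatness
    sol = solve_ivp(domega_dN, (0.0, 13.7), [omega0], args=(gamma,), rtol=1e-8)
    print(f"Omega = {omega0:.6f} initially -> Omega = {sol.y[0, -1]:.3f} after ~14 e-folds")
```

The exactly flat trajectory stays put while its neighbours peel away towards recollapse or emptiness, just as in the top panel of the figure.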

The fact that all trajectories start at Ω(S=0)=1 means that one has to be very careful in assigning some sort of probability measure on this parameter, just as is the case with the Norton’s Dome problem I started with. About twenty years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}(\Omega-1)^{-1}.

I was very pleased with this result, which is based on a principle advanced by physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be integrated over, but that’s not an unusual state of affairs in this game. In fact it is what is called an improper prior.
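You can convince yourself of the impropriety with a few lines of numerics (a little sketch of my own, with the absolute value taken so that the expression is positive between 0 and 1): the integral of this prior over any interval touching Ω=1, or Ω=0, grows without bound as the endpoint approaches the singular value.

```python
from scipy.integrate import quad

# P(Omega) proportional to 1/(Omega * |Omega - 1|): check that it cannot be normalised.
prior = lambda omega: 1.0 / (omega * abs(omega - 1.0))

for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    value, _ = quad(prior, 1.0 + eps, 2.0, limit=200)
    print(f"integral of the prior from 1 + {eps:.0e} to 2: {value:7.2f}")
# The printed values grow roughly like -log(eps), i.e. without limit, as eps -> 0.
```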

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.
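Here is a toy illustration of that last point, with entirely made-up numbers (and the absolute value again implied in the prior): feed the same sharp likelihood, say Ω “measured” to be 0.30 ± 0.015, to both the Jaynes-type prior and a flat prior, and the posteriors are essentially indistinguishable.

```python
import numpy as np

omega = np.linspace(0.015, 3.0, 30000)                 # uniform grid in Omega
domega = omega[1] - omega[0]

# Improper prior P(Omega) ~ 1/(Omega |Omega - 1|), mildly regularised so the
# grid point nearest Omega = 1 doesn't overflow; compared with a flat prior.
prior_jaynes = 1.0 / (omega * np.maximum(np.abs(omega - 1.0), 1e-6))
prior_flat = np.ones_like(omega)

# A made-up Gaussian likelihood: Omega "measured" to be 0.30 +/- 0.015.
likelihood = np.exp(-0.5 * ((omega - 0.30) / 0.015) ** 2)

for name, prior in (("Jaynes-type", prior_jaynes), ("flat", prior_flat)):
    posterior = prior * likelihood
    posterior /= posterior.sum() * domega              # normalise on the grid
    mean = (omega * posterior).sum() * domega
    print(f"{name:11s} prior -> posterior mean of Omega = {mean:.3f}")
```

Both give a posterior mean of about 0.30; with data that constraining the prior hardly matters, which is the sense in which the likelihood rules the roost.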

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, it does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1.  Until the late 1980s there was indeed a strong ideological preference for models with Ω=1 exactly, but not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to have to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case where our Universe is in between two states in which we might have expected it to lie.

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate; this means basically that the equation of state of the contents of the universe is described by γ<2/3 rather than the case γ>2/3 described above. This drastically changes the arguments I gave above.

Without inflation the case with Ω=1 is unstable: a slight perturbation to the Universe sends it diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case has a very different behaviour. Not only is it stable, it becomes an attractor to which all possible universes converge. Here’s what the phase plane looks like in this case:

[Figure: phase-plane portrait of the evolution of Ω during an inflationary (γ<2/3) phase, showing Ω=1 as an attractor]

 

Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity.
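Rerunning the little numerical sketch from earlier with γ<2/3 shows the attractor behaviour explicitly: the coefficient in dΩ/dN = (3γ-2)Ω(Ω-1) changes sign, and even wildly non-flat starting points are driven towards Ω=1 (the numbers below are purely illustrative).

```python
from scipy.integrate import solve_ivp

# Same toy evolution equation as before, now with gamma = 0 (vacuum-energy
# domination): the coefficient (3*gamma - 2) is negative, so Omega = 1 attracts.
def domega_dN(N, y, gamma):
    omega = y[0]
    return [(3.0 * gamma - 2.0) * omega * (omega - 1.0)]

for omega0 in (0.01, 0.5, 10.0):               # grossly curved initial states
    sol = solve_ivp(domega_dN, (0.0, 30.0), [omega0], args=(0.0,), rtol=1e-10, atol=1e-12)
    print(f"Omega = {omega0:5.2f} before inflation -> Omega = {sol.y[0, -1]:.6f} after 30 e-folds")
```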

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the curvature radius becomes infinitesimally small. If there is only “ordinary” matter in the Universe then this requires that the universe have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data that is consistent with observations is larger in models with inflation than in those without it. It is rational therefore to say that inflation is more probable to have happened than the alternative.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.

 

MaxEnt 2016: Some thoughts on the infinite

Posted in The Universe and Stuff on July 10, 2016 by telescoper

I thought I might do a few posts about matters arising from talks at this workshop I’m at. Today is devoted to tutorial talks; the second one was given by John Skilling, and in the course of it he made some comments about the concept of infinity in science. These remarks weren’t really central to his talk, but struck me as an interesting subject for a few tangential remarks of my own.

Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as various kinds of pesky infinities keep cropping up in calculations. In most cases this is symptomatic of deficiencies in the theoretical foundations of the subject. Think of the ‘ultraviolet catastrophe‘ of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of relativity predicts that singularities in which physical properties become infinite occur in the centre of black holes and in the Big Bang that kicked our Universe into existence. But even these are regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.

One exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some underlying principle that restricts it to a size our human brains can cope with?

On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who frequently quotes David Hilbert:

The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought.

This comment is quoted from a famous essay, and seems to echo earlier remarks by Carl Friedrich Gauss, which can be paraphrased as follows:

Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn’t belong in mathematics.

This summarises Gauss’s attitude to the notion of a completed infinity – an attitude that long predates, and is sharply at odds with, Cantor’s theory of infinite sets. But to every Gauss there’s an equal and opposite Leibniz:

I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more effectively the perfections of its Author.

You see that it’s an argument with quite a long pedigree!

When I was at the National Astronomy Meeting in Llandudno a few years ago, I attended an excellent plenary session that featured a Gerald Whitrow Lecture, by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in this ensemble of mini-universes. An infinite number of National Astronomy Meetings, each with the same or different programmes, an infinite number of Vilenkins, etc etc.

One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained within it. We can only live in a domain wherein the laws of physics are compatible with life, so it should be no surprise that that’s what we find. There is an infinity of dead universes, but we don’t live there.

I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write or, better still, give a talk about, especially if you enjoy winding people up! What I did want to mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:

  1. An infinite ensemble of realizations
  2. A discretizer
  3. A randomizer

Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two spin states of an electron, up (↑) or down (↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general context required for cosmology, the states are the allowed “laws of physics” (i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the theory that makes it discrete much less secure. In string theory, the number of false vacua is estimated at 10^500. That’s certainly a very big number, but it’s not infinite, so it will do the job needed.

Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assigning of states to elements in the ensemble must be ergodic, meaning that it must visit the entire state space with some probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the (large but finite) complete works of Shakespeare. It’s not enough that there be an infinite number of monkeys and that the works of Shakespeare be finite. The process of typing must also be ergodic.

Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…
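Here is a silly toy example of my own (nothing to do with the string landscape, and with the works of Shakespeare shrunk to a three-letter word) that makes the point about ergodicity concrete: a monkey that only ever hits adjoining pairs of keys together can type forever without ever producing strings that the unconstrained monkey reaches almost immediately.

```python
import random
import string

random.seed(42)
TARGET = "cat"            # stand-in for the (large but finite) works of Shakespeare
N_CHARS = 2_000_000       # length of each monkey's output

def free_monkey(n):
    """Ergodic typist: each character chosen independently and uniformly."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

def paired_monkey(n):
    """Non-ergodic typist: always strikes a key and its right-hand neighbour
    together, so the output is a concatenation of the fixed pairs 'ab', 'bc', ..., 'yz'."""
    pairs = [string.ascii_lowercase[i:i + 2] for i in range(25)]
    out, length = [], 0
    while length < n:
        out.append(random.choice(pairs))
        length += 2
    return "".join(out)[:n]

print("free monkey types 'cat':  ", TARGET in free_monkey(N_CHARS))    # True (with near certainty)
print("paired monkey types 'cat':", TARGET in paired_monkey(N_CHARS))  # False, and always will be
```

However long the second monkey types, whole regions of the space of strings are simply unreachable by that process.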

More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree with Alfred North Whitehead:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

It’s a quote I think some string theorists might benefit from reading!

Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but with either an infinite number of possible configurations or a non-ergodic selection mechanism then there would be no guarantee that each possibility would actually happen. The success of this explanation consequently rests on quite stringent assumptions.

I’m a sceptic about this whole scheme for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to a realized infinity of false vacua. If such a thing exists, I’d really like to hear about it!

R.I.P. Tom Kibble (1932-2016)

Posted in The Universe and Stuff on June 2, 2016 by telescoper

Yet again, I find myself having to use this blog to pass on some very sad news. Distinguished theoretical physicist Tom Kibble (below) passed away today, at the age of 83.

[Photograph of Tom Kibble]

Sir Thomas Walter Bannerman Kibble FRS (to give his full name) worked on quantum field theory, especially the interface between high-energy particle physics and cosmology. He worked on mechanisms of symmetry breaking, phase transitions and the topological defects (monopoles, cosmic strings or domain walls) that can be formed in some theories of the early Universe; he was probably most famous for introducing the idea of cosmic strings into modern cosmology, a subject he later reviewed in an influential article with Mark Hindmarsh. Although there isn’t yet any observational support for this idea, it has generated a great deal of very interesting research.

Tom was indeed an extremely distinguished scientist, but what most people will remember best is that he was an absolutely lovely human being. Gently spoken and impeccably courteous, he was always receptive to new ideas and gave enormous support to younger researchers. He will be very sadly missed by friends and colleagues across the physics world.

Rest in peace, Tom Kibble (1932-2016).

 

What does “Big Data” mean to you?

Posted in The Universe and Stuff on April 7, 2016 by telescoper

On several occasions recently I’ve had to talk about Big Data for one reason or another. I’m always at a disadvantage when I do that because I really dislike the term. Clearly I’m not the only one who feels this way:

[Meme: “Say Big Data one more time”]

For one thing the term “Big Data” seems to me like describing the Ocean as “Big Water”. For another, it’s not really just how big the data set is that matters. Size isn’t everything, after all. There is much truth in Stalin’s comment that “quantity has a quality all its own”, in that very large data sets allow you to do things you wouldn’t even try with smaller ones, but it is often complexity, rather than sheer size, that requires new methods of analysis.

[Planck all-sky map of the temperature pattern in the cosmic microwave background]

The biggest event in my own field of cosmology in the last few years has been the Planck mission. The data set is indeed huge: the above map of the temperature pattern in the cosmic microwave background has no fewer than 167 million pixels. That certainly caused some headaches in the analysis pipeline, but I think I would argue that this wasn’t really a Big Data project. I don’t mean that to be insulting to anyone, just that the main analysis of the Planck data was aimed at doing something very similar to what had been done (by WMAP), i.e. extracting the power spectrum of temperature fluctuations:

[Planck temperature power spectrum]

It’s a wonderful result of course that extends the measurements that WMAP made up to much higher frequencies, but Planck’s goals were phrased in similar terms to those of WMAP – to pin down the parameters of the standard model to as high accuracy as possible. For me, a real “Big Data” approach to cosmic microwave background studies would involve doing something that couldn’t have been done at all with a smaller data set. An example that springs to mind is looking for indications of effects beyond the standard model.

Moreover what passes for Big Data in some fields would be just called “data” in others. For example, the Atlas Detector on the Large Hadron Collider represents about 150 million sensors delivering data 40 million times per second. There are about 600 million collisions per second, out of which perhaps one hundred per second are useful. The issue here is then one of dealing with an enormous rate of data in such a way as to be able to discard most of it very quickly. The same will be true of the Square Kilometre Array, which will acquire exabytes of data every day, out of which perhaps one petabyte will need to be stored. Both these projects involve data sets much bigger and more difficult to handle than what might pass for Big Data in other arenas.

Books you can buy at airports about Big Data generally list the following four or five characteristics:

  1. Volume
  2. Velocity
  3. Variety
  4. Veracity
  5. Variability

The first two are about the size and acquisition rate of the data mentioned above, but the others are more about qualitatively different matters. For example, in cosmology nowadays we have to deal with data sets which are indeed quite large, but also very different in form. We need to be able to do efficient joint analyses of heterogeneous data structures with very different sampling properties and systematic errors in such a way that we get the best science results we can. Now that’s a Big Data challenge!

 

The Distribution of Cauchy

Posted in Bad Statistics, The Universe and Stuff on April 6, 2016 by telescoper

Back into the swing of teaching after a short break, I have been doing some lectures this week about complex analysis to theoretical physics students. The name of a brilliant French mathematician called Augustin Louis Cauchy (1789-1857) crops up very regularly in this branch of mathematics, e.g. in the Cauchy integral formula and the Cauchy-Riemann conditions, which reminded me of some old jottings I made about the Cauchy distribution. I never used them in the publication to which they related, so I thought I’d just quickly pop the main idea on here in the hope that some amongst you might find it interesting and/or amusing.

What sparked this off is that the simplest cosmological models (including the particular one we now call the standard model) assume that the primordial density fluctuations – the ones we see imprinted in the pattern of temperature fluctuations in the cosmic microwave background, and which we think gave rise to the large-scale structure of the Universe through the action of gravitational instability – were distributed according to Gaussian statistics (as predicted by the simplest versions of the inflationary universe theory). Departures from Gaussianity would therefore, if found, yield important clues about physics beyond the standard model.

Cosmology isn’t the only place where Gaussian (normal) statistics apply. In fact they arise fairly generically, in circumstances where variation results from the linear superposition of independent influences, by virtue of the Central Limit Theorem. Thermal noise in experimental detectors is often treated as following Gaussian statistics, for example.

The Gaussian distribution has some nice properties that make it possible to place meaningful bounds on the statistical accuracy of measurements made in the presence of Gaussian fluctuations. For example, we all know that the margin of error of the determination of the mean value of a quantity from a sample of n independent Gaussian-distributed measurements varies as 1/\sqrt{n}; the larger the sample, the more accurately the global mean can be known. In the cosmological context this is basically why mapping a larger volume of space can lead, for instance, to a more accurate determination of the overall mean density of matter in the Universe.

However, although the Gaussian assumption often applies it doesn’t always apply, so if we want to think about non-Gaussian effects we have to think also about how well we can do statistical inference if we don’t have Gaussianity to rely on.

That’s why I was playing around with the peculiarities of the Cauchy distribution. This distribution comes up in a variety of real physics problems so it isn’t an artificially pathological case. Imagine you have two independent variables X and Y each of which has a Gaussian distribution with zero mean and unit variance. The ratio Z=X/Y has a probability density function of the form

p(z)=\frac{1}{\pi(1+z^2)},

which is a Cauchy distribution. There’s nothing at all wrong with this as a distribution – it’s not singular anywhere and integrates to unity as a pdf should. However, it does have a peculiar property that none of its moments is finite, not even the mean value!
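A quick numerical check of this (a sketch of my own) uses the fact that the corresponding cumulative distribution is F(z) = 1/2 + arctan(z)/π, so a Cauchy variate should fall inside |z|<1 exactly half the time, with quartiles at z = ±1. Samples built as ratios of standard normals do just that:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
z = rng.standard_normal(n) / rng.standard_normal(n)   # ratio of independent standard normals

# For the Cauchy pdf p(z) = 1/(pi*(1 + z**2)) the CDF is 1/2 + arctan(z)/pi,
# so P(|Z| < 1) = 1/2 and the quartiles sit at z = -1 and z = +1.
print("fraction with |z| < 1:", (np.abs(z) < 1.0).mean())    # close to 0.5
print("empirical quartiles:  ", np.percentile(z, [25, 75]))  # close to [-1, +1]
```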

Following on from this property is the fact that Cauchy-distributed quantities violate the Central Limit Theorem. If we take n independent Gaussian variables then the distribution of the sum X_1+X_2+\ldots+X_n has the normal form, but this is also true (for large enough n) for the sum of n independent variables having any distribution, as long as that distribution has finite variance.

The Cauchy distribution has infinite variance so the distribution of the sum of independent Cauchy-distributed quantities Z_1+Z_2+\ldots+Z_n doesn’t tend to a Gaussian. In fact the distribution of the sum of any number of independent Cauchy variates is itself a Cauchy distribution. Moreover the distribution of the mean of a sample of size n does not depend on n for Cauchy variates. This means that making a larger sample doesn’t reduce the margin of error on the mean value!
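This, too, is easy to see numerically (another sketch of my own). Below, the spread of the sample mean is measured by the interquartile range, since in the Cauchy case there is no finite variance to quote: the Gaussian means tighten like 1/\sqrt{n} while the Cauchy means refuse to tighten at all.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 200                                   # independent samples of each size

def iqr(a):
    """Interquartile range: a robust measure of spread."""
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

for n in (10, 1_000, 100_000):
    gauss_means = rng.standard_normal((trials, n)).mean(axis=1)
    cauchy_means = rng.standard_cauchy((trials, n)).mean(axis=1)
    print(f"n = {n:6d}:  spread of Gaussian means ~ {iqr(gauss_means):.4f}, "
          f"spread of Cauchy means ~ {iqr(cauchy_means):.2f}")
```

With only a couple of hundred trials the numbers are noisy, but the pattern is unmistakable: the first column shrinks roughly like 1/\sqrt{n}, the second doesn’t budge.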

This was essentially the point I made in a previous post about the dangers of using standard statistical techniques – which usually involve the Gaussian assumption – to distributions of quantities formed as ratios.

We cosmologists should be grateful that we don’t seem to live in a Universe whose fluctuations are governed by Cauchy, rather than (nearly) Gaussian, statistics. Measuring more of the Universe wouldn’t be any use in determining its global properties, as we’d always be dominated by cosmic variance.

The Great Photon Escape

Posted in The Universe and Stuff on March 14, 2016 by telescoper

Although it won’t be launched for a few years yet, the communications team behind the James Webb Space Telescope project, or JWST for short, is already gearing up. Here’s a nice video they’ve made which I came across the other day and thought I would share here.

The Universe is inhomogeneous. Does it matter?

Posted in The Universe and Stuff on January 20, 2016 by telescoper

Interesting piece by Buchert et al. about the role of inhomogeneities in cosmology….

Reblogged from CQG+:

Yes! The biggest problem in cosmology—the apparent acceleration of the expansion of the Universe and the nature of dark energy—has stimulated a debate about “backreaction”, namely the effect of inhomogeneities in matter and geometry on the average evolution of the Universe. Our recent paper aims to close a chapter of that debate, to encourage exciting new research in the future.

Although matter in the Universe was extremely uniform when the cosmic microwave background radiation formed, since then gravitational instability led to

[Excerpt only – read the original post (1,044 more words) at CQG+.]