Archive for Cosmology

Why the Big Bang is Wrong…

Posted in Biographical, The Universe and Stuff on July 7, 2009 by telescoper

I suspect that I’m not the only physicist who has a filing cabinet filled with unsolicited correspondence from people with wacky views on everything from UFOs to Dark Matter. Being a cosmologist, I probably get more of this stuff than those working in less speculative branches of physics. Because I’ve written a few things that have appeared in the public domain (and have even been on TV and radio a few times), I probably get even more than most cosmologists (except the really famous ones, of course).

I would estimate that I get two or three items of correspondence of this kind per week. Many “alternative” cosmologists have now discovered email, but there are still a lot who send their ideas through regular post. In fact, whenever I get an envelope with an address on it that has been typed on an old-fashioned typewriter, it’s usually a dead giveaway that it’s going to be one of those. Sometimes they are just letters (typed or handwritten), but sometimes they are complete manuscripts, often with wonderfully batty illustrations. I have one in front of me now called Dark Matter, The Great Pyramid and the Theory of Crystal Healing. I might even go so far as to call that one bogus. I have an entire filing cabinet in my office at work filled with things like it. I could make a fortune if I set up a journal for these people. Alarmingly, electrical engineers figure prominently in my files. They seem particularly keen to explain why Einstein was wrong…

I never reply, of course. I don’t have time, for one thing.  I’m also doubtful whether there’s anything useful to be gained by trying to engage in a scientific argument with people whose grip on the basic concepts is so tenuous (as perhaps it is on reality). Even if they have some scientific training, their knowledge and understanding of physics is usually pretty poor.

I should explain that, whenever I can, if someone writes or emails with a genuine question about physics or astronomy – which often happens – I always reply. I think that’s a responsibility for anyone who gets taxpayers’ money. However, I don’t reply to letters that are confrontational or aggressive or which imply that modern science is some sort of conspiracy to conceal the real truth.

One particular correspondent started writing to me after the publication of my little book, Cosmology: A Very Short Introduction. I won’t give his name, but he was an individual who had some scientific training (not an electrical engineer, I hasten to add). This chap sent a terse letter to me pointing out that the Big Bang theory was obviously completely wrong. The reason was obvious to anyone who understood thermodynamics. He had spent a lifetime designing high-quality refrigeration equipment and therefore knew what he was talking about (or so he said).

His point was that, according to the Big Bang theory, the Universe cools as it expands. Its current temperature is about 3 Kelvin (-270 Celsius or thereabouts) but it is now expanding. Turning the clock back gives a Universe that was hotter when it was younger. He thought this was all wrong.

The argument is false, my correspondent asserted, because the Universe – by definition –  hasn’t got any surroundings and therefore isn’t expanding into anything. Since it isn’t pushing against anything it can’t do any work. The internal energy of the gas must therefore remain constant and since the internal energy of an ideal gas is only a function of its temperature, the expansion of the Universe must therefore be at a constant temperature (i.e. isothermal, rather than adiabatic, as in the Big Bang theory). He backed up his argument with bona fide experimental results on the free expansion of gases.
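
For concreteness, here is a purely numerical contrast between the two positions (a toy sketch of my own in Python; the numbers are illustrative, and it is emphatically not an entry to the competition below). In the standard hot Big Bang the radiation temperature scales as 1/a, where a is the scale factor; on my correspondent’s free-expansion view it would stay fixed at its present value of about 2.73 K.

    # A toy comparison (my own illustration, not from the letter): the CMB
    # temperature at earlier epochs under the standard adiabatic scaling
    # T ~ 1/a, versus the correspondent's claim of isothermal expansion.
    T_NOW = 2.73  # present CMB temperature in Kelvin

    for expansion in (2.0, 10.0, 1000.0):   # factor by which the Universe has grown since
        t_adiabatic = T_NOW * expansion     # standard Big Bang: hotter in the past
        t_isothermal = T_NOW                # free (Joule) expansion: same temperature always
        print(f"grown x{expansion:g}: adiabatic {t_adiabatic:.1f} K, isothermal {t_isothermal:.2f} K")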

I didn’t reply and filed the letter away. Another came, and I did likewise. Increasingly overcome by some form of apoplexy, he sent letters that got ruder and ruder, eventually blaming me for the decline of the British education system and demanding that I be fired from my job. Finally, he wrote to the President of the Royal Society demanding that I be “struck off” – not that I’ve ever been “struck on” – and forbidden (on grounds of incompetence) ever to teach thermodynamics in a University.

Actually, I’ve never taught thermodynamics in any University anyway, but I’ve kept the letter (which was cc-ed to me) in case I am ever asked. It’s much better than a sick note….

This is a good example of a little knowledge being a dangerous thing. My correspondent clearly knew something about thermodynamics. But, obviously, I don’t agree with him that the Big Bang is wrong.

Although I never actually replied to this question myself, I thought it might be fun to turn it into a little competition, so here’s a challenge for you: provide the clearest and most succinct explanation of why the temperature of the expanding Universe does fall with time, despite what my correspondent thought.

Answers via the comment box please, in language suitable for a non-physicist.

Multiversalism

Posted in The Universe and Stuff on June 17, 2009 by telescoper

The word “cosmology” is derived from the Greek κόσμος (“cosmos”) which means, roughly speaking, “the world as considered as an orderly system”. The other side of the coin to “cosmos” is Χάος (“chaos”). In one world-view the Universe comprised two competing aspects: the orderly part that was governed by laws and which could (at least in principle) be predicted, and the “random” part which was disordered and unpredictable. To make progress in scientific cosmology we do need to assume that the Universe obeys laws. We also assume that these laws apply everywhere and for all time or, if they vary, then they vary in accordance with another law.  This is the cosmos that makes cosmology possible.  However, with the rise of quantum theory, and its applications to the theory of subatomic particles and their interactions, the field of cosmology has gradually ceded some of its territory to chaos.

In the early twentieth century, the first mathematical world models were constructed based on Einstein’s general theory of relativity. This is a classical theory, meaning that it describes a system that evolves smoothly with time. It is also entirely deterministic. Given sufficient information to specify the state of the Universe at a particular epoch, it is possible to calculate with certainty what its state will be at some point in the future. In a sense the entire evolutionary history described by these models is not a succession of events laid out in time, but an entity in itself. Every point along the space-time path of a particle is connected to past and future in an unbreakable chain. If ever the word cosmos applied to anything, this is it.

But as the field of relativistic cosmology matured it was realised that these simple classical models could not be regarded as complete, and consequently that the Universe was unlikely to be as predictable as was first thought. The Big Bang model gradually emerged as the favoured cosmological theory during the middle of the last century, between the 1940s and the 1960s. It was not until the 1960s, with the work of Hawking and Penrose, that it was realised that expanding world models based on general relativity inevitably involve a break-down of known physics at their very beginning. The so-called singularity theorems demonstrate that in any plausible version of the Big Bang model, the physical parameters describing the Universe (such as its density, pressure and temperature) all become infinite at the instant of the Big Bang. The existence of this “singularity” means that we do not know what laws, if any, apply at that instant. The Big Bang contains the seeds of its own destruction as a complete theory of the Universe. Although we might be able to explain how the Universe subsequently evolves, we have no idea how to describe the instant of its birth. This is a major embarrassment. Lacking any knowledge of the laws, we don’t even have any rational basis to assign probabilities. We are marooned with a theory that lets in water.

The second important development was the rise of quantum theory and its incorporation into the description of the matter and energy contained within the Universe. Quantum mechanics (and its development into quantum field theory) entails elements of unpredictability. Although we do not know how to interpret this feature of the theory, it seems that any cosmological theory based on quantum theory must include things that can’t be predicted with certainty.

As particle physicists built ever more complete descriptions of the microscopic world using quantum field theory, they also realised that the approaches they had been using for other interactions just wouldn’t work for gravity. Mathematically speaking, general relativity and quantum field theory just don’t fit together. It might have been hoped that quantum gravity theory would help us plug the gap at the very beginning of the Universe, but that has not happened yet because there isn’t such a theory. What we can say about the origin of the Universe is correspondingly extremely limited and mostly speculative, but some of these speculations have had a powerful impact on the subject.

One thing that has changed radically since the early twentieth century is the possibility that our Universe may actually be part of a much larger “collection” of Universes. The potential for semantic confusion here is enormous. The Universe is, by definition, everything that exists; obviously, therefore, there can only be one Universe. The name given to a Universe that consists of disconnected bits and pieces like this is the multiverse.

 There are various ways a multiverse can be realised. In the “Many Worlds” interpretation of quantum mechanics there is supposed to be a plurality of versions of our Universe, but their ontological status is far from clear (at least to me). Do we really have to accept that each of the many worlds is “out there”, or can we get away with using them as inventions to help our calculations?

On the other hand, some plausible models based on quantum field theory do admit the possibility that our observable Universe is part of a collection of mini-universes, each of which “really” exists. It’s hard to explain precisely what I mean by that, but I hope you get my drift. These mini-universes form a classical ensemble in different domains of a single space-time, which is not what happens in quantum multiverses.

According to the Big Bang model, the Universe (or at least the part of it we know about) began about fourteen billion years ago. We do not know whether the Universe is finite or infinite, but we do know that if it has only existed for a finite time we can only observe a finite part of it. We can’t possibly see light from further away than fourteen billion light years because any light signal travelling further than this distance would have to have set out before the Universe began. Roughly speaking, this defines our “horizon”: the maximum distance we are in principle able to see. But the fact that we can’t observe anything beyond our horizon does not mean that such remote things do not exist at all. Our observable “patch” of the Universe might be a tiny part of a colossal structure that extends much further than we can ever hope to see. And this structure might be not at all homogeneous: distant parts of the Universe might be very different from ours, even if our local piece is well described by the Cosmological Principle.

Some astronomers regard this idea as pure metaphysics, but it is motivated by plausible physical theories. The key idea was provided by the theory of cosmic inflation, which I have blogged about already. In the simplest versions of inflation the Universe expands by an enormous factor, perhaps 10^60, in a tiny fraction of a second. This may seem ridiculous, but the energy available to drive this expansion is inconceivably large. Given this phenomenal energy reservoir, it is straightforward to show that such a boost is not at all unreasonable. With inflation, our entire observable Universe could thus have grown from a truly microscopic pre-inflationary region. It is sobering to think that every galaxy, star and planet we can see might have grown from a seed that was smaller than an atom. But the point I am trying to make is that the idea of inflation opens up one’s mind to the idea that the Universe as a whole may be a landscape of unimaginably immense proportions within which our little world may be little more than a pebble. If this is the case then we might plausibly imagine that this landscape varies haphazardly from place to place, producing what may amount to an ensemble of mini-universes. I say “may” because there is as yet no theory that tells us precisely what determines the properties of each hill and valley or the relative probabilities of the different types of terrain.

Many theorists believe that such an ensemble is required if we are to understand how to deal probabilistically with the fundamentally uncertain aspects of modern cosmology. I don’t think this is the case. It is, at least in principle, perfectly possible to apply probabilistic arguments to unique events like the Big Bang using Bayesian inference. If there is an ensemble, of course, then we can discuss proportions within it, and relate these to probabilities too. Bayesians can use frequencies if they are available but do not require them. It is one of the greatest fallacies in science that probabilities need to be interpreted as frequencies.

At the crux of many related arguments is the question of why the Universe appears to be so well suited to our existence within it. This fine-tuning appears surprising based on what (little) we know about the origin of the Universe and the many other ways it might apparently have turned out. Does this suggest that it was designed to be so or do we just happen to live in a bit of the multiverse nice enough for us to have evolved and survived in?  

Views on this issue are often boiled down into a choice between a theistic argument and some form of anthropic selection. A while ago I gave a talk at a meeting in Cambridge called God or Multiverse? that was an attempt to construct a dialogue between theologians and cosmologists. I found it interesting, but it didn’t alter my view that science and religion don’t really overlap very much at all on this, in the sense that if you believe in God it doesn’t mean you have to reject the multiverse, or vice-versa. If God can create a Universe, he could create a multiverse too. As it happens, I’m agnostic about both.

So having, I hope, opened up your mind to the possibility that the Universe may be amenable to a frequentist interpretation, I should confess that I think one can actually get along quite nicely without it.  In any case, you will probably have worked out that I don’t really like the multiverse. One reason I don’t like it is that it accepts that some things have no fundamental explanation. We just happen to live in a domain where that’s the way things are. Of course, the Universe may turn out to be like that –  there definitely will be some point at which our puny monkey brains  can’t learn anything more – but if we accept that then we certainly won’t find out if there is really a better answer, i.e. an explanation that isn’t accompanied by an infinite amount of untestable metaphysical baggage. My other objection is that I think it’s cheating to introduce an infinite thing to provide an explanation of fine tuning. Infinity is bad.

Neophlogistonianism

Posted in The Universe and Stuff on May 18, 2009 by telescoper

What happens when something burns?

Ask a seventeenth century scientist that question and the chances are the answer would  have involved the word phlogiston, a name derived from the Greek  φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials and the idea was that it was released into air whenever any such stuff was ignited. The act of burning separated the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston, unless phlogiston has negative weight. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, “levity”. Nowadays we would probably say “anti-gravity”.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

So why am I rambling on about a scientific theory that has been defunct for more than two centuries?

Well,  it’s because there just might be a lesson from history about the state of modern cosmology…

The standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of “dark energy”. We don’t know much about what this is, except that in order to make our current understanding work out it has to act like a source of anti-gravity. It does this by violating the strong energy condition of general relativity.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe.

A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.

I’ve blogged before, with some levity of my own, about how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous  industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists.

Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe the dark energy really is phlogiston. That’s got to be worth a paper! At least I prefer the name to quintessence.

The Cosmic Tightrope

Posted in The Universe and Stuff on May 3, 2009 by telescoper

Here’s a thought experiment for you.

Imagine you are standing outside a sealed room. The contents of the room are hidden from you, except for a small window covered by a curtain. You are told that you can open the curtain once and only briefly to take a peep at what is inside, and you may do this whenever you feel the urge.

You are told what is in the room. It is bare except for a tightrope suspended across it about two metres in the air. Inside the room is a man who at some time in the past – you’re not told when – began walking along the tightrope. His instructions were to carry on walking backwards and forwards along the tightrope until he falls off, either through fatigue or lack of balance. Once he falls he must lie motionless on the floor.

You are not told whether he is skilled in tightrope-walking or not, so you have no way of telling whether he can stay on the rope for a long time or a short time. Neither are you told when he started his stint as a stuntman.

What do you expect to see when you eventually pull the curtain?

Well, if the man does fall off sometime it will clearly take him a very short time to drop to the floor. Once there he has to stay there. One outcome therefore appears very unlikely: that at the instant you open the curtain, you see him in mid-air between a rope and a hard place.

Whether you expect him to be on the rope or on the floor depends on information you do not have. If he is a trained circus artist, like the great Charles Blondin here, he might well be capable of walking to and fro along the tightrope for days. If not, he would probably only manage a few steps before crashing to the ground. Either way it remains unlikely that you catch a glimpse of him in mid-air during his downward transit. Unless, of course, someone is playing a trick on you and has told the guy to jump when he sees the curtain move.

This probably seems to have very little to do with physical cosmology, but now forget about tightropes and think about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of the matter in the Universe to the density that would be needed to cause the expansion of the Universe eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like the man on the tightrope.

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value that it takes now, at cosmic time t0.

All the Friedmann models begin, at the Big Bang itself, with Ω arbitrarily close to unity at arbitrarily early times; i.e. in the limit as t tends to zero, Ω=1.

In the case in which the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops and recollapse begins. During this process Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly, one part in 10^60 will do – the Universe evolves to a state in which Ω is very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The Universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor of ten either way – the Universe has to be finely tuned.
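
To put some numbers on the knife-edge, here is a minimal sketch in Python (my own toy calculation, assuming a purely matter-dominated Friedmann model with the scale factor a normalised to 1 today; in that case the deviation 1/Ω − 1 grows in proportion to a):

    # Toy matter-dominated model: (1/Omega - 1) = (1/Omega_i - 1) * (a / a_i),
    # so any early departure from Omega = 1 is amplified enormously by today.
    def omega_today(omega_i, a_i):
        x = (1.0 / omega_i - 1.0) / a_i          # deviation amplified by 1/a_i at a = 1
        if 1.0 + x <= 0.0:
            return float("inf")                  # Omega blew up: recollapse before today
        return 1.0 / (1.0 + x)

    A_INIT = 1e-10                               # some very early epoch (illustrative)
    for eps in (0.0, 1e-12, -1e-12, 1e-8, -1e-8):
        print(f"Omega_i = 1{eps:+.0e}: Omega_0 = {omega_today(1.0 + eps, A_INIT):.4g}")

A deviation of one part in 10^12 at a = 10^-10 already shows up at the percent level today; one part in 10^8 gives recollapse or near-emptiness. Tracking the radiation era back to much earlier times still is what pushes the required tuning towards numbers like one part in 10^60.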

A slightly different way of describing this is to think instead about the radius of curvature of the Universe. In general relativity the curvature of space is determined by the energy (and momentum) density. If the Universe has zero total energy it is flat, so it doesn’t have any curvature at all and its curvature radius is infinite. If it has negative total energy (Ω>1) the curvature radius is finite and the curvature positive, in much the same way that a sphere has positive curvature. In the opposite case, with positive total energy (Ω<1), it has negative curvature, like a saddle. I’ve blogged about this before.

I hope you can now see how this relates to the curious case of the tightrope walker.

If the case Ω0= 1 applied to our Universe then we can conclude that something trained it to have a fine sense of equilibrium. Without knowing anything about what happened at the initial singularity we might therefore be pre-disposed to assign some degree of probability that this is the case, just as we might be prepared to imagine that our room contained a skilled practitioner of the art of one-dimensional high-level perambulation.

On the other hand, we might equally suspect that the Universe started off slightly over-dense or slightly under-dense, at which point it should either have re-collapsed by now or have expanded so quickly as to be virtually empty.

About fifteen years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}(\Omega-1)^{-1}.

I was very pleased with this result, which is based on a principle advanced by the physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be normalised, but that’s not an unusual state of affairs in this game. In fact it is an improper prior.
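
If you want to check the impropriety for yourself, the antiderivative of 1/[Ω(1−Ω)] is ln[Ω/(1−Ω)], so the probability mass between cutoffs ε and 1−ε is 2 ln[(1−ε)/ε], which diverges as ε → 0. A two-line check (my own, taking the magnitude of the prior between its two singularities):

    import math

    # Integral of |1/(Omega*(Omega - 1))| over (eps, 1 - eps), in closed form:
    # it grows like 2*ln(1/eps), so there is no finite normalising constant.
    def mass_between_cutoffs(eps):
        return 2.0 * math.log((1.0 - eps) / eps)

    for eps in (1e-2, 1e-4, 1e-8, 1e-16):
        print(f"eps = {eps:g}: integral = {mass_between_cutoffs(eps):.1f}")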

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, it does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1. Until the late 1980s there was indeed a strong ideological preference for models with Ω=1 exactly, not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to have to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case where our Universe is in between  two states in which we might have expected it to lie.

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate, and it drastically changes the arguments I gave above. Without inflation the case with Ω=1 is unstable: a slight perturbation to the Universe sends it diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case behaves very differently. Not only is it stable, it becomes an attractor to which all possible universes converge. Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity. Inflation trains our Universe to walk the tightrope.
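
The attractor is easy to see in a toy calculation (mine, not from the post): while the expansion is dominated by vacuum energy the Hubble rate is roughly constant, and since Ω − 1 = k/(aH)² the deviation from unity is squeezed by a factor e^(−2N) after N e-foldings of inflation.

    import math

    # During inflation H is (nearly) constant, so |Omega - 1| ~ 1/a^2:
    # N e-foldings shrink any initial deviation by exp(-2N).
    def omega_after_inflation(omega_start, n_efolds):
        return 1.0 + (omega_start - 1.0) * math.exp(-2.0 * n_efolds)

    for omega_start in (0.1, 0.5, 2.0, 10.0):    # wildly different pre-inflation values
        print(omega_start, "->", omega_after_inflation(omega_start, 60.0))
    # with ~60 e-foldings every case comes out indistinguishable from 1.0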

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the curvature radius becomes infinitesimally small. If there is only “ordinary” matter in the Universe then this requires that the universe have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data that is consistent with observations is larger in models with inflation than in those without it. It is rational therefore to say that inflation is more probable to have happened than the alternative.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.

How Loud was the Big Bang?

Posted in The Universe and Stuff on April 26, 2009 by telescoper

The other day I was giving a talk about cosmology at Cardiff University’s Open Day for prospective students. I was talking, as I usually do on such occasions, about the cosmic microwave background, what we have learnt from it so far and what we hope to find out from it from future experiments, assuming they’re not all cancelled.

Quite a few members of staff listened to the talk too and, afterwards, some of them expressed surprise at what I’d been saying, so I thought it would be fun to try to explain it on here in case anyone else finds it interesting.

As you probably know, the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

The map of the sky made by the Wilkinson Microwave Anisotropy Probe about five years ago shows the variations in temperature of the cosmic microwave background. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound (L) logarithmically in terms of the rms pressure level of the sound wave, P_rms, relative to some reference pressure level, P_ref:

L = 20 log10 [ P_rms / P_ref ]

(the 20 appears because the energy carried by the wave goes as the square of its amplitude; in terms of energy the factor would be 10).

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10^-10 times the ambient atmospheric air pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order, which consequently have L=0 dB. It seems reasonable to set the reference pressure of the early Universe to be the same fraction of the ambient pressure then, i.e.

P_ref ~ 2×10^-10 P_amb

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, and the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes, so it all gets a bit messy if you want to do it exactly, but it’s quite easy to get a rough estimate. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the average temperature variations are of the average CMB temperature, i.e.

P_rms ~ a few × 10^-5 P_amb

If we do this then, since both pressures in the logarithm scale with the ambient pressure, the ambient pressure cancels out of the ratio, which turns out to be a few times 10^5.
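
Putting the numbers into the decibel formula is a one-liner (a rough sketch of my own; the input fractions are the ballpark figures quoted above, not precise measurements):

    import math

    # L = 20*log10(P_rms/P_ref); both pressures scale with the ambient pressure,
    # which therefore cancels.  P_ref is taken as 2e-10 of ambient, by analogy
    # with the threshold of hearing in air.
    def loudness_db(p_rms_frac, p_ref_frac=2e-10):
        return 20.0 * math.log10(p_rms_frac / p_ref_frac)

    for frac in (1e-5, 3e-5, 1e-4):   # rms pressure variation as fraction of ambient
        print(f"P_rms/P_amb = {frac:g}: L = {loudness_db(frac):.0f} dB")
    # roughly 94, 104 and 114 dB -- the same ballpark as the levels quoted below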

[Figure: an audiogram chart showing typical sound levels in decibels, including the “speech banana” region and the threshold of pain.]

With our definition of the decibel level we find that waves corresponding to pressure variations of one part in a hundred thousand of the ambient pressure give roughly L=100 dB, while one part in ten thousand gives about L=120 dB. The sound of the Big Bang therefore peaks at levels just over 110 dB. As you can see in the Figure above, this is close to the threshold of pain, but it’s perhaps not as loud as you might have guessed in response to the initial question. Many rock concerts are actually louder than the Big Bang, at least near the speakers!

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a ratio of about 10^10 inside the logarithm and is pretty much the limit at which sound waves can propagate without distortion. Such waves would have L≈190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of a “Roar” than a “Bang”, because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

Leonid’s Shower

Posted in The Universe and Stuff on April 18, 2009 by telescoper

Yesterday (17th April) was the last day of our Easter vacation – back to the grind on Monday – and it was also the occasion of a special meeting to mark the retirement of Professor Leonid Petrovich Grishchuk.

Leonid has been a Distinguished Research Professor here in Cardiff since 1995. You can read more of his scientific biography and wider achievements here, but it should suffice to say that he is a pioneer of many aspects of relativistic cosmology and particularly primordial gravitational waves. He’s also a larger-than-life character who is known with great affection around the world.

Among other things, he’s a big fan of football. He still plays, as a matter of fact, although he generally spends more time ordering his team-mates about than actually running around himself. One of his retirement presents was a Cardiff City football shirt with his name on the back.

My first experience of Leonid was many years ago at a scientific meeting at which I attempted to give a talk. Leonid was in the audience and he interrupted me, rather aggressively. I didn’t really understand his question so he had another go at me in the questions afterwards. I don’t mind admitting that I was quite upset by his behaviour. I think a large fraction of working cosmologists have probably been Grishchucked at one time or another.

Later on, though, people from the meeting were congregating at a bar when he arrived and headed for me. I didn’t really want to talk to him as I felt he had been quite rude. However, there wasn’t really any way of escaping so I ended up talking to him over a beer. We finally resolved the question he had been trying to ask me and his demeanour changed completely. We spent the rest of the evening having dinner and talking about all sorts of things and have been friends ever since.

Over the years I’ve learned that this is very much a tradition amongst Russian scientists of the older school. They can seem very hostile – even brutal – when discussing science, but that was the way things were done in the environment where they learned their trade.  In many cases the rather severe exterior masks a kindly and generous nature, as it certainly does with Leonid.

I also remember a spell in the States as a visitor during which I heard two Russian cosmologists screaming at each other in the room next door. I really thought they were about to have a fist fight. A few minutes later, though, they both emerged, smiling as if nothing had happened…

Appropriately enough, Leonid’s bash was held immediately after BritGrav 9, a meeting dedicated to bringing together the gravitational research community of the UK and beyond, and to providing a forum for the exchange of ideas. It aimed to cover all aspects of gravitational physics, both theoretical and experimental, including cosmology, mathematical general relativity, quantum gravity, gravitational astrophysics, gravitational wave data analysis, and instrumentation. I chaired a session during the meeting and found Leonid in characteristic form as a member of the audience, never shy with questions or comments, and quite difficult to keep under control.

I enjoyed the meeting because priority was given to students when allocating speaking slots. I think too many conferences have the same senior scientists giving  the same talk over and over again. Relativists are also quite different to cosmologists in the level of mathematical rigour to which they aspire.  You can bullshit at a cosmology conference, but wouldn’t get away with it in front of a GR audience.

On the evening of 16th April we had a public lecture in Cardiff by Kip Thorne on The Warped Side of the Universe: from the Big Bang to Black Holes and Gravitational Waves and Kip also gave a talk as part of the subsequent meeting on Friday in Leonid’s honour.

[Photograph: Kip Thorne and Leonid Grishchuk together, a few years ago.]

Kip and Leonid are shown together a few years ago in the photograph to the left here. The rest of the LPGFest meeting was interesting and eclectic, with talks from mathematical relativists as well as scientists in diverse fields who had come over from Russia specially to honour Leonid. We later adjourned to a “Welsh Banquet” at the 15th Century Undercroft of Cardiff Castle for dinner accompanied by something described as “entertainment” laid on by the hosts. That part was quite excruciating: like Butlins only not as classy. Heaven knows what our distinguished foreign visitors made of it, although Leonid seemed to think it was great fun, and that’s what matters.

Once the dinner was over it was time for Leonid to be showered with gifts from around the world and, by way of a finale, he was serenaded with a version of From Russia With Love by Bernie and the Gravitones. Now at last I understand what the phrase “extraordinary rendition” means.

Full Blast

Posted in Science Politics, The Universe and Stuff on April 9, 2009 by telescoper

Yesterday, Paolo Calisse and I were paid a visit by a reporter (Martin Shipton) and a photographer from Welsh newspaper The Western Mail who wanted to cover the sad story of Clover.

Paolo is heavily involved with Clover, but I was a bit hesitant about doing this because I’m not really part of the Clover team. Paolo suggested it might be an advantage that I wasn’t so directly involved as I might be able to give a more balanced view of the importance of the experiment than him. Anyway, the story came out today in the newspaper and is available online too.

This is the picture they took of me and Paolo in the Clover lab, fiddling with the cryostat. I’ve already had my leg pulled enough about pretending to be an instrumentalist for the photograph so no jokes please…

In the same issue of the paper there is another feature about Cardiff’s astronomy research, concerning BLAST (Balloon-borne Large Aperture Submillimetre Telescope). This is a much happier story, as it marks the release of results from a highly successful science run from 2006. In the print version of the Western Mail the two stories were run on the same page, one above the other, making very effectively the point that cutting the funding of the Astronomy Instrumentation Group jeopardizes a great deal of world-leading research besides Clover itself. And when I say “world-leading” I mean it, whatever the RAE panel might have thought.

A deluge of articles about BLAST appeared on the arXiv today, one of which is now published in Nature. I thought I’d put up the abstracts here in order to draw attention to these results. The author lists contain many Cardiff authors and, as you’ll see, the results are both fascinating and wide-ranging. I’ve put links to the arXiv after each abstract:

Title: BLAST: Correlations in the Cosmic Far-Infrared Background at 250, 350, and 500 microns Reveal Clustering of Star-Forming Galaxies

Authors: Marco P. Viero, Peter A. R. Ade, James J. Bock, Edward L. Chapin, Mark J. Devlin, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Carrie J. MacTavish, Gaelen Marsden, Peter G. Martin, Philip Mauskopf, Lorenzo Moncelsi, Mattia Negrello, Calvin B. Netterfield, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Marie Rex, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Donald V. Wiebe

We detect correlations in the cosmic far-infrared background due to the clustering of star-forming galaxies, in observations made with the Balloon-borne Large Aperture Submillimeter Telescope (BLAST), at 250, 350, and 500 microns. Since the star-forming galaxies which make up the far-infrared background are expected to trace the underlying dark matter in a biased way, measuring clustering in the far infrared background provides a way to relate star formation directly to structure formation. We test the plausibility of the result by fitting a simple halo model to the data. We derive an effective bias b_eff = 2.2 +/- 0.2, effective mass log(M_eff/M_sun) = 13.2 (+0.3/-0.8), and minimum mass log(M_min/M_sun) = 9.9 (+1.5/-1.7). This is the first robust clustering measurement at submillimeter wavelengths.

http://arxiv.org/abs/0904.1200

Title: Over half of the far-infrared background light comes from galaxies at z >= 1.2

Authors: Mark J. Devlin, Peter A. R. Ade, Itziar Aretxaga, James J. Bock, Edward L. Chapin, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Gaelen Marsden, Peter G. Martin, Philip Mauskopf, Lorenzo Moncelsi, Calvin B. Netterfield, Henry Ngo, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Marie Rex, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

Journal-ref: Nature, vol. 458, 737-739 (2009) DOI: 10.1038/nature07918

Submillimetre surveys during the past decade have discovered a population of luminous, high-redshift, dusty starburst galaxies. In the redshift range 1 <= z <= 4, these massive submillimetre galaxies go through a phase characterized by optically obscured star formation at rates several hundred times that in the local Universe. Half of the starlight from this highly energetic process is absorbed and thermally re-radiated by clouds of dust at temperatures near 30 K with spectral energy distributions peaking at 100 microns in the rest frame. At 1 <= z <= 4, the peak is redshifted to wavelengths between 200 and 500 microns. The cumulative effect of these galaxies is to yield extragalactic optical and far-infrared backgrounds with approximately equal energy densities. Since the initial detection of the far-infrared background (FIRB), higher-resolution experiments have sought to decompose this integrated radiation into the contributions from individual galaxies. Here we report the results of an extragalactic survey at 250, 350 and 500 microns. Combining our results at 500 microns with those at 24 microns, we determine that all of the FIRB comes from individual galaxies, with galaxies at z >= 1.2 accounting for 70 per cent of it. As expected, at the longest wavelengths the signal is dominated by ultraluminous galaxies at z > 1.

http://arxiv.org/abs/0904.1201

Title: The Balloon-borne Large Aperture Submillimeter Telescope (BLAST) 2006: Calibration and Flight Performance

Authors: Matthew D. P. Truch, Peter A. R. Ade, James J. Bock, Edward L. Chapin, Mark J. Devlin, Simon R. Dicker, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Gaelen Marsden, Peter G. Martin, Philip Mauskopf, Lorenzo Moncelsi, Calvin B. Netterfield, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Marie Rex, Douglas Scott, Christopher Semisch, Nicholas E. Thomas, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

The Balloon-borne Large Aperture Submillimeter Telescope (BLAST) operated successfully during a 250-hour flight over Antarctica in December 2006 (BLAST06). As part of the calibration and pointing procedures, the red hypergiant star VY CMa was observed and used as the primary calibrator. Details of the overall BLAST06 calibration procedure are discussed. The 1-sigma absolute calibration is accurate to 10, 12, and 13% at the 250, 350, and 500 micron bands, respectively. The errors are highly correlated between bands, resulting in much lower error for the derived shape of the 250-500 micron continuum. The overall pointing error is <5″ rms for the 36, 42, and 60″ beams. The performance of the optics and pointing systems is discussed.

http://arxiv.org/abs/0904.1202

Title: A Bright Submillimeter Source in the Bullet Cluster (1E0657–56) Field Detected with BLAST

Authors: Marie Rex, Peter A. R. Ade, Itziar Aretxaga, James J. Bock, Edward L. Chapin, Mark J. Devlin, Simon R. Dicker, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Gaelen Marsden, Peter G. Martin, Philip Mauskopf, Calvin B. Netterfield, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

We present the 250, 350, and 500 micron detection of bright submillimeter emission in the direction of the Bullet Cluster measured by the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST). The 500 micron centroid is coincident with an AzTEC 1.1 millimeter detection at a position close to the peak lensing magnification produced by the cluster. However, the 250 micron and 350 micron emission is resolved and elongated, with centroid positions shifted toward the south of the AzTEC source and a differential shift between bands that cannot be explained by pointing uncertainties. We therefore conclude that the BLAST detection is contaminated by emission from foreground galaxies associated with the Bullet Cluster. The submillimeter redshift estimate based on 250-1100 micron photometry at the position of the AzTEC source is z_phot = 2.9 (+0.6/-0.3), consistent with the infrared color redshift estimation of the most likely Spitzer IRAC counterpart. These flux densities indicate an apparent far-infrared luminosity of L_FIR = 2E13 L_sun. When the amplification due to the gravitational lensing of the cluster is removed, the intrinsic far-infrared luminosity of the source is found to be L_FIR <= 1E12 L_sun, consistent with typical luminous infrared galaxies.

http://arxiv.org/abs/0904.1203

Title: Radio and mid-infrared identification of BLAST source counterparts in the Chandra Deep Field South

Authors: Simon Dye, Peter A. R. Ade, James J. Bock, Edward L. Chapin, Mark J. Devlin, James S. Dunlop, Stephen A. Eales, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Gaelen Marsden, Philip Mauskopf, Lorenzo Moncelsi, Calvin B. Netterfield, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Marie Rex, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

We have identified radio and/or mid-infrared counterparts to 198 out of 351 sources detected at >= 5 sigma over ~ 9 sq. degrees centered on the Chandra Deep Field South (CDFS) by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST) at 250, 350, and 500 microns. We have matched 92 of these counterparts to optical sources with previously derived photometric redshifts and fitted SEDs to the BLAST fluxes and fluxes at 70 and 160 microns acquired with the Spitzer Space Telescope. In this way, we have constrained dust temperatures, total far-infrared/submillimeter luminosities and star formation rates for each source. Our findings show that the BLAST sources lie at significantly lower redshifts and have significantly lower rest-frame dust temperatures compared to submm sources detected in surveys conducted at 850 microns. We demonstrate that an apparent increase in dust temperature with redshift in our sample arises as a result of selection effects. This paper constitutes the public release of the multi-wavelength catalog of >= 5 sigma BLAST sources contained within the full ~ 9 sq. degree survey area.

http://arxiv.org/abs/0904.1204

Title: BLAST: Resolving the Cosmic Submillimeter Background

Authors: Gaelen Marsden, Peter A. R. Ade, James J. Bock, Edward L. Chapin, Mark J. Devlin, Simon R. Dicker, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Philip Mauskopf, Benjamin Magnelli, Lorenzo Moncelsi, Calvin B. Netterfield, Henry Ngo, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Marie Rex, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

The Balloon-borne Large Aperture Submillimeter Telescope (BLAST) has made one square-degree, deep, confusion-limited maps at three different bands, centered on the Great Observatories Origins Deep Survey South field. By calculating the covariance of these maps with catalogs of 24 micron sources from the Far-Infrared Deep Extragalactic Legacy Survey (FIDEL), we have determined that the total submillimeter intensities are 8.60 +/- 0.59, 4.93 +/- 0.34, and 2.27 +/- 0.20 nW m^-2 sr^-1 at 250, 350, and 500 microns, respectively. These numbers are more precise than previous estimates of the cosmic infrared background (CIB) and are consistent with 24 micron-selected galaxies generating the full intensity of the CIB. We find that more than half of the CIB originates from sources at z >= 1.2. At all BLAST wavelengths, the relative intensity of high-z sources is higher for 24 micron-faint sources than it is for 24 micron-bright sources. Galaxies identified very broadly as AGN by their Spitzer Infrared Array Camera (IRAC) colors contribute 32-48% of the CIB, although X-ray-selected AGN contribute only 7%. BzK-selected galaxies are found to be brighter than typical 24 micron-selected galaxies in the BLAST bands, and contribute 32-42% of the CIB. These data provide high-precision constraints for models of the evolution of the number density and intensity of star-forming galaxies at high redshift.

http://arxiv.org/abs/0904.1205

Title: BLAST: A Far-Infrared Measurement of the History of Star Formation

Authors: Enzo Pascale, Peter A. R. Ade, James J. Bock, Edward L. Chapin, Mark J. Devlin, Simon Dye, Steve A. Eales, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Gaelen Marsden, Philip Mauskopf, Lorenzo Moncelsi, Calvin B. Netterfield, Luca Olmi, Guillaume Patanchon, Marie Rex, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

We use measurements from the Balloon-borne Large Aperture Sub-millimeter Telescope (BLAST) at wavelengths spanning 250 to 500 microns, combined with data from the Spitzer Infrared telescope and ground-based optical surveys in GOODS-S, to determine the average star formation rate of the galaxies that comprise the cosmic infrared background (CIB) radiation from 70 to 500 microns, at redshifts 0 < z < 3. We find that different redshifts are preferentially probed at different wavelengths within this range, with most of the 70 micron background generated at z < ~1 and the 500 micron background generated at z >~1. The spectral coverage of BLAST and Spitzer in the region of the peak of the background at ~200 microns allows us to directly estimate the mean physical properties (temperature, bolometric luminosity and mass) of the dust in the galaxies responsible for contributing more than 80% of the CIB. By utilizing available redshift information we directly measure the evolution of the far infrared luminosity density and therefore the optically obscured star formation history up to redshift z ~3.

http://arxiv.org/abs/0904.1206

Title: BLAST: The Mass Function, Lifetimes, and Properties of Intermediate Mass Cores from a 50 Square Degree Submillimeter Galactic Survey in Vela (l = ~265)

Authors: Calvin. B. Netterfield, Peter A. R. Ade, James J. Bock, Edward L. Chapin, Mark J. Devlin, Matthew Griffin, Joshua O. Gundersen, Mark Halpern, Peter C. Hargrave, David H. Hughes, Jeff Klein, Gaelen Marsden, Peter G. Martin, Phillip Mauskopf, Luca Olmi, Enzo Pascale, Guillaume Patanchon, Marie Rex, Arabindo Roy, Douglas Scott, Christopher Semisch, Nicholas Thomas, Matthew D. P. Truch, Carole Tucker, Gregory S. Tucker, Marco P. Viero, Donald V. Wiebe

We present first results from an unbiased, 50 square degree submillimeter Galactic survey at 250, 350, and 500 microns from the 2006 flight of the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). The map has resolution ranging from 36″ to 60″ in the three submillimeter bands spanning the thermal emission peak of cold starless cores. We determine the temperature, luminosity, and mass of more than a thousand compact sources in a range of evolutionary stages, providing an unbiased statistical characterization of the population. From comparison with C^18O data, we find the dust opacity per gas mass, kappa/R = 0.16 cm^2/g at 250 microns, for cold clumps. We find that 2% of the mass of the molecular gas over this diverse region is in cores colder than 14 K, and that the mass function for these cold cores is consistent with a power law with index alpha = -3.22 +/- 0.14 over the mass range 14 M_sun < M < 80 M_sun, steeper than the Salpeter alpha = -2.35 initial mass function for stars. Additionally, we infer a mass-dependent cold core lifetime of tau(M) = 4 x 10^6 (M/20 M_sun)^-0.9 years, which is longer than what has been found in previous surveys of either low or high mass cores, and significantly longer than free-fall or turbulent decay time scales. This implies some form of non-thermal support for cold cores during this early stage of star formation.

http://arxiv.org/abs/0904.1207

You can find a lot more detailed information on the dedicated BLAST website.

Random Thoughts: Points and Poisson (d’Avril)

Posted in The Universe and Stuff with tags , , , on April 4, 2009 by telescoper

I’ve got a thing about randomness. For a start I don’t like the word, because it covers such a multitude of sins. People talk about there being randomness in nature when what they really mean is that they don’t know how to predict outcomes perfectly. That’s not quite the same thing as things being inherently unpredictable; statements about the nature of reality are ontological, whereas I think randomness is only a useful concept in an epistemological sense. It describes our lack of knowledge: just because we don’t know how to predict doesn’t mean that it can’t be predicted.

Nevertheless there are useful mathematical definitions of randomness and it is also (sometimes) useful to make mathematical models that display random behaviour in a well-defined sense, especially in situations where one has to take into account the effects of noise.

I thought it would be fun to illustrate one such model. In a point process, the random element is a “dot” that occurs at some location in time or space. Such processes occur in a wide range of contexts: arrivals of buses at a bus stop, photons in a detector, darts on a dartboard, and so on.

Let us suppose that we think of such a process happening in time, although what follows can straightforwardly be generalised to things happening over an area (such as a dartboard) or within some higher-dimensional region. It is also possible to invest the points with some other attributes; processes like this are sometimes called marked point processes, but I won’t discuss them here.

The “most” random way of constructing a simple point process is to assume that each event happens independently of every other event, and that there is a constant probability per unit time of an event happening. This type of process is called a Poisson process, after the French mathematician Siméon-Denis Poisson, who was born in 1781. He was one of the most creative and original physicists of all time: besides fundamental work on electrostatics and the theory of magnetism for which he is famous, he also built greatly upon Laplace’s work in probability theory. His principal result was to derive a formula giving the probability of a given number of random events occurring when the probability of each individual one is very low. The Poisson distribution, as it is now known and which I will come to shortly, is related to this original calculation; it was subsequently shown that this distribution is a limiting form of the binomial distribution. Just to add to the connections between probability theory and astronomy, it is worth mentioning that in 1833 Poisson wrote an important paper on the motion of the Moon.

In a finite interval of duration T the mean (or expected) number of events for a Poisson process will obviously just be the product of the rate per unit time and the duration T itself; call this product λ.
 
The full distribution is then

$P(x) = \frac{\lambda^x e^{-\lambda}}{x!}$

This gives the probability that a finite interval contains exactly x events. It can be neatly derived from the binomial distribution by dividing the interval into a very large number of very tiny pieces, each one of which becomes a Bernoulli trial. The probability of success (i.e. of an event occurring) in each trial is extremely small, but the number of trials becomes extremely large in such a way that the mean number of successes is λ. In this limit the binomial distribution takes the form of the above expression. The variance of this distribution is interesting: it is also λ. This means that the typical fluctuations within the interval are of order the square root of λ on a mean level of λ, so the fractional variation is of the famous “one over root n” form that is a useful estimate of the expected variation in point processes. Indeed, it’s a useful rule-of-thumb for estimating likely fluctuation levels in a host of statistical situations.
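If you want to see this limit emerge numerically, here is a minimal Python sketch (my own illustration, not anything from the original post; the value of λ and the grid of n are arbitrary choices):

```python
# Illustrative sketch: the Poisson distribution as the large-n, small-p
# limit of the binomial, holding the mean n*p = lam fixed.
import numpy as np
from scipy.stats import binom, poisson

lam = 5.0                      # mean number of events in the interval
x = np.arange(16)              # event counts at which to compare the two pmfs
for n in (10, 100, 10000):     # number of Bernoulli trials the interval is cut into
    p = lam / n                # success probability per trial, keeping the mean fixed
    err = np.max(np.abs(binom.pmf(x, n, p) - poisson.pmf(x, lam)))
    print(f"n = {n:6d}: max |binomial - Poisson| = {err:.2e}")
```

The printed discrepancy shrinks as n grows, which is exactly the limiting process described above.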

If football were a Poisson process with a mean number of goals per game of, say, 2 then one would expect most games to have 2 plus or minus 1.4 (the square root of 2) goals, i.e. between about 0.6 and 3.4. That is actually not far from what is observed, and the distribution of goals per game in football matches is actually quite close to a Poisson distribution.
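As a quick (entirely hypothetical) check of that rule of thumb, one can simulate a large number of games; the mean of 2 goals per game is the same arbitrary figure used above, not real match data:

```python
# Hypothetical illustration: draw goals-per-game from a Poisson distribution
# with mean 2 and check the root-lambda rule of thumb.
import numpy as np

rng = np.random.default_rng(42)
goals = rng.poisson(lam=2.0, size=100_000)  # 100,000 simulated games

print("mean goals per game:", goals.mean())  # close to 2
print("standard deviation: ", goals.std())   # close to sqrt(2), about 1.41
frac = np.mean(np.abs(goals - 2.0) <= np.sqrt(2.0))
print("fraction of games within one standard deviation:", frac)
```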

This idea can be straightforwardly extended to higher-dimensional processes. If points are scattered over an area with a constant probability per unit area then the mean number in a finite area will also be some number λ and the same formula applies.

As a matter of fact I first learned about the Poisson distribution when I was at school, doing A-level mathematics (which in those days actually included some mathematics). The example used by the teacher to illustrate this particular bit of probability theory was a two-dimensional one from biology. The skin of a fish was divided into little squares of equal area, and the number of parasites found in each square was counted. A histogram of these numbers accurately follows the Poisson form. For years I laboured under the delusion that it was given this name because it was something to do with fish, but then I never was very quick on the uptake.

This is all very well, but point processes are not always of this Poisson form. Points can be clustered, so that having one point at a given position increases the conditional probability of having others nearby. For example, galaxies like those shown in the nice picture are distributed throughout space in a clustered pattern that is very far from the Poisson form. But it’s very difficult to tell from just looking at the picture. What is needed is a rigorous statistical analysis.

The statistical description of clustered point patterns is a fascinating subject, because it makes contact with the way in which our eyes and brain perceive pattern. I’ve spent a large part of my research career trying to figure out efficient ways of quantifying pattern in an objective way and I can tell you it’s not easy, especially when the data are prone to systematic errors and glitches. I can only touch on the subject here, but to see what I am talking about look at the two patterns below:

 

[Figure: two point patterns, “pointb” (top) and “pointa” (bottom)]

You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process and the other contains correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the top is the one that is random and the bottom one is the one with structure to it. It is not hard to see why. The top pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the bottom one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the bottom picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The top process is also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process should be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms, which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern.
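For anyone who wants to experiment, here is a rough Python sketch (my own, not the code actually used to make the figures above) that generates both kinds of pattern; the point count and exclusion radius are arbitrary illustrative values:

```python
# Sketch of the two kinds of pattern: a 2D Poisson process, and a simple
# sequential-inhibition process in which each accepted point carries a
# "zone of avoidance" that suppresses later points nearby.
import numpy as np

rng = np.random.default_rng(1)
n_points, r_min = 300, 0.03   # illustrative values, chosen so the loop converges

# Poisson process: independent, uniform positions in the unit square.
poisson_pts = rng.uniform(size=(n_points, 2))

# Inhibition process: reject any candidate within r_min of an accepted point.
accepted = []
attempts = 0
while len(accepted) < n_points and attempts < 100 * n_points:
    attempts += 1
    c = rng.uniform(size=2)
    if all(np.hypot(c[0] - p[0], c[1] - p[1]) >= r_min for p in accepted):
        accepted.append(c)
inhibited_pts = np.array(accepted)
```

Plotted side by side, the inhibited pattern looks much smoother to the eye, while the genuinely random Poisson pattern is full of apparent clusters and filaments.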

Incidentally, I got both pictures from Stephen Jay Gould’s collection of essays Bully for Brontosaurus and used them, with appropriate credit and copyright permission, in my own book From Cosmos to Chaos. I forgot to say this in earlier versions of this post.

The tendency to find things that are not there is quite well known to astronomers. The constellations which we all recognize so easily are not physical associations of stars, but are just chance alignments on the sky of things at vastly different distances in space. That is not to say that they are random, but the pattern they form is not caused by direct correlations between the stars. Galaxies form real three-dimensional physical associations through their direct gravitational effect on one another.

People are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this.  The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose.

I suppose there is an evolutionary reason why our brains like to impose order on things in a general way. More specifically scientists often use perceived patterns in order to construct hypotheses. However these hypotheses must be tested objectively and often the initial impressions turn out to be figments of the imagination, like the canals on Mars.

Now, I think I’ll complain to wordpress about the widget that links pages to a “random blog post”.

I’m sure it’s not really random….

Talking Planck

Posted in The Universe and Stuff with tags , , on April 3, 2009 by telescoper

Since the Planck mission is due to be launched very soon, I thought it would be nice to put this lecture by George Efstathiou here in order to give some background. It’s from a page of science talks about Planck.

George is the Professor of Astrophysics (1909) at the University of Cambridge. The 1909 isn’t when he was born, but when the Chair he holds was set up. I have a hundred-year-old Chair in my house too.
He is also the Director of the impressive Kavli Institute for Cosmology.
He’s a leading member of the Planck science team and is coordinating the UK effort that will be applied to analysing the data. He’s an FRS, citation millionaire, and general all-round clever clogs. He would cut an even more impressive figure were it not for the fact that he supports Arsenal.

Clover and Out

Posted in Science Politics, The Universe and Stuff with tags , , , , , , , , , on March 31, 2009 by telescoper

One of the most exciting challenges facing the current generation of cosmologists is to find, in the pattern of fluctuations in the cosmic microwave background, evidence for the primordial gravitational waves predicted by models of the Universe that involve inflation.

Looking only at the temperature variation across the sky, it is not possible to distinguish between tensor (gravitational wave) and scalar (density wave) contributions (both of which are predicted to be excited during the inflationary epoch). However, scattering of photons off electrons is expected to leave the radiation slightly polarized (at the level of a few percent). This gives us additional information in the form of the polarization angle at each point on the sky, and this extra clue should, in principle, enable us to disentangle the tensor and scalar components.

The polarization signal can be decomposed into two basic types depending on whether the pattern has odd or even parity, as shown in the nice diagram (from a paper by James Bartlett).

The top row shows the E-mode (which looks the same when reflected in a mirror and can be produced by either scalar or tensor modes) and the bottom shows the B-mode (which has a definite handedness that changes when mirror-reflected and which can’t be generated by scalar modes because they can’t produce odd parity).
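For those who like to see the mathematics, the decomposition can be written down explicitly. In the flat-sky approximation (a standard textbook result, not something taken from the Clover documentation) the Fourier transforms of the Stokes parameters Q and U combine into E and B as follows:

\[
\tilde{E}(\boldsymbol{\ell}) = \tilde{Q}(\boldsymbol{\ell})\cos 2\phi_{\ell} + \tilde{U}(\boldsymbol{\ell})\sin 2\phi_{\ell}, \qquad
\tilde{B}(\boldsymbol{\ell}) = -\tilde{Q}(\boldsymbol{\ell})\sin 2\phi_{\ell} + \tilde{U}(\boldsymbol{\ell})\cos 2\phi_{\ell},
\]

where \(\phi_{\ell}\) is the angle the Fourier wavevector \(\boldsymbol{\ell}\) makes with a fixed axis. Under a reflection B changes sign while E does not, which is exactly the parity property described above.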

The B-mode is therefore (in principle) a clean diagnostic of the presence of gravitational waves in the early Universe. Unfortunately, however, the B-mode is predicted to be very small, about 100 times smaller than the E-mode, and foreground contamination is likely to be a very serious issue for any experiment trying to detect it.

An experiment called Clover (involving the Universities of Cardiff, Oxford, Cambridge and Manchester) was designed to detect the primordial B-mode signal from its vantage point in Chile. You can read more about the way it works at the dedicated webpages here at Cardiff and at Oxford. I won’t describe it in more detail here, for reasons which will become obvious.

The chance to get involved in a high-profile cosmological experiment was one of the reasons I moved to Cardiff a couple of years ago, and I was looking forward to seeing the data arriving for analysis. Although I’m primarily a theorist, I have some experience in advanced statistical methods that might have been useful in analysing the output.  It would have been fun blogging about it too.

Unfortunately, however, none of that is ever going to happen. Because of its budget crisis, and despite the fact that it has spent a large amount (£4.5M) on it already, STFC has just decided to withdraw the funding needed to complete it (£2.5M) and cancel the Clover experiment.

Clover wasn’t the only B-mode experiment in the game. Its rivals include QUIET and SPIDER, both based in the States. It wasn’t clear that Clover would have won the race, but now that we know it’s a non-runner we can be sure it won’t.