Archive for Cosmology

More on MacGuffins

Posted in Science Politics, The Universe and Stuff with tags , , , , , , , , on August 17, 2011 by telescoper

I’m very pressed for time this week, so I thought I’d cheat by resurrecting and updating an old post from way back when I had just started blogging, about three years ago. I thought of doing this because I just came across a YouTube clip of the late great Alfred Hitchcock, which you’ll now find in the post. I’ve also made a couple of minor editorial changes, but basically it’s a recycled piece and you should therefore read it for environmental reasons.

–0–

Unpick the plot of any thriller or suspense movie and the chances are that somewhere within it you will find lurking at least one MacGuffin. This might be a tangible thing, such as the eponymous sculpture of a Falcon in the archetypal noir classic The Maltese Falcon, or it may be rather nebulous, like the “top secret plans” in Hitchcock’s The Thirty-Nine Steps. Its true character may never be fully revealed, as in the case of the glowing contents of the briefcase in Pulp Fiction, which is a classic example of the “undisclosed object” type of MacGuffin. Or it may be scarily obvious, like a doomsday machine or some other “Big Dumb Object” you might find in a science fiction thriller. It may not even be a real thing at all: it could be an event or an idea, or something that doesn’t exist in any real sense, such as the fictitious decoy character George Kaplan in North by Northwest.

Whatever it is or is not, the MacGuffin is responsible for kick-starting the plot. It makes the characters embark upon the course of action they take as the tale begins to unfold. This plot device was particularly beloved by Alfred Hitchcock (who was responsible for introducing the word to the film industry). Hitchcock was, however, always at pains to ensure that the MacGuffin never played as important a role in the mind of the audience as it did for the protagonists. As the plot twists and turns – as it usually does in such films – and its own momentum carries the story forward, the importance of the MacGuffin tends to fade, and by the end we have often forgotten all about it. Hitchcock’s movies rarely bother to explain their MacGuffin(s) in much detail, and they often confuse the issue even further by mixing genuine MacGuffins with mere red herrings.

Here is the man himself explaining the concept at the beginning of this clip. (The rest of the interview is also enjoyable, covering such diverse topics as laxatives, ravens and nudity…)

North by Northwest is a fine example of a multi-MacGuffin movie. The centre of its convoluted plot involves espionage and the smuggling of what is only cursorily described as “government secrets”. But although this is behind the whole story, it is the emerging romance, accidental betrayal and frantic rescue involving the lead characters played by Cary Grant and Eva Marie Saint that really engages the characters and the audience as the film gathers pace. The MacGuffin is a trigger, but it soon fades into the background as other factors take over.

There’s nothing particularly new about the idea of a MacGuffin. I suppose the ultimate example is the Holy Grail, in the tales of King Arthur and the Knights of the Round Table and, much more recently, in the Da Vinci Code. The original Grail itself is basically a peg on which to hang a series of otherwise disconnected stories. It is barely mentioned once each individual story has started and, of course, is never found.

Physicists are fond of describing things as “The Holy Grail” of their subject, such as the Higgs Boson or gravitational waves. This always seemed to me to be an unfortunate description, as the Grail quest consumed a huge amount of resources in a predictably fruitless hunt for something whose significance could be seen to be dubious at the outset. The MacGuffin Effect nevertheless continues to reveal itself in science, although in different forms to those found in Hollywood.

The Large Hadron Collider (LHC), switched on to the accompaniment of great fanfares a few years ago, provides a nice example of how the MacGuffin actually works pretty much backwards in the world of Big Science. To the public, the LHC was built to detect the Higgs Boson, a hypothetical beastie introduced to account for the masses of other particles. If it exists, the high-energy collisions engineered by the LHC should reveal its presence. The Higgs Boson is thus the LHC’s own MacGuffin. Or at least it would be if it were really the reason why the LHC was built. In fact there are dozens of experiments at CERN and many of them have very different motivations from the quest for the Higgs, such as the search for evidence of supersymmetry.

Particle physicists are not daft, however, and they have realised that the public and, perhaps more importantly, government funding agencies need to have a really big hook to hang such a big bag of money on. Hence the emergence of the Higgs as a sort of master MacGuffin, concocted specifically for public consumption, which is much more effective politically than the plethora of mini-MacGuffins which, to be honest, would be a fairer description of the real state of affairs.

Even this MacGuffin has its problems, though. The Higgs mechanism is notoriously difficult to explain to the public, so some have resorted to a less specific but more misleading version: “The Big Bang”. As I’ve already griped, the LHC will never generate energies anything like the Big Bang did, so I don’t have any time for the language of the “Big Bang Machine”, even as a MacGuffin.

While particle physicists might pretend to be doing cosmology, we astrophysicists have to contend with MacGuffins of our own. One of the most important discoveries we have made about the Universe in the last decade is that its expansion seems to be accelerating. Since gravity usually tugs on things and makes them slow down, the only explanation that we’ve thought of for this perverse situation is that there is something out there in empty space that pushes rather than pulls. This has various possible names, but Dark Energy is probably the most popular, adding an appropriately noirish edge to this particular MacGuffin. It has even taken over in prominence from its much older relative, Dark Matter, although that one is still very much around.

We have very little idea what Dark Energy is, where it comes from, or how it relates to other forms of energy we are more familiar with, so observational astronomers have jumped in with various grandiose strategies to find out more about it. This has spawned a booming industry in surveys of the distant Universe (such as the Dark Energy Survey) all aimed ostensibly at unravelling the mystery of the Dark Energy. It seems that to get any funding at all for cosmology these days you have to sprinkle the phrase “Dark Energy” liberally throughout your grant applications.

The old-fashioned “observational” way of doing astronomy – by looking at things hard enough until something exciting appears (which it does with surprising regularity) – has been replaced by a more “experimental” approach, more like that of the LHC. We can no longer do deep surveys of galaxies to find out what’s out there. We have to do it “to constrain models of Dark Energy”. This is just one example of the not necessarily positive influence that particle physics has had on astronomy in recent times and it has been criticised very forcefully by Simon White.

Whatever the motivation for doing these projects now, they will undoubtedly lead to new discoveries. But my own view is that there will never be a solution of the Dark Energy problem until it is understood much better at a conceptual level, and that will probably mean major revisions of our theories of both gravity and matter. I venture to speculate that in twenty years or so people will look back on the obsession with Dark Energy with some amusement, as our theoretical language will have moved on sufficiently to make it seem irrelevant.

But that’s how it goes with MacGuffins. Even the Maltese Falcon turned out to be a fake in the end.

Dear Peter Coles … (via Letters to Nature)

Posted in The Universe and Stuff with tags , , , , on August 7, 2011 by telescoper

Oooh….somebody’s written me a letter via a blog!

I was just re-reading this post over at Cosmic Variance about a paper by Sean Carroll, which he summarises as: Our observed universe is highly non-generic, and in the past it was even more non-generic, or “finely tuned.” One way of describing this state of affairs is to say that the early universe had a very low entropy. … The basic argument is an old one, going back to Roger Penrose in the late 1970s. The advent of inflation in the early 1980 … Read More

via Letters to Nature


Hints of Bubbles in the Background?

Posted in Astrohype, Cosmic Anomalies, The Universe and Stuff with tags , , , on August 4, 2011 by telescoper

Looking around for a hot cosmological topic for a brief diversionary post, I came across a news item on the BBC website entitled ‘Multiverse theory suggested by microwave background‘. I’ll refer you to the item itself for a general description of the study and to the actual paper (by Feeney et al.), which has been accepted for publication in Physical Review D, for technical details.

I will, however, flagrantly steal Auntie Beeb’s nice picture, which shows the location on the sky of a number of allegedly anomalous features, namely the coloured blobs that look like Smarties in the bottom right. The greyed out bits of the map are areas of the sky masked out to avoid contamination from our own Galaxy or various other foreground sources.

One possible explanation of the Smarties from Outer Space is furnished by a variant of the theory known as chaotic inflation in which the universe comprises a collection of mini-universes  which nucleate and expand rather like bubbles in a glass of champagne. Assuming this “multiverse” picture is correct – a very big “if”, in my opinion –  it is just possible that two bubbles might collide just after nucleation leaving a sort of dent in space that we see in the microwave background.

It’s a speculative idea, of course, but there’s nothing wrong with such things. Everything starts off with speculation, really. I’ve actually read the paper, and I think it’s an excellent piece of work. I can’t resist commenting, however, that there’s a considerable gap between the conclusions of the study and the title of the BBC article, either the present ‘Multiverse theory suggested by microwave background’ or the original one, ‘Study hints at bubble universes’.

My point is that the authors  concede that they do not find any statistically significant evidence for the bubble collision interpretation, i.e. this is essentially  a null result. I’m not sure how “study fails to find evidence for..” turned into “study hints at…”.

Nonetheless, it’s an interesting paper and there’s certainly a possibility that better, cleaner and less noisy data  may find evidence where WMAP couldn’t. Yet another reason to look forward to future data from Planck!

Haloes, Hosts and Quasars

Posted in The Universe and Stuff with tags , , , , , , , , on July 20, 2011 by telescoper

Not long ago I posted an item about the exciting discovery of a quasar at redshift 7.085. I thought I’d return briefly to that topic in order (a) to draw your attention to a nice guest post by Daniel Mortlock on Andrew Jaffe’s blog giving more background to the discovery, and (b) to say  something  about the theoretical interpretation of the results.

The reason for turning to the second theme is to explain a little bit about what difficulties this observation might pose for the standard “Big Bang” cosmological model. Our general understanding of how galaxies form is that gravity gathers cold non-baryonic matter into clumps into which “ordinary” baryonic material subsequently falls, eventually forming a luminous galaxy surrounded by a “halo” of (invisible) dark matter. Quasars are galaxies in which enough baryonic matter has collected in the centre of the halo to build a supermassive black hole, which powers a short-lived phase of extremely high luminosity.

The key idea behind this picture is that the haloes form by hierarchical clustering: the first to form are small but  merge rapidly  into objects of increasing mass as time goes on. We have a fairly well-established theory of what happens with these haloes – called the Press-Schechter formalism – which allows us to calculate the number-density N(M,z) of objects of a given mass M as a function of redshift z. As an aside, it’s interesting to remark that the paper largely responsible for establishing the efficacy of this theory was written by George Efstathiou and Martin Rees in 1988, on the topic of high redshift quasars.
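
To give a flavour of how such a calculation works, here is a minimal numerical sketch of the Press-Schechter formula. It is emphatically not the calculation used for the figures below (which requires \sigma(M) computed from the linear matter power spectrum); it assumes a toy power-law form for \sigma(M) and a growth factor D(z) \approx 1/(1+z), with all numerical values chosen purely for illustration.

```python
import numpy as np

# Toy Press-Schechter sketch (illustration only).  A real calculation gets
# sigma(M) from the linear matter power spectrum; here sigma(M) is an assumed
# power law and the growth factor is approximated as D(z) ~ 1/(1+z).

delta_c = 1.686      # critical linear overdensity for collapse
rho_m   = 8.3e10     # comoving matter density, rough units of M_sun per Mpc^3 (Omega_m ~ 0.3)
M_8     = 1.8e14     # approximate mass within an 8 Mpc/h sphere, M_sun
sigma_8 = 0.8        # normalisation of fluctuations today
alpha   = 0.25       # toy slope of sigma(M)

def sigma(M, z):
    """Toy rms mass fluctuation on scale M at redshift z."""
    return sigma_8 * (M / M_8) ** (-alpha) / (1.0 + z)

def dn_dlnM(M, z):
    """Press-Schechter comoving number density of haloes per interval of ln M."""
    nu = delta_c / sigma(M, z)
    # For a power-law sigma(M), |d ln sigma / d ln M| = alpha
    return np.sqrt(2.0 / np.pi) * (rho_m / M) * nu * alpha * np.exp(-0.5 * nu**2)

for z in (0, 7):
    for M in (2e11, 1e14):   # roughly a quasar-host halo and a cluster-sized halo
        print(f"z = {z}, M = {M:.0e}: dn/dlnM ~ {dn_dlnM(M, z):.2e}")
```

Even in this crude form the qualitative behaviour is clear: the abundance of 2\times 10^{11} M_{\odot} haloes drops only modestly between redshift 0 and redshift 7, whereas cluster-sized haloes all but vanish at high redshift.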

Anyway, courtesy of my estimable PhD student Jo Short, this is how the mass function of haloes is predicted to evolve in the standard cosmological model (the different lines show the distribution as a function of redshift for redshifts from 0 to 9):

It might be easier to see what’s going on by looking instead at this figure, which shows Mn(M) instead of n(M).

You can see that the typical size of a halo increases with decreasing redshift, but it’s only at really high masses that you see a really dramatic effect.

The mass of the black hole responsible for the recently-detected high-redshift quasar is estimated to be about 2 \times 10^{9} M_{\odot}. But how does that relate to the mass of the halo within which it resides? Clearly the dark matter halo has to be more massive than the baryonic material it collects, and therefore more massive than the central black hole, but by how much?

This question is very difficult to answer, as it depends on how luminous the quasar is, how long it lives, what fraction of the baryons in the halo fall into the centre, what efficiency is involved in generating the quasar luminosity, etc. Efstathiou and Rees argued that to power a quasar with luminosity of order 10^{13} L_{\odot} for a time of order 10^{8} years requires a parent halo of mass about 2\times 10^{11} M_{\odot}.

The abundance of such haloes is down by quite a factor at redshift 7 compared to redshift 0 (the present epoch), but the fall-off is even more precipitous for haloes of larger mass than this. We really need to know how abundant such objects are before drawing definitive conclusions, and one object isn’t enough to put a reliable estimate on the general abundance, but with the discovery of this object  it’s certainly getting interesting. Haloes the size of a galaxy cluster, i.e.  10^{14} M_{\odot}, are rarer by many orders of magnitude at redshift 7 than at redshift 0 so if anyone ever finds one at this redshift that would really be a shock to many a cosmologist’s  system, as would be the discovery of quasars at  redshifts significantly higher than seven.

Another thing worth mentioning is that, although there might be a sufficient number of potential haloes to serve as hosts for a quasar, there remains the difficult issue of understanding how precisely the black hole forms and especially how long that  takes. This aspect of the process of quasar formation is much more complicated than the halo distribution, so it’s probably on detailed models of  black-hole  growth that this discovery will have the greatest impact in the short term.

JWST: Too Big to Fail?

Posted in Finance, Science Politics, The Universe and Stuff with tags , , , , , on July 7, 2011 by telescoper

News emerged last night that the US Government may be about to cancel the  James Webb Space Telescope, which is intended to be the successor to the Hubble Space Telescope. I’m slow out of the blocks on this one, as I had an early night last night, but there’s already extensive reaction to the JWST crisis around the blogosphere: see, for example, Andy Lawrence, Sarah Kendrew, and Amanda Bauer; I’m sure there are many more articles elsewhere.

The US House Appropriations Committee has released its Science Appropriations Bill for the Fiscal Year 2012, which will be voted on tomorrow. Among other announcements (of big cuts to NASA’s budget) listed in the accompanying press release we find

The bill also terminates funding for the James Webb Space Telescope, which is billions of dollars over budget and plagued by poor management.

It is undoubtedly the case that JWST is way over budget and very late. Initial estimates put the cost of the telescope at $1.6 billion and predicted that it would be launched this year (2011). Now it can’t launch until at least 2018, and probably won’t fly until as late as 2020, with an estimated final price tag of $6.8 billion. I couldn’t possibly comment on whether that is due to poor management or just that it’s an incredibly challenging project.

There’s a very informative piece on the Nature News Blog that explains that this is an early stage of the passage of the bill and that there’s a long way to go before JWST is definitely axed, but it is a worrying time for all those involved in it. There are serious implications for the European Space Agency, which is also involved in JWST, for STFC, which supports UK activity in related projects, and indeed for many groups of astronomers around the world who are currently engaged in building and testing instruments.

One of the arguments against cancelling JWST now is that all the money that has been spent on it so far would have been wasted, in other words that it’s “too big to fail”, which is an argument that obviously can’t be sustained indefinitely. It may be that, now it’s so far over budget, it has become a political liability to NASA, i.e. it’s too big to succeed. It’s too early to say that JWST is doomed – this draft budget is partly a political shot across the bows of the President by the Republicans in the House – but it does show that the politicians are prepared to think what has previously been unthinkable.

UPDATE: A statement has been issued by the American Astronomical Society.


False Convergence and the Bandwagon Effect

Posted in The Universe and Stuff with tags , , , , , , on July 3, 2011 by telescoper

In idle moments, such as can be found during sunny Sunday summer afternoons in the garden, it’s interesting to reminisce about things you worked on in the past. Sometimes such trips down memory lane turn up some quite interesting lessons for the present, especially when you look back at old papers which were published when the prevailing paradigms were different. In this spirit I was lazily looking through some old manuscripts on an ancient laptop I bought in 1993. I thought it was bust, but it turns out to be perfectly functional; they clearly made things to last in those days! I found a paper by Plionis et al. which I co-wrote in 1992; the abstract is here:

We have reanalyzed the QDOT survey in order to investigate the convergence properties of the estimated dipole and the consequent reliability of the derived value of \Omega^{0.6}/b. We find that there is no compelling evidence that the QDOT dipole has converged within the limits of reliable determination and completeness. The value of  \Omega_0 derived by Rowan-Robinson et al. (1990) should therefore be considered only as an upper limit. We find strong evidence that the shell between 140 and 160/h Mpc does contribute significantly to the total dipole anisotropy, and therefore to the motion of the Local Group with respect to the cosmic microwave background. This shell contains the Shapley concentration, but we argue that this concentration itself cannot explain all the gravitational acceleration produced by it; there must exist a coherent anisotropy which includes this structure, but extends greatly beyond it. With the QDOT data alone, we cannot determine precisely the magnitude of any such anisotropy.

(I’ve added a link to the Rowan-Robinson et al. paper for reference). This was a time long before the establishment of the current standard model of cosmology (“ΛCDM”), when the favoured theoretical paradigm was a flat universe without a cosmological constant but with a critical density of matter, corresponding to a value of the density parameter \Omega_0 =1.

In the late eighties and early nineties, a large number of observational papers emerged claiming to provide evidence for the (then) standard model, the Rowan-Robinson et al. paper being just one. The idea behind this analysis is very neat. When we observe the cosmic microwave background we find it has a significant variation in temperature across the sky on a scale of 180°, i.e. it has a strong dipole component

There is also some contamination from Galactic emission in the middle, but you can see the dipole in the above map from COBE. The interpretation of this is that the Earth is not at rest. The temperature variation caused by our motion with respect to a frame in which the cosmic microwave background (CMB) would be isotropic (i.e. be the same temperature everywhere on the sky) is just \Delta T/T \sim v/c. However, the Earth moves around the Sun. The Sun orbits the centre of the Milky Way Galaxy. The Milky Way Galaxy orbits in the Local Group of Galaxies. The Local Group falls toward the Virgo Cluster of Galaxies. We know these velocities pretty well, but they don’t account for the size of the observed dipole anisotropy. The extra bit must be due to the gravitational pull of larger-scale structures.
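
As a rough numerical illustration of the \Delta T/T \sim v/c relation, here is a back-of-envelope sketch; the Local Group speed of roughly 600 km/s with respect to the CMB frame is an assumed round number for illustration, not a measured value.

```python
# Back-of-envelope estimate of the CMB dipole amplitude from Delta T / T ~ v/c.
# The 600 km/s figure is an assumed, approximate Local Group speed.

c = 3.0e5          # speed of light, km/s
v = 600.0          # assumed Local Group speed with respect to the CMB, km/s
T_cmb = 2.73       # mean CMB temperature, K

delta_T = (v / c) * T_cmb
print(f"Dipole amplitude ~ {delta_T * 1e3:.1f} mK")   # of order a few mK
```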

If one can map the distribution of galaxies over the whole sky, as was first done with the QDOT galaxy redshift survey, then one can compare the dipole expected from the distribution of galaxies with that measured using the CMB. We can only count the galaxies – we don’t know how much mass is associated with each one – but if we find that the galaxy dipole lines up in direction with the CMB dipole, we can estimate the total amount of mass needed to give the right magnitude. I refer you to the papers for details.

Rowan-Robinson et al. argued that the QDOT galaxy dipole reaches convergence with the CMB dipole (i.e. they line up with one another) within a relatively small volume – small by cosmological standards, I mean, i.e. 100 Mpc or so – which means that there has to be quite a lot of mass in that small volume to generate the relatively large velocity indicated by the CMB dipole. Hence the result is taken to indicate a high density universe.

In our paper we questioned whether convergence had actually been reached within the QDOT sample. This is crucial because if there is significant structure beyond the scale encompassed by the survey a lower overall density of matter may be indicated. We looked at a deeper survey (of galaxy clusters) and found evidence of a large-scale structure (up to 200 Mpc) that was lined up with the smaller scale anisotropy found by the earlier paper. Our best estimate was \Omega_0\sim 0.3, with a  large uncertainty. Now, 20 years later, we have a  different standard cosmology which does indeed have \Omega_0 \simeq 0.3. We were right.

Now I’m not saying that there was anything actually wrong with the Rowan-Robinson et al. paper – the uncertainties in their analysis are clearly stated, in the body of the paper as well as in the abstract. However, that result was widely touted as evidence for a high-density universe which was an incorrect interpretation. Many other papers published at the time involved similar misinterpretations. It’s good to have a standard model, but it can lead to a publication bandwagon – papers that agree with the paradigm get published easily, while those that challenge it (and are consequently much more interesting) struggle to make it past referees. The accumulated weight of evidence in cosmology is much stronger now than it was in 1990, of course, so the standard model is a more robust entity than the corresponding version of twenty years ago. Nevertheless, there’s still a danger that by treating ΛCDM as if it were the absolute truth, we might be closing our eyes to precisely those clues that will lead us to an even better understanding.  The perils of false convergence  are real even now.

As a grumpy postscript, let me just add that Plionis et al. has attracted a meagre 18 citations whereas Rowan-Robinson et al. has 178. Being right doesn’t always get you cited.

Thought for the Day

Posted in The Universe and Stuff with tags , , , on July 1, 2011 by telescoper

For naturalism, fed on recent cosmological speculations, mankind is in a position similar to that of a set of people living on a frozen lake, surrounded by cliffs over which there is no escape, yet knowing that little by little the ice is melting, and the inevitable day drawing near when the last film of it will disappear, and to be drowned ignominiously will be the human creature’s portion. The merrier the skating, the warmer and more sparkling the sun by day, and the ruddier the bonfires at night, the more poignant the sadness with which one must take in the meaning of the total situation.

From The Varieties of Religious Experience by William James, first published in 1902…

Bright and Early

Posted in The Universe and Stuff with tags , , , , , , on June 29, 2011 by telescoper

Some interesting astronomy news emerged this evening relating to a paper published in the 30th June issue of the journal Nature. The press release from the European Southern Observatory (ESO) is quite detailed, so I’ll refer you there for the minutiae, but in a nutshell:

A team of European astronomers has used ESO’s Very Large Telescope and a host of other telescopes to discover and study the most distant quasar found to date. This brilliant beacon, powered by a black hole with a mass two billion times that of the Sun, is by far the brightest object yet discovered in the early Universe.

and the interesting numbers are given here (with links from the press release):

The quasar that has just been found, named ULAS J1120+0641 [2], is seen as it was only 770 million years after the Big Bang (redshift 7.1, [3]). It took 12.9 billion years for its light to reach us.

Although more distant objects have been confirmed (such as a gamma-ray burst at redshift 8.2, eso0917, and a galaxy at redshift 8.6, eso1041), the newly discovered quasar is hundreds of times brighter than these. Amongst objects bright enough to be studied in detail, this is the most distant by a large margin.

When I was a lad, or at least a postdoc, the most distant objects known were quasars, although in those days the record holders had redshifts just over half that of the newly discovered one. Nowadays technology has improved so much that astronomers can detect “normal” galaxies at even higher redshifts but quasars remain interesting because of their extraordinary luminosity. The standard model for how a quasar can generate so much power involves a central black hole onto which matter falls, liberating vast amounts of gravitational energy.

You can understand how efficient this is by imagining a mass m falling onto a black hole of mass M from a large distance to the horizon of the black hole, which is at the Schwarzschild radius R=2GM/c^2. Since the gravitational potential energy at a radius R is -GMm/R, the energy involved in bringing a mass m from infinity to the horizon is a staggering \frac{1}{2} mc^2, i.e. half the rest mass energy of the infalling material. This is an overestimate for various reasons, but it gives you an idea of how much energy is available if you can get gravity to do the work; doing the calculation properly still gives an answer much higher than the amount of energy that can be released by, e.g., nuclear reactions.
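
For what it’s worth, here is the same back-of-envelope arithmetic in code, comparing the naive Newtonian yield \frac{1}{2}mc^2 quoted above with the rough efficiency of hydrogen fusion; the 0.7% fusion figure is a standard ballpark number I’ve assumed for the comparison, not something taken from the paper.

```python
# Crude comparison of gravitational accretion energy with nuclear fusion.
# The factor 1/2 is the naive Newtonian estimate from the text (an overestimate);
# the 0.7% fusion efficiency is the usual ballpark for hydrogen -> helium burning.

c = 3.0e8                     # speed of light, m/s

grav_per_kg   = 0.5 * c**2    # J released per kg falling to the horizon (naive)
fusion_per_kg = 0.007 * c**2  # J released per kg of hydrogen fused to helium

print(f"Gravitational yield: {grav_per_kg:.2e} J/kg")
print(f"Fusion yield:        {fusion_per_kg:.2e} J/kg")
print(f"Ratio:               ~{grav_per_kg / fusion_per_kg:.0f}")
```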

The point is, though, that black holes aren’t built in a day, so if you see a quasar so far away that its light has taken most of the age of the Universe to reach us, then its black hole must have grown very quickly. This one seems to be a particularly massive one, which means it must have grown very quickly indeed. Through observations like this we learn something potentially very interesting about the relationship between galaxies and their central black holes, and how they both form and evolve.

On the lighter side, ESO have also produced the following animation which I suppose is quite illustrative, but what are the sound effects all about?

Cosmic Clumpiness Conundra

Posted in The Universe and Stuff with tags , , , , , , , , , , , , , , on June 22, 2011 by telescoper

Well there’s a coincidence. I was just thinking of doing a post about cosmological homogeneity, spurred on by a discussion at the workshop I attended in Copenhagen a couple of weeks ago, when suddenly I’m presented with a topical hook to hang it on.

New Scientist has just carried a report about a paper by Shaun Thomas and colleagues from University College London the abstract of which reads

We observe a large excess of power in the statistical clustering of luminous red galaxies in the photometric SDSS galaxy sample called MegaZ DR7. This is seen over the lowest multipoles in the angular power spectra Cℓ in four equally spaced redshift bins between 0.4 \leq z \leq 0.65. However, it is most prominent in the highest redshift band at \sim 4\sigma and it emerges at an effective scale k \sim 0.01 h{\rm Mpc}^{-1}. Given that MegaZ DR7 is the largest cosmic volume galaxy survey to date (3.3({\rm Gpc} h^{-1})^3) this implies an anomaly on the largest physical scales probed by galaxies. Alternatively, this signature could be a consequence of it appearing at the most systematically susceptible redshift. There are several explanations for this excess power that range from systematics to new physics. We test the survey, data, and excess power, as well as possible origins.

To paraphrase, it means that the distribution of galaxies in the survey they study is clumpier than expected on very large scales. In fact the level of fluctuation is about a factor two higher than expected on the basis of the standard cosmological model. This shows that either there’s something wrong with the standard cosmological model or there’s something wrong with the survey. Being a skeptic at heart, I’d bet on the latter if I had to put my money somewhere, because this survey involves photometric determinations of redshifts rather than the more accurate and reliable spectroscopic variety. I won’t be getting too excited about this result unless and until it is confirmed with a full spectroscopic survey. But that’s not to say it isn’t an interesting result.

For one thing it keeps alive a debate about whether, and at what scale, the Universe is homogeneous. The standard cosmological model is based on the Cosmological Principle, which asserts that the Universe is, in a broad-brush sense, homogeneous (is the same in every place) and isotropic (looks the same in all directions). But the question that has troubled cosmologists for many years is what is meant by large scales? How broad does the broad brush have to be?

At our meeting a few weeks ago, Subir Sarkar from Oxford pointed out that the evidence for cosmological homogeneity isn’t as compelling as most people assume. I blogged some time ago about an alternative idea, that the Universe might have structure on all scales, as would be the case if it were described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a spherical volume of radius R is proportional to R^D. If galaxies are distributed uniformly (homogeneously) then D = 3, as the number of neighbours simply depends on the volume of the sphere, i.e. as R^3, and the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume; galaxies distributed in sheets would have D=2, and so on.
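
To make the definition concrete, here is a small sketch of how one might estimate D from a toy point catalogue by counting neighbours within spheres of increasing radius. The uniform random points used here are just a stand-in for a real survey, so the fitted slope should come out close to 3.

```python
import numpy as np

# Estimate a fractal dimension D from the scaling N(<R) ~ R^D of neighbour
# counts around a point.  The toy catalogue is uniform random points in a box,
# so the recovered slope should be close to D = 3; a filamentary set would
# give a slope nearer 1, a sheet-like set nearer 2.

rng = np.random.default_rng(42)
points = rng.uniform(0.0, 100.0, size=(20000, 3))   # toy 'galaxies' in a 100^3 box

centre = np.array([50.0, 50.0, 50.0])
r = np.linalg.norm(points - centre, axis=1)

radii = np.logspace(0.7, 1.5, 8)                    # sphere radii from ~5 to ~32
counts = np.array([(r < R).sum() for R in radii])

# Fit log N(<R) = D log R + const; the slope is the estimated dimension.
D, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print(f"Estimated D ~ {D:.2f} (expect ~3 for a homogeneous distribution)")
```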

The discussion of a fractal universe is one I’m overdue to return to. In my previous post  I left the story as it stood about 15 years ago, and there have been numerous developments since then. I will do a “Part 2” to that post before long, but I’m waiting for some results I’ve heard about informally, but which aren’t yet published, before filling in the more recent developments.

We know that D \simeq 1.2 on small scales (in cosmological terms, still several Megaparsecs), but the evidence for a turnover to D=3 is not so strong. The point is, however, at what scale would we say that homogeneity is reached? Not when D=3 exactly, because there will always be statistical fluctuations; see below. What scale, then? Where D=2.9? D=2.99?

What I’m trying to say is that much of the discussion of this issue involves the phrase “scale of homogeneity” when that is a poorly defined concept. There is no such thing as “the scale of homogeneity”, just a whole host of quantities that vary with scale in a way that may or may not approach the value expected in a homogeneous universe.

It’s even more complicated than that, actually. When we cosmologists adopt the Cosmological Principle we apply it not to the distribution of galaxies in space, but to space itself. We assume that space is homogeneous so that its geometry can be described by the Friedmann-Lemaitre-Robertson-Walker metric.

According to Einstein’s  theory of general relativity, clumps in the matter distribution would cause distortions in the metric which are roughly related to fluctuations in the Newtonian gravitational potential \delta\Phi by \delta\Phi/c^2 \sim \left(\lambda/ct \right)^{2} \left(\delta \rho/\rho\right), give or take a factor of a few, so that a large fluctuation in the density of matter wouldn’t necessarily cause a large fluctuation of the metric unless it were on a scale \lambda reasonably large relative to the cosmological horizon \sim ct. Galaxies correspond to a large \delta \rho/\rho \sim 10^6 but don’t violate the Cosmological Principle because they are too small to perturb the background metric significantly. Even the big clumps found by the UCL team only correspond to a small variation in the metric. The issue with these, therefore, is not so much that they threaten the applicability of the Cosmological Principle, but that they seem to suggest structure might have grown in a different way to that usually supposed.
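
Just to put rough numbers on the claim that galaxies hardly perturb the metric, here is a quick order-of-magnitude check using the equivalent direct estimate \delta\Phi/c^2 \sim GM/(Rc^2); the galaxy mass and size used are assumed round numbers, not measurements of any particular object.

```python
# Order-of-magnitude estimate of the metric perturbation due to a galaxy,
# delta Phi / c^2 ~ GM / (R c^2).  Input values are assumed round numbers.

G    = 6.67e-11        # gravitational constant, m^3 kg^-1 s^-2
c    = 3.0e8           # speed of light, m/s
Msun = 2.0e30          # solar mass, kg
kpc  = 3.09e19         # kiloparsec, m

M = 1e11 * Msun        # assumed galaxy mass
R = 10 * kpc           # assumed galaxy scale

print(f"delta Phi / c^2 ~ {G * M / (R * c**2):.1e}")   # ~ a few times 1e-7
```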

The problem is that we can’t measure the gravitational potential on these scales directly so our tests are indirect. Counting galaxies is relatively crude because we don’t even know how well galaxies trace the underlying mass distribution.

An alternative way of doing this is to use not the positions of galaxies, but their velocities (usually called peculiar motions). These deviations from a pure Hubble flow are caused by lumps of matter pulling on the galaxies; the lumpier the Universe is, the larger the velocities are, and the larger the lumps are, the more coherent the flow becomes. On small scales galaxies whizz around at speeds of hundreds of kilometres per second relative to each other, but averaged over larger and larger volumes the bulk flow should get smaller and smaller, eventually coming to zero in a frame in which the Universe is exactly homogeneous and isotropic.

Roughly speaking the bulk flow v should relate to the metric fluctuation as approximately \delta \Phi/c^2 \sim \left(\lambda/ct \right) \left(v/c\right).

It has been claimed that some observations suggest the existence of a dark flow which, if true, would challenge the reliability of the standard cosmological framework, but these results are controversial and are yet to be independently confirmed.

But suppose you could measure the net flow of matter in spheres of increasing size. At what scale would you claim homogeneity is reached? Not when the flow is exactly zero, as there will always be fluctuations, but exactly how small?

The same goes for all the other possible criteria we have for judging cosmological homogeneity. We are free to choose the point where we say the level of inhomogeneity is sufficiently small to be satisfactory.

In fact, the standard cosmology (or at least the simplest version of it) has the peculiar property that it doesn’t ever reach homogeneity anyway! If the spectrum of primordial perturbations is scale-free, as is usually supposed, then the metric fluctuations don’t vary with scale at all. In fact, they’re fixed at a level of \delta \Phi/c^2 \sim 10^{-5}.

The fluctuations are small, so the FLRW metric is pretty accurate, but they don’t get smaller with increasing scale, so there is no point at which it becomes exactly true. So let’s have no more of “the scale of homogeneity” as if that were a meaningful phrase. Let’s keep the discussion to the behaviour of suitably defined measurable quantities and how they vary with scale. You know, like real scientists do.

The Laws of Extremely Improbable Things

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , , , , on June 9, 2011 by telescoper

After a couple of boozy nights in Copenhagen during the workshop which has just finished, I thought I’d take things easy this evening and make use of the free internet connection in my hotel to post a short item about something I talked about at the workshop here.

Actually I’ve been meaning to mention a nice bit of statistical theory called Extreme Value Theory on here for some time, because not so many people seem to be aware of it, but somehow I never got around to writing about it. People generally assume that statistical analysis of data revolves around “typical” quantities, such as averages or root-mean-square fluctuations (i.e. “standard” deviations). Sometimes, however, it’s not the typical points that are interesting, but those that appear to be drawn from the extreme tails of a probability distribution. This is particularly the case in planning for floods and other natural disasters, but this field also finds a number of interesting applications in astrophysics and cosmology. What should be the mass of the most massive cluster in my galaxy survey? How bright the brightest galaxy? How hot the hottest hotspot in the distribution of temperature fluctuations on the cosmic microwave background sky? And how cold the coldest? Sometimes just one anomalous event can be enormously useful in testing a theory.

I’m not going to go into the theory in any great depth here. Instead I’ll just give you a simple idea of how things work. First imagine you have a set of n observations labelled X_i. Assume that these are independent and identically distributed with a distribution function F(x), i.e.

\Pr(X_i\leq x)=F(x)

Now suppose you locate the largest value in the sample, X_{\rm max}. What is the distribution of this value? The answer is not F(x), but it is quite easy to work out because the probability that the largest value is less than or equal to, say, z is just the probability that each one is less than or equal to that value, i.e.

F_{\rm max}(z) = \Pr \left(X_{\rm max}\leq z\right)= \Pr \left(X_1\leq z, X_2\leq z\ldots, X_n\leq z\right)

Because the variables are independent and identically distributed, this means that

F_{\rm max} (z) = \left[ F(z) \right]^n

The probability density function associated with this is then just

f_{\rm max}(z) = n f(z) \left[ F(z) \right]^{n-1}

In a situation in which F(x) is known and in which the other assumptions apply, this simple result offers the best way to proceed in analysing extreme values.
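
If you want to convince yourself of this, it only takes a few lines to check the result numerically; the exponential distribution used below is just an arbitrary illustrative choice of F.

```python
import numpy as np

# Monte Carlo check that the maximum of n i.i.d. draws with CDF F has CDF [F(z)]^n.
# F is taken to be a unit exponential purely for illustration.

rng = np.random.default_rng(1)
n, trials = 100, 100_000

maxima = rng.exponential(scale=1.0, size=(trials, n)).max(axis=1)

z = 7.0
empirical = (maxima <= z).mean()          # fraction of simulated maxima below z
analytic = (1.0 - np.exp(-z)) ** n        # [F(z)]^n for the exponential CDF

print(f"P(max <= {z}): empirical {empirical:.4f}, analytic {analytic:.4f}")
```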

The mathematical interest in extreme values, however, derives from a paper in 1928 by Fisher & Tippett which paved the way towards a general theory of extreme value distributions. I don’t want to go too much into details about that, but I will give a flavour by mentioning a historically important, perhaps surprising, and in any case rather illuminating example.

It turns out that for any distribution F(x) of exponential type, which means that

\lim_{x\rightarrow\infty} \frac{1-F(x)}{f(x)} = 0

then there is a stable asymptotic distribution of extreme values, as n \rightarrow \infty which is independent of the underlying distribution, F(x), and which has the form

G(z) = \exp \left(-\exp \left( -\frac{(z-a_n)}{b_n} \right)\right)

where a_n and b_n are location and scale parameters; this is called the Gumbel distribution. It’s not often you come across functions of the form e^{-e^{-y}}!

This result, and others, has established a robust and powerful framework for modelling extreme events. One of course has to be particularly careful if the variables involved are not independent (e.g. part of correlated sequences) or if they are not identically distributed (e.g. if the distribution is changing with time). One also has to be aware of the possibility that an extreme data point may simply be some sort of glitch (e.g. a cosmic ray hit on a pixel, to give an astronomical example). It should also be mentioned that the asymptotic theory is what it says on the tin – asymptotic. Some distributions of exponential type converge extremely slowly to the asymptotic form. A notable example is the Gaussian, which converges at the pathetically slow rate of \sqrt{\ln(n)}! This is why I advocate using the exact distribution resulting from a fully specified model whenever this is possible.
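
As a quick illustration of that last point, one can compare the exact distribution of the maximum of n standard Gaussian variables, \Phi(z)^n, with the limiting Gumbel form. The norming constants a_n and b_n used below are the standard textbook choices for the Gaussian case; treat the sketch as exactly that, a sketch, rather than a definitive calculation.

```python
import numpy as np
from scipy.stats import norm

# Compare the exact CDF of the maximum of n standard normal variables,
# Phi(z)^n, with its Gumbel approximation.  The discrepancy shrinks only
# very slowly as n grows, illustrating the slow convergence noted above.

def gumbel_cdf(z, a, b):
    return np.exp(-np.exp(-(z - a) / b))

for n in (1e2, 1e4, 1e8):
    b = 1.0 / np.sqrt(2.0 * np.log(n))                   # scale parameter b_n
    a = (np.sqrt(2.0 * np.log(n))
         - 0.5 * (np.log(np.log(n)) + np.log(4.0 * np.pi)) * b)  # location a_n
    z = np.linspace(a - 2.0, a + 6.0, 2001)
    gap = np.abs(norm.cdf(z) ** n - gumbel_cdf(z, a, b)).max()
    print(f"n = {n:.0e}: max |exact - Gumbel| = {gap:.3f}")
```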

The pitfalls are dangerous and have no doubt led to numerous misapplications of this theory, but, done properly, it’s an approach that has enormous potential.

I’ve been interested in this branch of statistical theory for a long time, since I was introduced to it while I was a graduate student by a classic paper written by my supervisor. In fact I contributed to the classic old literature on this topic myself, with a paper on extreme temperature fluctuations in the cosmic microwave background way back in 1988.

Of course there weren’t any CMB maps back in 1988, and if I had thought more about it at the time I should have realised that since this was all done using Gaussian statistics, there was a 50% chance that the most interesting feature would actually be a negative rather than positive fluctuation. It turns out that twenty-odd years on, people are actually discussing an anomalous cold spot in the data from WMAP, proving that Murphy’s law applies to extreme events…