Archive for Cosmology

The Meaning of Inflation

Posted in Biographical, The Universe and Stuff with tags , , on June 4, 2010 by telescoper

Our little meeting here in Copenhagen is more-or-less over and I’ve now got a free day to enjoy my birthday. It’s a lovely sunny morning and I’m looking forward to being a tourist. Yesterday we had a busy day of talks and discussions followed by a pleasant dinner in a nearby restaurant. One of the good things about small informal meetings like this is that you really get the chance to ask proper questions and have a meaningful dialogue, although sometimes things get a bit heated – especially when people like Leonid Grishchuk are present!

Leonid’s talk yesterday contained various polemical statements about cosmic inflation involving words like “bullshit” and “nonsense”. In the subsequent discussion the question arose as to what, precisely, the word inflation means.

In a nutshell, cosmic inflation is the name given to a short period of rapidly accelerating expansion in the very early Universe that caused it to expand by an enormous factor and also laid down a spectrum of fluctuations through quantum-mechanical processes. Inflation is part of the standard “Big Bang” cosmological model; it is a very elegant theory, and there is a great deal of circumstantial evidence that it actually happened. I think it’s safe to say that there isn’t definitive proof, but there is certainly a thriving industry associated with its many versions.

However, the point is that there are many variants of the basic inflationary universe scenario – involving different fields, energy scales and so on – and, although they share some common features, they also differ dramatically from one to the other. What, it was asked, are the essential elements of inflation and what bits are just the trimmings?

In order to contribute meaningfully to the discussion I called upon the assistance of the Oxford English Dictionary to see how it defines inflation. The result was unexpectedly hilarious. Here are the first four definitions as they appear in the OED’s online edition:

  1. The action of inflating or distending with air or gas
  2. The condition of being inflated with air or gas, or being distended or swollen as if with air
  3. The condition of being puffed up with vanity, pride or baseless notions
  4. The quality of language or style when it is swollen with big or pompous words; turgidity, bombast

I was quite surprised that definitions to do with economics only appear further down the list, but cosmology’s position even lower down wasn’t unexpected. The leading entries are brilliant, though, especially definition number 3. I’ll never be able to mention inflation again without thinking of that!

I fear I may have given Leonid quite a bit of ammunition for future anti-inflation rants although if he uses the phrase “baseless notions” in future talks he should perhaps also be careful  to steer clear of “bombast”…

Clustering in the Deep

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , , on May 27, 2010 by telescoper

I couldn’t resist a quick lunchtime post about the results that have come out concerning the clustering of galaxies found by the HerMES collaboration using the Herschel Telescope. There’s quite a lengthy press release accompanying the new results, and there’s not much point in repeating the details here, so I’ll just show a wonderful image of thousands of galaxies and their far-infrared colours.

Image Credit: European Space Agency, SPIRE and HERMES consortia

According to the press release, this looks “like grains of sand”. I wonder if whoever wrote the text was deliberately referring to Genesis 22:17?

.. they shall multiply as the stars of the heaven, and as the grains of sand upon the sea shore.

However, let me take issue a little with the following excerpt from said press release:

While at first glance the galaxies look to be scattered randomly over the image, in fact they are not. A closer look will reveal that there are regions which have more galaxies in, and regions that have fewer.

A while ago I posted an item asking what “scattered randomly” is meant to mean. It included this picture:

This is what a randomly-scattered set of points actually looks like. You’ll see that it, too, has some regions with more points in them than others. Coincidentally, I showed the same picture again this morning in one of my postgraduate lectures on statistics, and a majority of the class – as, I’m sure, will many of you seeing it for the first time – thought it showed a clustered pattern. Whatever “randomness” means precisely, the word certainly implies some sort of variation from place to place, whereas the press release implies the opposite. I think a little re-wording might be in order.
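
If you want to check this for yourself, here’s a minimal sketch (my own illustration, in Python with numpy and matplotlib – nothing from the press release or from the original post) that throws points down completely at random and then counts them in cells. Even though the pattern is unclustered by construction, some cells come out busier than others; genuine clustering shows up as cell-to-cell variance in excess of the Poisson value, which is the point of the next paragraph.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# An unclustered ("random") pattern: points placed independently
# and uniformly over the unit square, i.e. a Poisson point process.
n_points = 1000
x = rng.uniform(0, 1, n_points)
y = rng.uniform(0, 1, n_points)

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(x, y, s=2, color="k")
ax.set_aspect("equal")
ax.set_title("Unclustered (Poisson) point pattern")
plt.show()

# Counts-in-cells: even this pattern shows apparent clumps and voids.
# For a Poisson process the variance of the counts equals the mean;
# clustered galaxies would give a variance greater than the mean.
counts, _, _ = np.histogram2d(x, y, bins=10)
print("mean count per cell:", counts.mean())
print("variance of counts: ", counts.var())
```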

What galaxy clustering statistics reveal is that the variation in density from place to place is greater than that expected in a random distribution like the one shown. This has been known since the 1960s, so it’s not the mere fact that these sources are clustered that’s so important. In fact, the preliminary clustering results from the HerMES surveys – described in a little more detail in a short paper available on the arXiv – are especially interesting because they show that some of the galaxies seen in this deep field are extremely bright (in the far-infrared), extremely distant, high-redshift objects which exhibit strong spatial correlations. The statistical form of this clustering provides very useful input for theorists trying to model the processes of galaxy formation and evolution. In particular, the brightest objects at high redshift have a propensity to appear preferentially in dense concentrations, making them even more strongly clustered than rank-and-file galaxies. This fact probably contains important information about the environmental factors responsible for driving their enormous luminosities.

The results are still preliminary, but we’re starting to see concrete evidence of the impact Herschel is going to have on extragalactic astrophysics.

A Star is Porn

Posted in The Universe and Stuff with tags , , , on May 16, 2010 by telescoper

I started thinking about the analogy between astronomy and pornography after seeing a hilarious blog post by Amanda Bauer  that has a connection with my forthcoming (popular) book, which has the working title Naked Universe. It’s basically a collection of essays about cosmology, trying to look at the subject from unusual and provocative angles. I decided to give you a bit of a flavour of this connection here. It’s intended to be a bit of a joke, but it does make a semi-serious point about the difference between astronomy and other branches of science.

Although it’s one of the oldest fields of scientific enquiry, astronomy possesses a number of features that set it apart from most other branches of science. One of the most important is that it isn’t really an experimental science, but an observational one. Hands-on disciplines, specifically those involving laboratory experiments,  require a dialogue between the scientist and nature. The scientist can control the physical parameters of the system under scrutiny and explore its behaviour under different conditions in order to establish patterns and test theoretical explanations. The scientist chooses the questions to ask, the experiment is run, and nature gives its answer. If more information is needed, another experiment is set up with different parameter choices.

Astronomy is different. Its subject matter, the Universe of stars and galaxies,  is remote and inaccessible.   We only have what is “out there” already. We had no hand in setting it up, and we can’t intervene if it behaves in an unexpected way. We are forced to work only with what has been given to us. Out there in the darkness the Cosmos may be beautiful, but all we can do is look at  pictures of it. We never get to experience it in the flesh. Experimentalists have real intercourse with nature, but astronomers have to be content with being mere voyeurs.

This is not to say that all astronomers are dirty old men in grubby raincoats – although I have to say that I know a few who could be described like that – but many mainstream scientists do indeed tend to look down on us, at least partly because of the unconventional practices I’ve alluded to. On the other hand, I suspect they also secretly envy us. From time to time they probably have a guilty peek at their favourite pictures too. Every time physicists look at astronomical images, do they feel just a little bit guilty?

You can hardly go on the internet these days without finding a website devoted to pornography, er, astronomy. This is hardly surprising, because both astronomy and pornography have led to technological advances that helped fuel the digital revolution. Astronomy gave us the CCD camera, which ushered in the digital camera that has made it much easier for both amateurs and professionals to make their own pornographic, er, astronomical images. On the other hand, the porn industry was largely responsible for the rapid evolution of video-streaming technology. That must be why astronomers spend so much of their time doing video conferences…

Astronomers also led the way in the development of virtual reality. Frustrated by their inability to get  up close and personal with the objects of their desire, they have resorted to the construction of elaborate three-dimensional computer simulations. In these they can interact with and manipulate what goes on until they reach a satisfactory outcome. I’ve never found this kind of thing at all rewarding – the simulations are just not sufficiently realistic –  but large numbers of cosmologists seem to be completely hooked on them.

Skepsis

Posted in Politics, The Universe and Stuff with tags , , , , , , on May 1, 2010 by telescoper

This past week was the final week of proper teaching at Cardiff University, so I’ve done my last full lectures, tutorials and exercise classes of the academic year. Yesterday I assessed a bunch of 3rd-year project talks, and soon those students will be handing in their written reports for marking. Next week will be a revision week, and shortly after that the examinations begin. And so the cycle of academic life continues, in a curious parallel to the football league season – the other routine that provides me with important markers for the passage of the year.

Anyway, this week I gave the last lecture to my first-year class on Astrophysical Concepts. This is a beginning-level course that tries to introduce some of the theory behind astronomy, focussing on the role of gravity. I cover orbits in Newtonian gravity, gravity and hydrostatic equilibrium in extended bodies, a bit about stellar structure, gravitational collapse, and so on. In the last part I do a bit of cosmology. I decided to end this time with a lecture about dark energy as, according to the standard model, it accounts for about 75% of the energy budget of the Universe. It’s also something we don’t understand very well at all.

To make a point, I usually show the following picture (credit to the High-z supernova search team).

 What is plotted is the redshift of each supernova (along the x-axis), which relates to the factor by which the universe has expanded since light set out from it. A redshift of 0.5 means the universe was compressed by a factor 1.5 in all dimensions at the time when that particular supernova went bang. The y-axis shows the really hard bit to get right. It’s the estimated distance (in terms of distance modulus) of the supernovae. In effect, this is a measure of how faint the sources are. The theoretical curves show the faintness expected of a standard source observed at a given redshift in various cosmological models. The bottom panel shows these plotted with a reference curve taken out so the trend is easier to see.

The argument from these data is that the high-redshift supernovae are fainter than one would expect in models without dark energy (represented by \Omega_{\Lambda} in the diagram). If this is true then it means the luminosity distance of these sources is greater than it would be in a decelerating universe. They can be accounted for, however, if the universe’s expansion rate has been accelerating since light set out from the supernovae. In the bog-standard cosmological models we all like to work with, acceleration requires that \rho + 3p/c^2 be negative. The “vacuum” equation of state p=-\rho c^2 provides a simple way of achieving this, but there are many other forms of energy that could do it too, and we don’t know which one is present or why…
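
To put a rough number on “fainter than one would expect”, here’s a back-of-the-envelope sketch (mine, in Python with numpy and scipy; flat models only, an assumed value of the Hubble constant, and emphatically not the High-z team’s actual analysis) that computes the distance modulus in an accelerating and a decelerating model. The accelerating model gives a larger distance modulus, i.e. fainter supernovae, at the same redshift.

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0     # Hubble constant in km/s/Mpc (assumed for illustration)
c = 2.998e5   # speed of light in km/s

def E(z, om, ol):
    # Dimensionless expansion rate for a flat matter + Lambda model
    return np.sqrt(om * (1 + z)**3 + ol)

def distance_modulus(z, om, ol):
    # Luminosity distance d_L = (1+z) (c/H0) int_0^z dz'/E(z'), in Mpc;
    # distance modulus mu = 5 log10(d_L / 10 pc) = 5 log10(d_L/Mpc) + 25.
    integral, _ = quad(lambda zp: 1.0 / E(zp, om, ol), 0.0, z)
    d_L = (1 + z) * (c / H0) * integral
    return 5 * np.log10(d_L) + 25

for z in (0.5, 1.0):
    mu_acc = distance_modulus(z, 0.3, 0.7)  # accelerating (Lambda) model
    mu_dec = distance_modulus(z, 1.0, 0.0)  # decelerating (matter-only) model
    print(f"z={z}: mu={mu_acc:.2f} with Lambda, {mu_dec:.2f} without; "
          f"supernovae {mu_acc - mu_dec:.2f} mag fainter with Lambda")
```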

This plot contains the principal evidence that has led to most cosmologists accepting that the Universe is accelerating.  However, when I show it to first-year undergraduates (or even to members of the public at popular talks), they tend to stare in disbelief. The errors are huge, they say, and there are so  few data points. It just doesn’t look all that convincing. Moreover, there are other possible explanations. Maybe supernovae were different beasties back when the universe was young. Maybe something has absorbed their light making them look fainter rather than being further away. Maybe we’ve got the cosmological models wrong.

The reason I show this diagram is precisely because it isn’t superficially convincing. When they see it, students probably form the opinion that all cosmologists are gullible idiots. I’m actually pleased by that.  In fact, it’s the responsibility of scientists to be skeptical about new discoveries. However, it’s not good enough just to say “it’s not convincing so I think it’s rubbish”. What you have to do is test it, combine it with other evidence, seek alternative explanations and test those. In short you subject it to rigorous scrutiny and debate. It’s called the scientific method.

Some of my colleagues express doubts about me talking about dark energy in first-year lectures when the students haven’t learned general relativity. But I stick to my guns. Too many people think science has to be taught as great stacks of received wisdom, of theories that are unquestionably “right”. Frontier sciences such as cosmology give us the chance to demonstrate the process by which we find out about the answers to big questions, not by believing everything we’re told but by questioning it.

My attitude to dark energy is that, given our limited understanding of the constituents of the universe and the laws of matter, it’s the best explanation we have of what’s going on. There is corroborating evidence of missing energy, from the cosmic microwave background and measurements of galaxy clustering, so it does have explanatory power. I’d say it was quite reasonable to believe in dark energy on the basis of what we know (or think we know) about the Universe.  In other words, as a good Bayesian, I’d say it was the most probable explanation. However, just because it’s the best explanation we have now doesn’t mean it’s a fact. It’s a credible hypothesis that deserves further work, but I wouldn’t bet much against it turning out to be wrong when we learn more.

I have to say that too many cosmologists seem to accept the reality of dark energy  with the unquestioning fervour of a religious zealot.  Influential gurus have turned the dark energy business into an industrial-sized bandwagon that sometimes makes it difficult, especially for younger scientists, to develop independent theories. On the other hand, it is clearly a question of fundamental importance to physics, so I’m not arguing that such projects should be axed. I just wish the culture of skepticism ran a little deeper.

Another context in which the word “skeptic” crops up frequently nowadays is in connection with climate change, although there it has come to mean “denier” rather than “doubter”. I’m not an expert on climate change, so I’m not going to pretend that I understand all the details. However, there is an interesting point to be made in comparing climate change with cosmology. To make the point, here’s another figure.

There’s obviously a lot of noise and it’s only the relatively few points at the far right that show a clear increase (just as in the first Figure, in fact). However, looking at the graph I’d say that, assuming the historical data points are accurate, it looks very convincing that the global mean temperature is rising with alarming rapidity. Modelling the Earth’s climate is very difficult and we have to leave it to the experts to assess the effects of human activity on this curve. There is a strong consensus from scientific experts, as monitored by the Intergovernmental Panel on Climate Change, that it is “very likely” that the increasing temperatures are due to increased atmospheric concentrations of greenhouse gases.

There is, of course, a bandwagon effect going on in the field of climatology, just as there is in cosmology. This tends to stifle debate, make things difficult for dissenting views to be heard and evaluated rationally,  and generally hinders the proper progress of science. It also leads to accusations of – and no doubt temptations leading to – fiddling of the data to fit the prevailing paradigm. In both fields, though, the general consensus has been established by an honest and rational evaluation of data and theory.

I would say that any scientist worthy of the name should be skeptical about the human-based interpretation of these data and that, as in cosmology (or any scientific discipline), alternative theories should be developed and additional measurements made. However, the situation in climatology is very different from that in cosmology in one important respect. The Universe will still be here in 100 years’ time. We might not.

The big issue relating to climate change is not just whether we understand what’s going on in the Earth’s atmosphere, it’s the risk to our civilisation of not doing anything about it. This is a great example where the probability of being right isn’t the sole factor in making a decision. Sure, there’s a chance that humans aren’t responsible for global warming. But if we carry on as we are for decades until we prove conclusively that we are, then it will be too late. The penalty for being wrong will be unbearable. On the other hand, if we tackle climate change by adopting greener technologies, burning less fossil fuel, wasting less energy and so on, these changes may cost us a bit of money in the short term, but frankly we’ll be better off anyway, whether we do it for the right reasons or not. Of course those whose personal livelihoods depend on the status quo are the ones who challenge the scientific consensus most vociferously. They would, wouldn’t they? Moreover, as Andy Lawrence pointed out on his blog recently, the oil is going to run out soon anyway…

This is a good example of a decision that doesn’t have to wait for a definitive judgement of the probability of being right. In that respect, the issue of how likely it is that the scientists are correct on this one is almost irrelevant. Even if you’re a complete disbeliever in science you should know how to respond to this issue, following the logic of Blaise Pascal. He argued that there’s no rational argument for the existence or non-existence of God, but that the consequences of not believing if God does exist (eternal damnation) are much worse than those of behaving as if you believe in God when he doesn’t. For “God” read “climate change”, and let Pascal’s wager be your guide…
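
For the quantitatively minded, the wager amounts to a one-line expected-cost calculation. Here’s a toy version (all the numbers are invented purely for illustration; they are not estimates of real probabilities or costs):

```python
# Toy Pascal-style decision: compare expected costs of acting now
# versus doing nothing, for various probabilities that the
# scientific consensus is right. Numbers are made up for illustration.
def expected_costs(p_consensus_right, cost_mitigation, cost_catastrophe):
    act = cost_mitigation                              # paid either way
    do_nothing = p_consensus_right * cost_catastrophe  # paid only if the consensus is right
    return act, do_nothing

for p in (0.9, 0.5, 0.1):
    act, wait = expected_costs(p, cost_mitigation=1.0, cost_catastrophe=100.0)
    print(f"P(consensus right) = {p}: act = {act}, do nothing = {wait}")
```

As long as the catastrophe dwarfs the cost of mitigation, acting wins even for quite small probabilities – which is the whole point of the wager.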

The Next Three Weeks

Posted in Biographical, The Universe and Stuff with tags , , , on April 11, 2010 by telescoper

Busy day today, getting ready for tomorrow’s return to teaching. The year’s second semester is always a strangely fragmented affair because of the Easter hiatus. We teach for eight weeks from late January until late March, have three weeks off for Easter, and then return to teach another three weeks before a brief revision period and then the examinations. It’s an awkward business, that gap. There’s also quite a danger of missing lectures later on, if you happen to be teaching on a Monday, owing to the Spring Bank Holiday. I lose a lecture that way for my first-year module Astrophysical Concepts, although it’s only in revision week so I’m not going to be struggling for time. I hope.

I’ve organized my first year lectures (if “organized” is the right word!) in four sections and managed to make sure I finished three of them, representing areas covered by three of the four questions in the forthcoming examination, before the break. Now I just have half-a-dozen  lectures on cosmology to get through, so this bit should be reasonably self-contained and it won’t matter too much if the students have forgotten the other three parts I did before Easter.

I’ve also got my third-year particle physics lectures to finish off in this period, so it’s going to be quite a busy three weeks. Still, I’ll have plenty to distract me from the General Election campaign which will cover pretty much the same period. Polling day is May 6th, and my last (revision) lecture will be on May 7th.

Another curiosity about Cardiff’s calendar is that we only get three weeks for Easter. I seem to remember it’s usually been four weeks in the other places I’ve worked. One of the downsides of this is that we’re back to term-time while the annual National Astronomy Meeting is going on. This moves around from year to year, and this time is in the splendid city of Glasgow. I’d like to have gone, and would have done if I hadn’t had so much teaching concentrated in this period. Regrettably I’ll have to give it a miss this year.

Anyway, I was getting my notes together this afternoon, sitting in the April sunshine among the new flowers and listening to the birds singing. Completely by accident I came across this little quote from Johannes Kepler, translated from the Mysterium Cosmographicum, which I thought I’d share with you…

We do not ask for what useful purpose the birds do sing, for song is their pleasure since they were created for singing.  Similarly, we ought not to ask why the human mind troubles to fathom the secrets of the heavens…  The diversity of the phenomena of Nature is so great, and the treasures hidden in the heavens so rich, precisely in order that the human mind shall never be lacking in fresh nourishment.

Dark Horizons

Posted in Cosmic Anomalies, The Universe and Stuff with tags , , , , , , on March 21, 2010 by telescoper

Last Tuesday night I gave a public lecture as part of  Cardiff University’s contribution to National Science and Engineering Week. I had an audience of about a hundred people, although more than half were students from the School of Physics & Astronomy rather than members of the public. I’d had a very full day already by the time it began (at 7pm) and I don’t mind admitting I was pretty exhausted even before I started the talk. I’m offering that as an excuse for struggling to get going, although I think I got better as I got into it. Anyway, I trotted out the usual stuff about the  Cosmic Web and it seemed to go down fairly well, although I don’t know about that because I wasn’t really paying attention.

At the end of the lecture, as usual, there was a bit of time for questions and no shortage of hands went up. One referred to something called Dark Flow which, I’ve just noticed, has actually got its own wikipedia page. It was also the subject of a recent Horizon documentary on the BBC called Is Everything we Know about the Universe Wrong? I have to say I thought the programme was truly terrible, but that’s par for the course for Horizon these days, I’m afraid. It used to be quite an interesting and informative series, but now it’s full of pointless special effects and portentous, sensationalising narration, and is repetitive to the point of torture. In this case it also portrayed a very distorted view of its subject matter.

The Dark Flow is indeed quite interesting, but of all the things that might threaten the foundations of the Big Bang theory this is definitely not it. I certainly have never lost any sleep worrying about it. If it’s real and not just the result of a systematic error in the data – and that’s a very big “if” – then the worst it would do would be to tell us that the Universe was a bit more complicated than our standard model. The same is true of the other cosmic anomalies I discuss from time to time on here.  

But we know our standard model leaves many questions unanswered and, as a matter of fact, many questions unasked. The fact that Nature may present us with a few surprises doesn’t mean the whole framework is wrong. It could be wrong, of course. In fact I’d be very surprised if our standard view of cosmology survives the next few decades without major revision. A healthy dose of skepticism is good for cosmology. To some extent, therefore, it’s good to have oddities like the Dark Flow out in the open.

However, that shouldn’t divert our attention from the fact that the Big Bang model isn’t just an arbitrary hypothesis with no justification. It’s the result of almost a century of  vigorous interplay between theory and observation, using an old-fashioned thing called the scientific method. That’s probably too dull for the producers of  Horizon, who would rather portray it as a kind of battle of wills between individuals competing for the title of next Einstein.

Anyway, just to emphasize the fact that I think questioning the Big Bang model is a good thing to do, here is a list of questions that should trouble modern cosmologists. All of them are fundamental, and we do not have answers to any of them.

Is General Relativity right?

Virtually everything in the standard model depends on the validity of Einstein’s general theory of relativity (or theory of general relativity…). In a sense we already know that the answer to this question is “no”.

At sufficiently high energies (near the Planck scale) we expect classical relativity to be replaced by a quantum theory of gravity. For this reason, a great deal of interest is being directed at cosmological models inspired by superstring theory. These models require the existence of extra dimensions beyond the four we are used to dealing with. This is not in itself a new idea, as it dates back to the work of Kaluza and Klein in the 1920s, but in older versions of the idea the extra dimensions were assumed to be wrapped up so small as to be invisible. In “braneworld models”, the extra dimensions can be large but we are confined to a four-dimensional subset of them (a “brane”). In one version of this idea, dubbed the Ekpyrotic Universe, the origin of our observable universe lies in the collision between two branes in a higher-dimensional “bulk”. Other models are less dramatic, but do result in the modification of the Friedmann equations at early times.

It is not just in the early Universe that departures from general relativity are possible. In fact there are many alternative theories on the market: some based on modifications of Newton’s gravitational mechanics, such as MOND; some on modifications of Einstein’s theory, such as the Brans-Dicke theory; and others involving extra dimensions, such as the braneworld models mentioned above.

There remain very few independent tests of the validity of Einstein’s theory, particularly in the limit of strong gravitational fields. There is very little independent evidence that the curvature of space-time on cosmological scales is related to the energy density of matter, yet the chain of reasoning leading to the cosmic concordance model depends entirely on this assumption. Throw it away and we have very little to go on.
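
For the record, the assumption in question is encoded in the Friedmann equation, which I quote here in its standard textbook form (it didn’t appear explicitly above):

\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}},

in which the expansion rate \dot{a}/a and the spatial curvature (through k) are tied directly to the energy density \rho. It is this link between curvature and density that has so little independent corroboration on cosmological scales.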

What is the Dark Energy?

In the standard cosmology, about 75% of the energy density of the Universe is in a form we do not understand. Because we’re in the dark about it, we call it Dark Energy. The question here is twofold. One part is whether the dark energy takes the form of an evolving scalar field, such as quintessence, or whether it really is constant, as in Einstein’s original cosmological constant. This may be answered by planned observational studies, but these are at the mercy of funding decisions. The second part is whether dark energy can be understood in terms of fundamental theory, i.e. in understanding why “empty space” contains this vacuum energy. I think it is safe to say we are still very far from knowing how vacuum energy on a cosmological scale arises from fundamental physics. At present it’s just a free parameter.
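
To make the first part of the question concrete: dark energy is usually parametrized by its equation of state w, defined via p = w\rho c^{2}. The standard textbook relation (again, my addition rather than something from the discussion above) is

\rho \propto a^{-3(1+w)},

so a true cosmological constant (w = -1) has constant energy density as the Universe expands, while quintessence corresponds to an evolving w, and hence an evolving \rho. Measuring w and its evolution is precisely what the planned observational studies would do.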


What is the Dark Matter?

Around 25% of the mass in the Universe is thought to be in the form of dark matter, but we don’t know what form it takes. We do have some information about this, because the nature of the dark matter determines how it tends to clump together under the action of gravity. Current understanding of how galaxies form, by condensing out of the primordial explosion, suggests the dark matter particles should be relatively massive. This means that they should move relatively slowly and can consequently be described as “cold”. As far as gravity is concerned, one cold particle is much the same as another so there is no prospect for learning about the nature of cold dark matter (CDM) particles through astronomical means unless they decay into radiation or some other identifiable particles. Experimental attempts to detect the dark matter directly are pushing back the limits of technology, but it would have to be a long shot for them to succeed when we have so little idea of what we are looking for.

Did Inflation really happen?

The success of concordance cosmology is largely founded on the appearance of “Doppler peaks” in the fluctuation spectrum of the cosmic microwave background (CMB). These arise from acoustic oscillations in the primordial plasma that have particular statistical properties consistent with their origin as quantum fluctuations in the scalar field driving a short-lived period of rapid expansion called inflation. This is strong circumstantial evidence in favour of inflation, but perhaps not strong enough to obtain a conviction. The smoking gun for inflation is probably the existence of a stochastic gravitational-wave background. The identification and extraction of this may be possible using future polarisation-sensitive CMB studies, even before direct experimental probes of sufficient sensitivity become available. As far as I am concerned, the jury will be out for a considerable time.

Despite these gaps and uncertainties, the ability of the standard framework to account for such a diversity of challenging phenomena provides strong motivation for assigning it a higher probability than its competitors. Part of the reason is that no other theory has been developed to the point where we know what predictions it can make. Some of the alternative ideas I discussed above are new, and consequently we do not really understand them well enough to know what they say about observable situations. Others have adjustable parameters, so one tends to disfavour them on grounds of Ockham’s razor unless and until some observation is made that can’t be explained in the standard framework.

Alternative ideas should always be explored. The business of cosmology, however, is not only theory creation but also theory testing. The great virtue of the standard model is that it allows us to make precise predictions about the behaviour of the Universe and plan observations that can test these predictions. One needs a working hypothesis to target the multi-million-pound investment that is needed to carry out such programmes. By assuming this model we can make rational decisions about how to proceed. Without it we would be wasting taxpayers’ money on futile experiments that have very little chance of improving our understanding. Reasoned belief in a plausible working hypothesis is essential to the advancement of our knowledge.

 Cosmologists may appear a bit crazy (especially when they appear on TV), but there is method in their madness. Sometimes.

Planck and the Cold Galaxy

Posted in The Universe and Stuff with tags , , , , , , on March 17, 2010 by telescoper

Just a quick post to show a cool result from Planck which has just been released by the European Space Agency (ESA). It will be a while before any real cosmological results are available, but in the meantime here are a couple of glimpses into the stuff we cosmologists think of as foreground contamination but which are of course of great interest in themselves to other kinds of astronomers.

The beautiful image above (courtesy of ESA and the HFI Consortium) covers a portion of the sky about 55 degrees across. It is a three-colour combination constructed from Planck’s two shortest wavelength channels (540 and 350 micrometres, corresponding to frequencies of 545 and 857 GHz respectively), and an image at 100 micrometres obtained with the Infrared Astronomical Satellite (IRAS). This combination effectively traces the dust temperature: reddish tones correspond to temperatures as cold as 12 degrees above absolute zero, and whitish tones to significantly warmer ones (a few tens of degrees above absolute zero) in regions where massive stars are currently forming. Overall, the image shows local dust structures within 500 light years of the Sun.
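
If you want to check the correspondence between the quoted wavelengths and frequencies, the conversion is just \nu = c/\lambda. Here’s a quick check (mine, not ESA’s; the quoted values are nominal band labels, so the conversion is only approximate):

```python
# Sanity check on the quoted wavelength/frequency pairs.
c = 2.998e8  # speed of light in m/s

for wavelength_um in (540, 350, 100):
    frequency_GHz = c / (wavelength_um * 1e-6) / 1e9
    print(f"{wavelength_um} micrometres -> {frequency_GHz:.0f} GHz")
```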

Our top man in the HFI Consortium,  Professor Peter Ade, is quoted as saying

..the HFI is living up to our most optimistic pre-flight expectations.  The wealth of the data is seen in these beautiful multicolour images exposing previously unseen detail in the cold dust components of our galaxy.  There is much to be learned from detailed interpretation of the data which will significantly enhance our understanding of the star formation processes and galactic morphology.

This Planck image was obtained during the first Planck all-sky survey, which began in mid-August 2009. By mid-March 2010 more than 98% of the sky had been observed by Planck. Because of the way Planck scans the sky, 100% sky coverage for the first survey will not be achieved until late May 2010.

Other new results and a more detailed discussion of this one can be found here and here.

Cosmic Vision

Posted in Science Politics, The Universe and Stuff with tags , , , , , , , , , on February 20, 2010 by telescoper

It’s nice to have a bit of science stuff to blog about for a change. Just this week the European Space Agency (ESA) has announced the results of its recent selection process for part of its Cosmic Vision programme, which represents ESA’s scientific activity for the period 2015-2025.

The selection process actually began in 2007, with over 50 proposals. This list was then whittled down so that there were six candidate missions under consideration for the so-called M-class launch slots (M meaning medium-sized), and three in the L-class list of larger missions. The latest exercise was to select three of the M-class missions for further study. They succeeded in selecting three, but have also kept another, much cheaper, mission in the frame.

As far as I understand it, only two M-class missions are actually envisaged so the race isn’t over yet, but the missions still in the running are:

PLATO. The PLATO mission is planned to study planets around other stars. This would include terrestrial planets in a star’s habitable zone, so-called Earth analogues. In addition, PLATO would probe stellar interiors through stellar seismology. In some sense, this mission is the descendant of a previous proposal called Eddington. (PLATO stands for PLAnetary Transits and Oscillations of stars – I’ll give it 3/10 for quality of acronym.)

EUCLID. Euclid would address key questions relevant to fundamental physics and cosmology, namely the nature of the mysterious dark energy and dark matter. Astronomers are now convinced that these substances dominate over ordinary matter. Euclid would map the distribution of galaxies to reveal the underlying ‘dark’ architecture of the Universe. I don’t think this is meant to be an acronym, but I could be wrong. Perhaps it’s European Union Cosmologists Lost in Darkness?

SOLAR ORBITER. Disappointingly, this is neither an acronym nor a Greek person. It would take the closest look at our Sun yet possible, approaching to within just 62 solar radii. It would deliver images and data that include views of the Sun’s polar regions and of the solar far side when it is not visible from Earth.

These are the three main nominations, but the panel also decided to endorse another mission, SPICA, because it is much cheaper than the approximately 500 Million Euro price tag on the other contenders. SPICA would be an infrared space telescope led by the Japanese Space Agency JAXA. It would provide ‘missing-link’ infrared coverage in the region of the spectrum between that seen by the ESA-NASA Webb telescope and the ground-based ALMA telescope. SPICA would focus on the conditions for planet formation and distant young galaxies.

Many of Cardiff’s astronomers will be very happy if SPICA does end up being selected as it is the one most directly related to their interests and also their experience with Herschel which is, incidentally,  continuing to produce fantastic quality data. If SPICA is to happen, however, extra money will have to be found and that, in the current financial climate, is far from guaranteed.

Which of these missions will get selected in the end is impossible to say at this stage. There are dark mutterings going on about how realistic the price tags put on some of the contenders really are. Based on past experience, cost overruns on space missions are far from unlikely, and when they happen they can do a great deal of damage to budgets. Let’s hope the technical studies do their job and put realistic figures on the missions so the final selection will be fair.

Whatever missions fly in the end, I also hope that the Science and Technology Facilities Council (STFC) – or whatever replaces it – remembers that these are science missions, and that its responsibility extends beyond the building of instruments to fly on them. Let’s hope we can count on their support for research grants enabling us to answer the science questions the missions were designed to address.

Colour in Fourier Space

Posted in The Universe and Stuff with tags , , , , , on February 9, 2010 by telescoper

As I threatened, er, promised after Anton’s interesting essay on the perception of colour a couple of days ago, I thought I’d write a quick item about something vaguely relevant that relates to some of my own research. In fact, this ended up as a little paper in Nature written by myself and Lung-Yih Chiang, a former student of mine who’s now based in his homeland of Taiwan.

This is going to be a bit more technical than my usual stuff, but it also relates to a post I did some time ago concerning the cosmic microwave background and to the general idea of the cosmic web, which has also featured in a previous item. You may find it useful to read these contributions first if you’re not au fait with cosmological jargon.

Or you may want to ignore it altogether and come back when I’ve found another look-alike…

The large-scale structure of the Universe – the vast chains of galaxies that spread out over hundreds of millions of light-years and interconnect in a complex network (called the cosmic web) – is thought to have its origin in small fluctuations generated in the early universe by quantum-mechanical effects during a bout of cosmic inflation.

These fluctuations in the density of an otherwise homogeneous universe are usually expressed in dimensionless form via the density contrast, defined as \delta({\bf x})=(\rho({\bf x})-\bar{\rho})/\bar{\rho}, where \bar{\rho} is the mean density. Because it’s what physicists always do when they can’t think of anything better, we take the Fourier transform of this and write it as \tilde{\delta}({\bf k}), which is a complex function of the wavevector {\bf k}, and can therefore be written

\tilde{\delta}({\bf k})=A({\bf k}) \exp [i\Phi({\bf k})],

where A is the amplitude and \Phi is the phase belonging to the wavevector {\bf k}; the phase is an angle between zero and 2\pi radians.

This is a particularly useful thing to do because the simplest versions of inflation predict that the phases of each of the Fourier modes should be randomly distributed. Each is independent of the others and is essentially a random angle designating any point on the unit circle. What this really means is that there is no information content in their distribution, so that the harmonic components are in a state of maximum statistical disorder or entropy. This property also guarantees that fluctuations from place to place have a Gaussian distribution, because the density contrast at any point is formed from a superposition of a large number of independent plane-wave modes  to which the central limit theorem applies.
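
Here’s a minimal numpy sketch of this property (my own illustration: I use flat amplitudes rather than a realistic power spectrum, which doesn’t affect the statistical point). Superposing plane waves with independent uniform random phases produces a field whose one-point distribution is, as advertised, very nearly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-phase harmonic components: unit amplitudes, phases drawn
# uniformly from [0, 2*pi). A real simulation would impose a power
# spectrum on the amplitudes A(k); flat amplitudes keep things simple.
n = 256
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
modes = np.exp(1j * phases)

# The inverse FFT superposes the plane-wave modes; taking the real
# part gives a sum of cosines with independent random phases, so the
# central limit theorem makes the field very nearly Gaussian.
field = np.fft.ifft2(modes).real
field /= field.std()

# For a Gaussian field: skewness ~ 0 and kurtosis ~ 3.
print("skewness:", (field**3).mean())
print("kurtosis:", (field**4).mean())
```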

However, this just describes the initial configuration of the density contrast as laid down very early in the Big Bang. As the Universe expands, gravity acts on these fluctuations and alters their properties. Regions with above-average initial density (\delta > 0) attract material from their surroundings and get denser still. They then attract more material, and get denser. This is an unstable process that eventually ends up producing enormous concentrations of matter (\delta \gg 1) in some locations and huge empty voids everywhere else.

This process of gravitational instability has been studied extensively in a variety of astrophysical settings. There are basically two regimes: the linear regime, covering the early stages when \delta \ll 1, and the non-linear regime, when large contrasts begin to form. The early stage is pretty well understood; the latter isn’t. Although many approximate analytical methods have been invented which capture certain aspects of the non-linear behaviour, generally speaking we have to run N-body simulations that calculate everything numerically by brute force to get anywhere.

The difference between linear and non-linear regimes is directly reflected in the Fourier-space behaviour. In the linear regime, each Fourier mode evolves independently of the others so the initial statistical form is preserved. In the non-linear regime, however, modes couple together and the initial Gaussian distribution begins to distort.

About a decade ago, Lung-Yih and I started to think about whether one might start to understand the non-linear regime a bit better by looking at the phases of the Fourier modes, an aspect of the behaviour that had been largely neglected until then. Our point was that mode-coupling effects must surely generate phase correlations that were absent in the initial random-phase configuration.

In order to explore the phase distribution we hit upon the idea of representing the phase of each Fourier mode using a colour model. Anton’s essay discussed the RGB (red-green-blue) parametrization of colour used on computer screens, as well as the CMY (cyan-magenta-yellow) system preferred for high-quality printing.

However, there are other systems that use parameters different to those representing basic tones in these schemes. In particular, there are colour models that involve a parameter called the hue, which represents the position of a particular colour on the colour wheel shown left. In terms of the usual RGB framework you can see that red has a hue of zero, green is 120 degrees, and blue is 240. The complementary colours cyan, magenta and yellow lie 180 degrees opposite their RGB counterparts.

This representation is handy because it can be employed in a scheme that uses colour to represent Fourier phase information. Our idea was simple. The phases of the initial conditions should be random, so in this representation the Fourier transform should just look like a random jumble of colours with equal amounts of, say, red green and blue. As non-linear mode coupling takes hold of the distribution, however, a pattern should emerge in the phases in a manner which is characteristic of gravitational instability.
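
Here’s a sketch of the idea in code (my reconstruction, in Python, of the scheme described above – not the code behind the paper itself):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def phases_as_colours(delta):
    # Map each Fourier phase to a hue on the colour wheel:
    # 0 = red, 2*pi/3 = green, 4*pi/3 = blue.
    modes = np.fft.fft2(delta)
    phases = np.angle(modes) % (2.0 * np.pi)   # angles in [0, 2*pi)
    hue = phases / (2.0 * np.pi)               # hue in [0, 1)
    hsv = np.stack([hue, np.ones_like(hue), np.ones_like(hue)], axis=-1)
    return hsv_to_rgb(hsv)                     # convert for an RGB screen

# For a Gaussian random-phase field the result is a featureless
# jumble with equal amounts of red, green and blue.
rng = np.random.default_rng(0)
rgb = phases_as_colours(rng.standard_normal((128, 128)))
plt.imshow(rgb)
plt.title("Fourier phases represented as hue")
plt.show()
```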

I won’t go too much further into the details here, but I will show a picture that proves that it works!

What you see here are four columns. The leftmost shows (from top to bottom) the evolution of a two-dimensional simulation of gravitational clustering. You can see the structure develops hierarchically, with an increasing characteristic scale of structure as time goes on.

The second column shows a time sequence of (part of) the Fourier transform of the distribution seen in the first; for the aficionados I should say that this is only one quadrant of the transform and that the rest is omitted for reasons of symmetry. Amplitude information is omitted here and the phase at each position is represented by an appropriate hue. To represent it on this screen, however, we had to convert back to the RGB system.

The pattern is hard to see on this low-resolution plot, but two facts are noticeable. One is that a definite texture emerges, a bit like Harris Tweed, which gets stronger as the clustering develops. The other is that the relative amounts of red, green and blue do not change down the column.

The reason for the second property is that although clustering develops and the distribution of density fluctuations becomes non-Gaussian, the distribution of phases remains uniform in the sense that binning the phases of the entire Fourier transform would give a flat histogram. This is a consequence of the fact that the statistical properties of the fluctuations remain invariant under spatial translations even when they are non-linear.

Although the one-point distribution of phases stays uniform even into the strongly non-linear regime, the phases do start to learn about each other, i.e. phase correlations emerge. Columns 3 and 4 illustrate this in the simplest possible way: instead of plotting the phases of each wavemode, we plot the differences between the phases of neighbouring modes in the x and y directions respectively.

If the phases are random then the phase differences are also random. In the initial state, therefore, columns 3 and 4 look just like column 2. However, as time goes on you should be able to see the emergence of a preferred colour in both columns, showing that the distribution of phase differences is no longer random.
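
In the same spirit as the sketch above (again my own illustration, not the original analysis), the phase differences are easy to compute:

```python
import numpy as np

def phase_differences(delta):
    # Differences between the phases of neighbouring Fourier modes,
    # taken along the x and y axes of the transform and wrapped
    # back into [0, 2*pi).
    phases = np.angle(np.fft.fft2(delta))
    d_x = np.diff(phases, axis=1) % (2.0 * np.pi)
    d_y = np.diff(phases, axis=0) % (2.0 * np.pi)
    return d_x, d_y

# For random phases the histogram of differences is flat; phase
# correlations from non-linear clustering show up as an excess at
# some preferred difference -- a preferred hue in the colour maps.
rng = np.random.default_rng(2)
d_x, d_y = phase_differences(rng.standard_normal((128, 128)))
print(np.histogram(d_x, bins=8, range=(0.0, 2.0 * np.pi))[0])
```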

The hard work is to describe what’s going on mathematically. I’ll spare you the details of that! But I hope I’ve at least made the point that this is a useful way of demonstrating that phase correlations exist and of visualizing some of their properties.

It’s also – I think – quite a lot of fun!

P.S. If you’re interested in the original paper, you will find it in Nature, Vol. 406 (27 July 2000), pp. 376-8.

The Seven Year Itch

Posted in Bad Statistics, Cosmic Anomalies, The Universe and Stuff with tags , , , on January 27, 2010 by telescoper

I was just thinking last night that it’s been a while since I posted anything in the file marked cosmic anomalies, and this morning I woke up to find a blizzard of papers on the arXiv from the Wilkinson Microwave Anisotropy Probe (WMAP) team. These relate to an analysis of the latest data accumulated now over seven years of operation; a full list of the papers is given here.

I haven’t had time to read all of them yet, but I thought it was worth drawing attention to the particular one that relates to the issue of cosmic anomalies. I’ve taken the liberty of including the abstract here:

A simple six-parameter LCDM model provides a successful fit to WMAP data, both when the data are analyzed alone and in combination with other cosmological data. Even so, it is appropriate to search for any hints of deviations from the now standard model of cosmology, which includes inflation, dark energy, dark matter, baryons, and neutrinos. The cosmological community has subjected the WMAP data to extensive and varied analyses. While there is widespread agreement as to the overall success of the six-parameter LCDM model, various “anomalies” have been reported relative to that model. In this paper we examine potential anomalies and present analyses and assessments of their significance. In most cases we find that claimed anomalies depend on posterior selection of some aspect or subset of the data. Compared with sky simulations based on the best fit model, one can select for low probability features of the WMAP data. Low probability features are expected, but it is not usually straightforward to determine whether any particular low probability feature is the result of the a posteriori selection or of non-standard cosmology. We examine in detail the properties of the power spectrum with respect to the LCDM model. We examine several potential or previously claimed anomalies in the sky maps and power spectra, including cold spots, low quadrupole power, quadrupole-octupole alignment, hemispherical or dipole power asymmetry, and quadrupole power asymmetry. We conclude that there is no compelling evidence for deviations from the LCDM model, which is generally an acceptable statistical fit to WMAP and other cosmological data.

Since I’m one of those annoying people who have been sniffing around the WMAP data for signs of departures from the standard model, I thought I’d comment on this issue.

As the abstract says, the  LCDM model does indeed provide a good fit to the data, and the fact that it does so with only 6 free parameters is particularly impressive. On the other hand, this modelling process involves the compression of an enormous amount of data into just six numbers. If we always filter everything through the standard model analysis pipeline then it is possible that some vital information about departures from this framework might be lost. My point has always been that every now and again it is worth looking in the wastebasket to see if there’s any evidence that something interesting might have been discarded.

Various potential anomalies – mentioned in the above abstract – have been identified in this way, but usually there has turned out to be less to them than meets the eye. There are two reasons not to get too carried away.

The first reason is that no experiment – not even one as brilliant as WMAP – is entirely free from systematic artefacts. Before we get too excited and start abandoning our standard model for more exotic cosmologies, we need to be absolutely sure that we’re not just seeing residual foregrounds, instrument errors, beam asymmetries or some other effect that isn’t anything to do with cosmology. Because it has performed so well, WMAP has been able to do much more science than was originally envisaged, but every experiment is ultimately limited by its own systematics and WMAP is no different. There is some (circumstantial) evidence that some of the reported anomalies may be at least partly accounted for by  glitches of this sort.

The second point relates to basic statistical theory. Generally speaking, an anomaly A (some property of the data) is flagged as such because it is deemed to be improbable given a model M (in this case the LCDM). In other words the conditional probability P(A|M) is a small number. As I’ve repeatedly ranted about in my bad statistics posts, this does not necessarily mean that P(M|A)- the probability of the model being right – is small. If you look at 1000 different properties of the data, you have a good chance of finding something that happens with a probability of 1 in a thousand. This is what the abstract means by a posteriori reasoning: it’s not the same as talking out of your posterior, but is sometimes close to it.
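
The arithmetic behind that warning is worth a line of code (a toy calculation, assuming for simplicity that the properties examined are independent):

```python
# Look at enough features and a nominally 1-in-1000 "anomaly"
# becomes more likely than not to turn up somewhere.
p_single = 1.0e-3    # significance level of any one feature
n_features = 1000    # number of independent properties examined

p_at_least_one = 1.0 - (1.0 - p_single) ** n_features
print(f"P(at least one 'anomaly') = {p_at_least_one:.2f}")  # about 0.63
```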

In order to decide how seriously to take an anomaly, you need to work out P(M|A), the probability of the model given the anomaly, which requires that you not only take into account all the other properties of the data that are explained by the model (i.e. those that aren’t anomalous), but also specify an alternative model that explains the anomaly better than the standard model. If you can do this without introducing too many free parameters, then it may be taken as compelling evidence for an alternative model. No such model exists – at least for the time being – so the message of the paper is rightly skeptical.

So, to summarize, I think what the WMAP team say is basically sensible, although I maintain that rummaging around in the trash is a good thing to do. Models are there to be tested, and surely the best way to test them is to focus on things that look odd rather than simply congratulating oneself about the things that fit? It is extremely impressive that such intense scrutiny over the last seven years has revealed so few oddities, but that just means that we should look even harder…

Before too long, data from Planck will provide an even sterner test of the standard framework. We really do need an independent experiment to see whether there is something out there that WMAP might have missed. But we’ll have to wait a few years for that.

So far it’s WMAP 7 Planck 0, but there’s plenty of time for an upset. Unless they close us all down.