Yesterday, being the second Friday of the month, was the day for the Ordinary Meeting of the Royal Astronomical Society (followed by dinner at the Athenaeum for members of the RAS Club). Living and working in Cardiff, it's difficult for me to get to the specialist RAS meetings earlier in the day, but if I get myself sufficiently organized I can usually get to Burlington House in time for the 4pm start of the Ordinary Meeting, which is open to the public.
The distressing news we learnt on Thursday about the events of Wednesday night cast a shadow over the proceedings. Given that I was going to dinner afterwards, for which a jacket and tie are obligatory, I went through my collection of (rarely worn) ties, and decided that a black one would be appropriate. When I arrived at Burlington House I was just in time to hear a warm tribute paid by a clearly upset Professor Roger Davies, President of the RAS and Oxford colleague of the late Steve Rawlings. There then followed a minute’s silence in his memory.
The principal reaction to this news amongst the astronomers present was one of disbelief and/or incomprehension. Some friends and colleagues of Steve clearly knew much more about what had happened than has so far appeared in the press, but I don't think it's appropriate for me to make those details public at this stage. We will know the facts soon enough. A colleague also pointed out to me that Steve had spent most of his recent working life as a central figure in the project to build the Square Kilometre Array, which will be the world's largest radio telescope. He died just a matter of days before the announcement of where the SKA will actually be built. It's sobering to think that one can spend so many years working on a project, only for something wholly unforeseen to prevent one seeing it through to completion.
Anyway, the meeting included an interesting talk by Tom Kitching of the University of Edinburgh about recent results from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). The same project was the subject of a press release because the results were presented earlier in the week at the American Astronomical Society meeting in Austin, Texas. I haven't got time to go into the technicalities of this study – which exploits the phenomenon of weak gravitational lensing to reconstruct the distribution of unseen (dark) matter in the Universe through its gravitational effect on light from background sources – but Tom Kitching actually contributed a guest post to this blog some time ago which will give you some background.
In the talk he presented one of the first dark matter maps obtained from this survey, in which the bright colours represent regions of high dark matter density:
Getting maps like this is no easy process, so this is mightily impressive work, but what struck me is that it doesn’t look very filamentary. In other words, the dark matter appears to reside predominantly in isolated blobs with not much hint of the complicated network of filaments we call the Cosmic Web. That’s a very subjective judgement, of course, and it will be necessary to study the properties of maps like this in considerable detail in order to see whether they really match the predictions of cosmological theory.
After the meeting, and a glass of wine in Burlington House, I toddled off to the Athenaeum for an extremely nice dinner. It being the Parish meeting of the RAS Club, afterwards we went through a number of items of Club business, including the election of four new members.
Life goes on, as does astronomy, even in darkness.
Following hard on the heels of the announcement of a Nobel Prize for cosmology earlier this morning, the European Space Agency has this afternoon officially announced the two candidates which have been chosen for its next M-class missions from a shortlist of three.
One of the successful candidates, EUCLID, is directly relevant to the topic covered by the Nobel Prize announced this morning. “Euclid will address key questions relevant to fundamental physics and cosmology, namely the nature of the mysterious dark energy and dark matter. Astronomers are now convinced that these substances dominate ordinary matter. Euclid would map the distribution of galaxies to reveal the underlying ‘dark’ architecture of the Universe.”
Now that it's definitely been selected, I hope to devote time in due course to a longer post about EUCLID's capabilities and intentions, but in the meantime I'll just say that it's been a very good day for Dark Energy.
P.S. The other successful candidate is called Solar Orbiter. Commiserations to advocates of the third mission on the shortlist of three, PLATO. Close, but no cigar…
I've taken the liberty of copying the following text from the press release on the Nobel Foundation website:
In 1998, cosmology was shaken at its foundations as two research teams presented their findings. Headed by Saul Perlmutter, one of the teams had set to work in 1988. Brian Schmidt headed another team, launched at the end of 1994, where Adam Riess was to play a crucial role.
The research teams raced to map the Universe by locating the most distant supernovae. More sophisticated telescopes on the ground and in space, as well as more powerful computers and new digital imaging sensors (CCD, Nobel Prize in Physics in 2009), opened the possibility in the 1990s to add more pieces to the cosmological puzzle.
The teams used a particular kind of supernova, called type Ia supernova. It is an explosion of an old compact star that is as heavy as the Sun but as small as the Earth. A single such supernova can emit as much light as a whole galaxy. All in all, the two research teams found over 50 distant supernovae whose light was weaker than expected – this was a sign that the expansion of the Universe was accelerating. The potential pitfalls had been numerous, and the scientists found reassurance in the fact that both groups had reached the same astonishing conclusion.
For almost a century, the Universe has been known to be expanding as a consequence of the Big Bang about 14 billion years ago. However, the discovery that this expansion is accelerating is astounding. If the expansion will continue to speed up the Universe will end in ice.
The acceleration is thought to be driven by dark energy, but what that dark energy is remains an enigma – perhaps the greatest in physics today. What is known is that dark energy constitutes about three quarters of the Universe. Therefore the findings of the 2011 Nobel Laureates in Physics have helped to unveil a Universe that to a large extent is unknown to science. And everything is possible again.
I’m definitely among the skeptics when it comes to the standard interpretation of the supernova measurements, and more recent complementary data, in terms of dark energy. However this doesn’t diminish in any way my delight that these three scientists have been rewarded for their sterling observational efforts. The two groups involved in the Supernova Cosmology Project on the one hand, and the High Z Supernova Search, on the other, are both supreme examples of excellence in observational astronomy, taking on and overcoming what were previously thought to be insurmountable observational challenges. This award has been in the air for a few years now, and I’m delighted for all three scientists that their time has come at last. To my mind their discovery is all the more exciting because nobody really knows precisely what it is that they have discovered!
I know that Brian Schmidt is an occasional reader of and commenter on this blog. I suspect he might be a little too busy right now with the rest of the world's media to read this, let alone comment on here, but that won't stop me congratulating him and the other winners on their achievement. I'm sure they'll enjoy their visit to Stockholm!
Meanwhile the rest of us can bask in their reflected glory. There’s also been a huge amount of press interest in this announcement which has kept my phone ringing this morning. It’s only been five years since a Nobel Prize in physics went to cosmology, which says something for how exciting a field this is to work in!
UPDATE: There’s an interesting collection of quotes and reactions on the Guardian website, updated live.
I’m very pressed for time this week so I thought I’d cheat by resurrecting and updating an old post from way back when I had just started blogging, about three years ago. I thought of doing this because I just came across a Youtube clip of the late great Alfred Hitchcock, which you’ll now find in the post. I’ve also made a couple of minor editorial changes, but basically it’s a recycled piece and you should therefore read it for environmental reasons.
–0–
Unpick the plot of any thriller or suspense movie and the chances are that somewhere within it you will find lurking at least one MacGuffin. This might be a tangible thing, such as the eponymous sculpture of a Falcon in the archetypal noir classic The Maltese Falcon, or it may be rather nebulous, like the "top secret plans" in Hitchcock's The Thirty-Nine Steps. Its true character may never be fully revealed, as in the case of the glowing contents of the briefcase in Pulp Fiction, which is a classic example of the "undisclosed object" type of MacGuffin. Or it may be scarily obvious, like a doomsday machine or some other "Big Dumb Object" you might find in a science fiction thriller. It may not even be a real thing at all: it could be an event or an idea, or even something that doesn't exist in any real sense, such as the fictitious decoy character George Kaplan in North by Northwest.
Whatever it is or is not, the MacGuffin is responsible for kick-starting the plot. It makes the characters embark upon the course of action they take as the tale begins to unfold. This plot device was particularly beloved by Alfred Hitchcock (who was responsible for introducing the word to the film industry). Hitchcock was, however, always at pains to ensure that the MacGuffin never played as important a role in the mind of the audience as it did for the protagonists. As the plot twists and turns – as it usually does in such films – and its own momentum carries the story forward, the importance of the MacGuffin tends to fade, and by the end we have often forgotten all about it. Hitchcock's movies rarely bother to explain their MacGuffin(s) in much detail and they often confuse the issue even further by mixing genuine MacGuffins with mere red herrings.
Here is the man himself explaining the concept at the beginning of this clip. (The rest of the interview is also enjoyable, covering such diverse topics as laxatives, ravens and nudity…)
North by Northwest is a fine example of a multi-MacGuffin movie. The centre of its convoluted plot involves espionage and the smuggling of what is only cursorily described as "government secrets". But although this is behind the whole story, it is the emerging romance, accidental betrayal and frantic rescue involving the lead characters played by Cary Grant and Eva Marie Saint that really engage the characters and the audience as the film gathers pace. The MacGuffin is a trigger, but it soon fades into the background as other factors take over.
There's nothing particularly new about the idea of a MacGuffin. I suppose the ultimate example is the Holy Grail in the tales of King Arthur and the Knights of the Round Table and, much more recently, the Da Vinci Code. The original Grail itself is basically a peg on which to hang a series of otherwise disconnected stories. It is barely mentioned once each individual story has started and, of course, is never found.
Physicists are fond of describing things as "The Holy Grail" of their subject, such as the Higgs Boson or gravitational waves. This always seemed to me to be an unfortunate description, as the Grail quest consumed a huge amount of resources in a predictably fruitless hunt for something whose significance could be seen to be dubious at the outset. The MacGuffin Effect nevertheless continues to reveal itself in science, although in different forms to those found in Hollywood.
The Large Hadron Collider (LHC), switched on to the accompaniment of great fanfares a few years ago, provides a nice example of how the MacGuffin actually works pretty much backwards in the world of Big Science. To the public, the LHC was built to detect the Higgs Boson, a hypothetical beastie introduced to account for the masses of other particles. If it exists, the high-energy collisions engineered by the LHC should reveal its presence. The Higgs Boson is thus the LHC's own MacGuffin. Or at least it would be if it were really the reason why the LHC was built. In fact there are dozens of experiments at CERN and many of them have very different motivations from the quest for the Higgs, such as the search for evidence of supersymmetry.
Particle physicists are not daft, however, and they have realised that the public and, perhaps more importantly, government funding agencies need to have a really big hook to hang such a big bag of money on. Hence the emergence of the Higgs as a sort of master MacGuffin, concocted specifically for public consumption, which is much more effective politically than the plethora of mini-MacGuffins which, to be honest, would be a fairer description of the real state of affairs.
Even this MacGuffin has its problems, though. The Higgs mechanism is notoriously difficult to explain to the public, so some have resorted to a less specific but more misleading version: “The Big Bang”. As I’ve already griped, the LHC will never generate energies anything like the Big Bang did, so I don’t have any time for the language of the “Big Bang Machine”, even as a MacGuffin.
While particle physicists might pretend to be doing cosmology, we astrophysicists have to contend with MacGuffins of our own. One of the most important discoveries we have made about the Universe in the last decade is that its expansion seems to be accelerating. Since gravity usually tugs on things and makes them slow down, the only explanation that we’ve thought of for this perverse situation is that there is something out there in empty space that pushes rather than pulls. This has various possible names, but Dark Energy is probably the most popular, adding an appropriately noirish edge to this particular MacGuffin. It has even taken over in prominence from its much older relative, Dark Matter, although that one is still very much around.
We have very little idea what Dark Energy is, where it comes from, or how it relates to other forms of energy we are more familiar with, so observational astronomers have jumped in with various grandiose strategies to find out more about it. This has spawned a booming industry in surveys of the distant Universe (such as the Dark Energy Survey) all aimed ostensibly at unravelling the mystery of the Dark Energy. It seems that to get any funding at all for cosmology these days you have to sprinkle the phrase “Dark Energy” liberally throughout your grant applications.
The old-fashioned “observational” way of doing astronomy – by looking at things hard enough until something exciting appears (which it does with surprising regularity) – has been replaced by a more “experimental” approach, more like that of the LHC. We can no longer do deep surveys of galaxies to find out what’s out there. We have to do it “to constrain models of Dark Energy”. This is just one example of the not necessarily positive influence that particle physics has had on astronomy in recent times and it has been criticised very forcefully by Simon White.
Whatever the motivation for doing these projects now, they will undoubtedly lead to new discoveries. But my own view is that there will never be a solution of the Dark Energy problem until it is understood much better at a conceptual level, and that will probably mean major revisions of our theories of both gravity and matter. I venture to speculate that in twenty years or so people will look back on the obsession with Dark Energy with some amusement, as our theoretical language will have moved on sufficiently to make it seem irrelevant.
But that’s how it goes with MacGuffins. Even the Maltese Falcon turned out to be a fake in the end.
While I’m in reblogging mood I’ll try to send some traffic the way of this post, which is somewhat related to Friday’s one about the Wigglezeddy survey (or whatever it’s called)…
Paper Title: The Atacama Cosmology Telescope: Evidence for Dark Energy from the CMB Alone
Authors: Blake D. Sherwin et al.
1st Author's Affiliation: Dept. of Physics, Princeton University

Introduction: Continuing with Monday's theme of cosmology, today's astrobite features an ApJ Letter that describes new evidence for dark energy. In the past decade a number of cosmological tests have been developed that show a need for a cosmological constant th … Read More
I don't have much time to post today, having spent all morning in a meeting about Assuring a Quality Experience in the Graduate College and most of this afternoon reading project reports.
However, I couldn't resist a quickie just to draw your attention to a cosmology story that's made it into the mass media, e.g. BBC Science. This concerns the recent publication of a couple of papers from the WiggleZ Dark Energy Survey, which has used the Anglo-Australian Telescope. You can read a nice description of what WiggleZ (pronounced "Wiggle-Zee") is all about here, but in essence it involves making two different sorts of measurements of how galaxies cluster in order to constrain the Universe's geometry and dynamics. The first method is the "wiggle" bit, in that it depends on the imprint of baryon acoustic oscillations in the power-spectrum of galaxy clustering. The other involves analysing the peculiar motions of the galaxies by measuring the distortion of the clustering pattern seen in redshift space; redshifts are usually denoted z in cosmology, so that accounts for the "zee".
The paper describing the results from the former method can be found here, while the second technique is described there.
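For the technically minded, the two effects can be summarised by a couple of standard linear-theory results (textbook formulae, quoted from memory rather than from the WiggleZ papers themselves). The "wiggle" part relies on the baryon acoustic scale, the comoving sound horizon at the drag epoch,

$$r_s = \int_{z_d}^{\infty} \frac{c_s(z)}{H(z)}\, dz \approx 150\ \mathrm{Mpc},$$

which is imprinted on the galaxy power spectrum and acts as a standard ruler; the "zee" part uses the linear redshift-space distortion described by the Kaiser formula,

$$P_s(k,\mu) = \left(b + f\mu^2\right)^2 P_m(k), \qquad f \simeq \Omega_m(z)^{0.55},$$

in which $b$ is the galaxy bias, $f$ the linear growth rate of structure and $\mu$ the cosine of the angle between the wavevector and the line of sight.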
This survey has been a major effort by an extensive team of astronomers: it has involved spectroscopic measurements of almost a quarter of a million galaxies, spread over 1000 square degrees on the sky, and has taken almost five years to complete. The results are consistent with the standard ΛCDM cosmological model, and in particular with the existence of the dark energy that this model implies, but which we don’t have a theoretical explanation for.
This is all excellent stuff and it obviously lends further observational support to the standard model. However, I'm not sure I agree with the headline of the press release put out by the WiggleZ team: "Dark Energy is Real". I certainly agree that dark energy is a plausible explanation for a host of relevant observations, but do we really know for sure that it is "real"? Can we really be sure that there is no other explanation? WiggleZ has certainly produced evidence that's sufficient to rule out some alternative models, but that's not the same as proof. I worry when scientists speak like this, with what sounds like certainty, about things that are far from proven. Just because nobody has thought of an alternative explanation doesn't mean that none exists.
The problem is that a press release entitled "Dark Energy is Real" is much more likely to be picked up by a newspaper, radio or TV editor than one that says "Dark Energy Remains Best Explanation"…
As a cosmologist, I am often asked why it is that people talk about the cosmological constant as if it were some sort of vacuum energy or "dark energy". I was explaining it again to a student today so I thought I'd jot something down here so I can use it for future reference. In a nutshell, it goes like this. The original form of Einstein's equations for general relativity can be written

$$R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$$
The precise meaning of the terms on the left hand side doesn't really matter, but basically they describe the curvature of space-time and are derived from the Ricci tensor $R_{\mu\nu}$ and the metric tensor $g_{\mu\nu}$; this is how Einstein's theory expresses the effect of gravity warping space. On the right hand side we have the energy-momentum tensor (sometimes called the stress tensor) $T_{\mu\nu}$, which describes the distribution of matter and its motion. Einstein's equations can be summarised in John Archibald Wheeler's pithy phrase: "Space tells matter how to move; matter tells space how to curve".
In standard cosmology we usually assume that we can describe the matter-energy content of the Universe as a uniform perfect fluid, for which the energy-momentum tensor takes the simple form

$$T_{\mu\nu} = \left(\rho + \frac{p}{c^2}\right) u_\mu u_\nu + p\, g_{\mu\nu},$$

in which $p$ is the pressure and $\rho$ the density; $u_\mu$ is the fluid's 4-velocity.
Einstein famously modified (or perhaps generalised) the original equations by adding a cosmological constant term to the left hand side thus:

$$R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$$
Doing this essentially modifies the description of gravity, or appears to do so because it happens to be written on the left hand side of the equation. In fact one could equally well move the term involving $\Lambda$ to the other side and absorb it into a redefined energy-momentum tensor, $\tilde{T}_{\mu\nu}$:

$$R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} = \frac{8\pi G}{c^4} \tilde{T}_{\mu\nu}.$$
The new energy-momentum tensor needed to make this work is of the form

$$\tilde{T}_{\mu\nu} = \left(\tilde{\rho} + \frac{\tilde{p}}{c^2}\right) u_\mu u_\nu + \tilde{p}\, g_{\mu\nu},$$

where

$$\tilde{p} = p - \frac{\Lambda c^4}{8\pi G} \qquad \mathrm{and} \qquad \tilde{\rho} = \rho + \frac{\Lambda c^2}{8\pi G}.$$
So the cosmological constant now looks like you didn't modify gravity at all, but created an additional contribution to the pressure and density of the original fluid. In fact, considering the correction terms on their own, it is clear that the cosmological constant acts exactly like an additional perfect fluid contribution with $p_\Lambda = -\rho_\Lambda c^2$.
This is just one simple example wherein a modification of the gravitational part of the theory can be made to look like the appearance of a peculiar form of matter. More complicated versions of this idea – most of them entirely speculative – abound in theoretical cosmology. That’s just what cosmologists are like.
Over the last few decades cosmology has suffered an invasion by – or rather, been stimulated and enriched by – particle physicists who would like to understand how such a mysterious form of energy might arise in their theories. That at least partly explains why, in one sense at least, modern cosmologists prefer to dress to the right.
Incidentally, another interesting point is why people say such a fluid describes a cosmological "vacuum" energy. In the cosmological setting, i.e. assuming the fluid is distributed in a homogeneous and isotropic fashion, the energy density of the expanding Universe varies with (cosmological proper) time according to

$$\dot{\rho} = -3 \frac{\dot{a}}{a} \left( \rho + \frac{p}{c^2} \right),$$
so for our strange fluid, with $p = -\rho c^2$, the term in brackets vanishes and we have $\dot{\rho} = 0$: the energy density is constant. As the universe expands, normal forms of matter and radiation get diluted, but the energy density of this stuff remains the same. It seems to me quite appropriate to describe as a vacuum something which, no matter how hard you try, you can't dilute!
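To make the dilution point concrete, here is a minimal numerical sketch (my own illustration, nothing from the original post) that integrates the continuity equation for fluids with a constant equation of state $p = w\rho c^2$; the density should follow $\rho \propto a^{-3(1+w)}$, which is constant for $w = -1$:

```python
import numpy as np

def rho_of_a(a, w, rho0=1.0):
    """Integrate the continuity equation d(rho)/d(ln a) = -3*(1 + w)*rho
    for a fluid with constant equation of state p = w*rho*c^2.
    The exact solution is rho = rho0 * a**(-3*(1 + w))."""
    lna = np.log(a)
    rho = np.empty_like(lna)
    rho[0] = rho0
    # Simple forward-Euler step in ln(a); accurate enough for a sketch
    for i in range(1, len(lna)):
        rho[i] = rho[i - 1] * (1.0 - 3.0 * (1.0 + w) * (lna[i] - lna[i - 1]))
    return rho

a = np.linspace(1.0, 10.0, 10_000)  # scale factor grows tenfold
for w, label in [(0.0, "matter (w = 0)"),
                 (1.0 / 3.0, "radiation (w = 1/3)"),
                 (-1.0, "vacuum energy (w = -1)")]:
    ratio = rho_of_a(a, w)[-1]
    exact = 10.0 ** (-3.0 * (1.0 + w))
    print(f"{label:24s} rho(a=10)/rho(a=1) = {ratio:.4g}  (exact {exact:.4g})")
```

Matter dilutes as $a^{-3}$ and radiation as $a^{-4}$, but the $w=-1$ fluid refuses to dilute at all.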
Just time for a quickie today because tomorrow is the first day of teaching (in what we optimistically call the “Spring Semester”) and I’ve decided to head into the department this afternoon to prepare some handouts and concoct some appropriately fiendish examples for my first problem set.
I thought I’d take the opportunity to add a little postscript to some comments I made in a post earlier this week on the subject of misguided criticisms of science. Where I (sometimes) tend to agree with some such attacks is when they are aimed at scientists who have exaggerated levels of confidence in the certainty of their results. The point is that scientific results are always conditional, which is to say that they are of the form “IF we assume this theoretical framework and have accounted for all sources of error THEN we can say this”.
To give an example from my own field of cosmology we could say “IF we assume the general theory of relativity applies and the Universe is homogeneous and isotropic on large scales and we have dealt with all the instrumental uncertainties involved etc etc THEN 74% of the energy density in the Universe is in a form we don’t understand (i.e. dark energy).” We don’t know for sure that dark energy exists, although it’s a pretty solid inference, because it’s by no means certain that our assumptions – and there are a lot of them – are all correct.
Similar statements are made in the literature across the entire spectrum of science. We don't deal with absolute truths, but always work within a given theoretical framework which we should always be aware might be wrong. Uncertainty also derives from measurement error and statistical noise. A scientist's job is to deal with all these ifs, buts and don't-knows in as hard-nosed a way as possible.
The big problem is that, for a variety of reasons, many people out there don’t understand that this is the way science works. They think of science in terms of a collection of yes or no answers to well-posed questions, not the difficult and gradual process of gathering understanding from partial clues and (occasionally inspired) guesswork.
Why is this? There are several reasons. One is that our system of science education does not place sufficient emphasis on science-as-method as opposed to science-as-facts. Another is that the media don’t have time for scientists to explain the uncertainties. With only a two-minute slot on the news to explain cosmology to a viewer waiting for the football results all you can do is deliver a soundbite.
This is what I wrote in my book From Cosmos to Chaos:
Very few journalists or television producers know enough about science to report sensibly on the latest discoveries or controversies. As a result, important matters that the public needs to know about do not appear at all in the media, or if they do it is in such a garbled fashion that they do more harm than good. I have listened many times to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.
Here’s another, more recent, example. A couple of weeks ago, a clutch of early release papers from the Planck satellite came out; I blogged about them here. Among these results were some interesting new insights concerning the nature of the Anomalous Microwave Emission (AME) from the Milky Way; the subject of an excellent presentation by Clive Dickinson at the conference where the results were announced.
Now look at the actual result. The little bump in the middle is the contribution from the anomalous emission, and the curve underneath it shows the corresponding “spinning dust” model:
There’s certainly evidence that supports this interpretation, but it’s clearly nowhere near the level of “proof”. In fact, in Clive’s talk he stated the result as follows:
Plausible physical models appear to fit the data
OK, so that would never do for a headline in a popular magazine, but I hope I’ve made my point. There’s a big difference between what this particular scientist said and what was presented through the media.
I hope you’re not thinking that I’m criticising this bit of work. Having read the papers I think it’s excellent science.
But it’s not just the fault of the educationalists and the media. Certain scientists play this dangerous game themselves. Some enjoy their 15 minutes – or, more likely, two minutes – of fame so much that they will happily give the journalists what they want regardless of the consequences. Worse still, even in the refereed scientific literature you can find examples of scientists clearly overstating the confidence that should be placed in their results. We’re all human, of course, but my point is that a proper statement of the caveats is at least as much a part of good science as theoretical calculation, clever instrument design or accurate observation and experiment.
We can complain all we like about non-scientists making ill-informed criticisms of science, but we need to do a much better job at being honest about what little we really know and resist the temptation to be too certain.
With the reaction to Simon Jenkins’ rant about science being just a kind of religion gradually abating, I suddenly remembered that I ended a book I wrote in 1998 with a discussion of the image of science as a kind of priesthood. The book was about the famous eclipse expedition of 1919 that provided some degree of experimental confirmation of Einstein’s general theory of relativity and which I blogged about at some length last year, on its 90th anniversary.
I decided to post the last few paragraphs here to show that I do think there is a valuable point that Simon Jenkins could have made out of the scientist-as-priest idea. It’s to do with the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.
To my mind the thing that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions. The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.
This is what I wrote in 1998:
Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.
Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.
As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.
Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.
As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies and life itself came into being. But not only is the theoretical ambition of science getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. At the same time that science reaches further and further beyond the general public, the more it relies on their taxes.
Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.
PS. You will note that I was voicing doubts about the interpretation of the early results from supernovae in 1998 that suggested the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain skeptical that we cosmologists are on the right track about this. Don't get me wrong – I think the standard cosmological model is the best working hypothesis we have; I just think we're probably missing some important pieces of the puzzle. I don't apologise for that. I think skeptical is what a scientist should be.
Interesting press release today from the Royal Astronomical Society about a paper (preprint version here) which casts doubt on whether the Wilkinson Microwave Anisotropy Probe supports the standard cosmological model to the extent that is generally claimed. Apologies if this is a bit more technical than my usual posts (but I like occasionally to pretend that it’s a science blog).
The abstract of the paper (by Sawangwit & Shanks) reads
Using the published WMAP 5-year data, we first show how sensitive the WMAP power spectra are to the form of the WMAP beam. It is well known that the beam profile derived from observations of Jupiter is non-Gaussian and indeed extends, in the W band for example, well beyond its 12.6′ FWHM core out to more than 1 degree in radius. This means that even though the core width corresponds to wavenumber l ~ 1800, the form of the beam still significantly affects the WMAP results even at l ~ 200, which is the scale of the first acoustic peak. The difference between the beam-convolved Cl and the final Cl is ~70% at the scale of the first peak, rising to ~400% at the scale of the second. New estimates of the Q, V and W-band beam profiles are then presented, based on a stacking analysis of the WMAP5 radio source catalogue and temperature maps. The radio sources show a significantly (3-4σ) broader beam profile on scales of 10′-30′ than that found by the WMAP team, whose beam analysis is based on measurements of Jupiter. Beyond these scales the beam profiles from the radio sources are too noisy to give useful information. Furthermore, we find tentative evidence for a non-linear relation between WMAP and ATCA/IRAM 95 GHz source fluxes. We discuss whether the wide beam profiles could be caused either by radio source extension or clustering and find that neither explanation is likely. We also argue against the possibility that Eddington bias is affecting our results. The reasons for the difference between the radio source and the Jupiter beam profiles are therefore still unclear. If the radio source profiles were then used to define the WMAP beam, there could be a significant change in the amplitude and position of even the first acoustic peak. It is therefore important to identify the reasons for the differences between these two beam profile estimates.
The press release puts it somewhat more dramatically
New research by astronomers in the Physics Department at Durham University suggests that the conventional wisdom about the content of the Universe may be wrong. Graduate student Utane Sawangwit and Professor Tom Shanks looked at observations from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite to study the remnant heat from the Big Bang. The two scientists find evidence that the errors in its data may be much larger than previously thought, which in turn makes the standard model of the Universe open to question. The team publish their results in a letter to the journal Monthly Notices of the Royal Astronomical Society.
I dare say the WMAP team will respond in due course, but this paper spurred me to mention some work on this topic that was done by my friend (and former student) Lung-Yih Chiang. During his last visit to Cardiff we discussed this at great length and got very excited at one point when we thought we had discovered an error along the lines that the present paper claims. However, looking more carefully into it we decided that this wasn’t the case and we abandoned our plans to publish a paper on it.
Let me show you a few slides from a presentation that Lung-Yih gave to me a while ago. For a start here is the famous power-spectrum of the temperature fluctuations of the cosmic microwave background which plays an essential role in determining the parameters of the standard cosmology:
The position of the so-called “acoustic peak” plays an important role in determining the overall curvature of space-time on cosmological scales and the higher-order peaks pin down other parameters. However, it must be remembered that WMAP doesn’t just observe the cosmic microwave background. The signal it receives is heavily polluted by contamination from within our Galaxy and there is also significant instrumental noise. To deal with this problem, the WMAP team exploit the five different frequency channels with which the probe is equipped, as shown in the picture below.
The CMB, being described by a black-body spectrum, has a sky temperature that doesn't vary with frequency. Foreground emission, on the other hand, has an effective temperature that varies with frequency in a way that is fairly well understood. The five available channels can therefore be used to model and subtract the foreground contribution to the overall signal. However, the different channels have different angular resolution (because they correspond to different wavelengths of radiation). Here are some sample patches of sky illustrating this:
At each frequency the sky is blurred out by the "beam" of the WMAP optical system; the blurring is worse at low frequencies than at high frequencies. In order to do the foreground subtraction, the WMAP team therefore smooth all the frequency maps to have the same resolution, i.e. so that the net effect of optical resolution and artificial smoothing produces the same overall blurring (actually 1 degree). Doing this accurately requires precise knowledge of the form of the beam response of the experiment. A rough example (for illustration only) is given in the caption above.
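As a toy illustration of the beam-matching step (Gaussian beams and made-up channel resolutions only, nothing like the real WMAP beam model), each channel can be given an extra harmonic-space smoothing chosen so that its native beam times the extra smoothing equals a common 1-degree window:

```python
import numpy as np

def gaussian_beam(ell, fwhm_arcmin):
    """Harmonic-space window function B_ell of a Gaussian beam."""
    sigma = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-0.5 * ell * (ell + 1.0) * sigma**2)

ell = np.arange(2, 1501)
target = gaussian_beam(ell, 60.0)  # common 1-degree resolution

# Native resolutions for five hypothetical channels (illustrative numbers only)
for fwhm in [53.0, 40.0, 31.0, 21.0, 13.0]:
    native = gaussian_beam(ell, fwhm)
    extra = target / native  # extra smoothing to apply to this channel's a_lm
    # After smoothing, every channel has the same effective 1-degree window:
    assert np.allclose(native * extra, target)
    print(f"native FWHM {fwhm:4.1f} arcmin -> extra smoothing at l=200: {extra[198]:.3f}")
```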
Now, here are the power spectra of the maps in each frequency channel:
Note that this is Cl, not l(l+1)Cl as in the first plot of the spectrum. Now you see how much foreground there is in the data: the curves would lie on top of each other if the signal were pure CMB, i.e. if it did not vary with frequency. The equation at the bottom basically just says that the overall spectrum is a smoothed version of the CMB plus the foregrounds plus noise. Note, crucially, that the smoothing suppresses the interesting high-l wiggles.
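That equation isn't reproduced here, but schematically it must take a form something like

$$C_\ell^{\mathrm{obs}} = B_\ell^2 \left( C_\ell^{\mathrm{CMB}} + C_\ell^{\mathrm{fg}} \right) + N_\ell,$$

where $B_\ell$ is the beam window function, $C_\ell^{\mathrm{fg}}$ the foreground contribution and $N_\ell$ the noise spectrum; it is the $B_\ell^2$ factor that suppresses the high-harmonic wiggles.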
I haven’t got space-time enough to go into how the foreground subtraction is carried out, but once it is done it is necessary to “unblur” the maps in order to see the structure at small angular scales, i.e. at large spherical harmonic numbers l. The initial process of convolving the sky pattern with a filter corresponds to multiplying the power-spectrum with a “window function” that decreases sharply at high l, so to deconvolve the spectrum one essentially has to divide by this window function to reinstate the power removed at high harmonics.
This is where it all gets very tricky. The smoothing applied is very close to the scale of the acoustic peaks so you have to do it very carefully to avoid introducing artificial structure in Cl or obliterating structure that you want to see. Moreover, a small error in the beam gets blown up in the deconvolution so one can go badly wrong in recovering the final spectrum. In other words, you need to know the beam very well to have any chance of getting close to the right answer!
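Here's a toy demonstration of that sensitivity (a Gaussian-beam caricature of my own, not the actual WMAP beam or pipeline): smooth a spectrum with one beam, then deconvolve it assuming a beam width that is wrong by just two per cent.

```python
import numpy as np

def beam_window_sq(ell, fwhm_arcmin):
    """Squared harmonic-space window, B_ell^2, of a Gaussian beam."""
    sigma = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-ell * (ell + 1.0) * sigma**2)

ell = np.arange(2, 1001)
cl_true = 1.0 / (ell * (ell + 1.0))  # stand-in spectrum; its shape is irrelevant here

fwhm = 21.0  # arcmin; an arbitrary illustrative beam width
observed = cl_true * beam_window_sq(ell, fwhm)

# Deconvolve assuming a beam just 2% narrower than the true one
recovered = observed / beam_window_sq(ell, 0.98 * fwhm)

for l in (200, 400, 600, 800):
    print(f"l = {l}: recovered/true = {recovered[l - 2] / cl_true[l - 2]:.3f}")
```

Even this 2 per cent error leaves the region of the first acoustic peak (l ~ 200) almost untouched, but it suppresses the recovered spectrum by roughly 15 per cent at l ~ 800; with the real, non-Gaussian beam the cross-talk between harmonics is messier still, as noted below.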
The next picture gives a rough model of how much the "recovered" spectrum is affected if one makes even a small error in the beam profile which, for illustration only, is assumed to be Gaussian. It shows how sensitive the shape of the deconvolved spectrum is to small errors in the beam.
Incidentally, the ratty blue line shows the spectrum obtained from a small patch of the sky rather than the whole sky. We were interested to see how much the spectrum varied across the sky so broke it up into square patches about the same size as those analysed by the Boomerang experiment. This turns out to be a pretty good way of getting the acoustic peak position but, as you can see, you lose information at low l (i.e. on scales larger than the patch).
The WMAP beam isn’t actually Gaussian – it differs quite markedly in its tails, which means that there’s even more cross-talk between different harmonic modes than in this example – but I hope you get the basic point. As Sawangwit & Shanks say, you need to know the beam very well to get the right fluctuation spectrum out. Move the acoustic peak around only slightly and all bets are off about the cosmological parameters and, perhaps, the evidence for dark energy and dark matter. Lung-Yih looked at the way the WMAP had done it and concluded that if their published beam shape was right then they had done a good job and there’s nothing substantially wrong with the results shown in the first graph.
Sawangwit & Shanks suggest the beam isn’t right so the recovered angular spectrum is suspect. I’ll need to look a bit more at the evidence they consider before commenting on that, although if anyone else has worked through it I’d be happy to hear from them through the comments box!