Archive for Cosmology

Astronomy Look-alikes, No. 40

Posted in Astronomy Lookalikes, The Universe and Stuff on September 10, 2010 by telescoper

Obviously someone else has already noticed the remarkable similarity between the structure of the human brain and that revealed by computer simulations of the large-scale structure of the Universe.

Does this mean that dark matter is really just all in the mind?



The Next Decade of Astronomy?

Posted in Science Politics, The Universe and Stuff on August 14, 2010 by telescoper

I feel obliged to pass on the news that the results of the Decadal Review of US Astronomy were announced yesterday. There has already been a considerable amount of reaction to what the Review Panel (chaired by the esteemed Roger Blandford) came up with from people much more knowledgeable about observational astronomy and indeed US Science Politics, so I won’t try to do a comprehensive analysis here. I draw your attention instead to the report itself  (which you can download in PDF form for free)  and Julianne Dalcanton’s review of, and comments on, the Panel’s conclusions about the priorities for  space-based and ground-based astronomy for the next decade or so over on Cosmic Variance.  There’s also a piece by Andy Lawrence over on The e-Astronomer’s blog. I’ll just mention that Top of the Pops for space-based astronomy is the Wide-Field Infrared Survey Telescope (WFIRST) which you can read a bit more about here, and King of the Castle for the ground-based programme is the Large Synoptic Survey Telescope (LSST). Both of these hold great promise for the area I work in – cosmology and extragalactic astrophysics – so I’m pleased to see our American cousins placing such a high priority on them. The Laser Interferometer Space Antenna (LISA), which is designed to detect gravitational waves, also did very well, which is great news for Cardiff’s Gravitational Physics group.

It will be interesting to see what effect – if any – these priorities have on the ranking of corresponding projects this side of the Atlantic. Some of the space missions involved in the Decadal Review in fact depend on both NASA and ESA so there clearly will be a big effect on such cases. For example, the proposed International X-ray Observatory (IXO) did less well than many might have anticipated, with clear implications for Europe (including the UK). The current landscape of X-ray astronomy is dominated by Chandra and XMM, both of which were launched in 1999 and are nearing the end of their operational lives. Since X-ray astronomy can only be done from space, abandoning IXO would basically mean the end of the subject as we know it, but the question is how to bridge the gap between the end of these two missions and the start of IXO, which, even if it does go ahead, may not launch until long after 2020. Should we keep X-ray astronomers on the payroll twiddling their thumbs for the next decade when other fields are desperately short of manpower for science exploitation?

On a more general level, it’s not obvious how we should react when the US gives a high priority to a given mission anyway. Of course, it gives us confidence that we’re not being silly when very smart people across the Pond endorse missions and facilities similar to ones we are considering over here. However, generally speaking the Americans tend to be able to bring missions from the drawing board to completion much faster than we can in Europe. Just compare WMAP with Planck, for instance. Trying to compete with the US, rather than collaborate, seems likely to ensure only that we remain second best. There’s an argument, therefore, for Europe having a programme that is, in some respects at least, orthogonal to the United States; in matters where we don’t collaborate, we should go for facilities that complement rather than compete with those the Americans are building.

It’s all very well talking of priorities in the UK but we all know that the Grim Reaper is shortly going to be paying a visit to the budget of the agency that administers funding for our astronomy, STFC. This organization went through a financial crisis all of its very own in 2007 from which it is still reeling. Now it has to face the prospect of further savage cuts. The level of “savings” being discussed – at least 25% – means that the STFC management must be pondering some pretty drastic measures, even pulling out of the European Southern Observatory (which we only joined in 2002). The trouble is that most of the other ground-based astronomical facilities used by UK astronomers have been earmarked for closure, or STFC has withdrawn from them. Britain’s long history of excellence in ground-based astronomy now hangs in the balance. It’s scary.

I hope the government can be persuaded that STFC should be spared another big cut and I’m sure that there’s extensive lobbying going on. Indeed, STFC has already requested input to its plans for the ongoing Comprehensive Spending Review (CSR). With this in mind, the Royal Astronomical Society has produced a new booklet designed to point out the relevance of astronomy to wider society. However, I can’t rid from my mind the memory of a certain meeting in London in 2007 at which the STFC Chief Executive revealed the true scale of STFC’s problems. He predicted that things would be much worse at the next CSR, i.e. this one. And that was before the Credit Crunch, and the consequent arrival of a new government swinging a very large axe. I wish I could be optimistic but, frankly, I’m not.

When the CSR is completed, STFC will yet again have to carry out a hasty re-prioritisation. Its Science Board has clearly been preparing:

… Science Board discussed a number of thought provoking scenarios designed to explore the sort of issues that the Executive may be confronted with if there were to be a significant funding reduction as a result of the 2010 comprehensive spending review settlement. As a result of these deliberations Science Board provided the Executive with guidance on how to take forward this strategic planning.

This illustrates a big difference in the way such prioritisation exercises are carried out in the UK versus the USA. The Decadal Review described above is a high-profile study, carried out by a panel of distinguished experts, which takes detailed input from a large number of scientists, and which delivers a coherent long-term vision for the future of the subject. I’m sure not everyone agrees with their conclusions, but the vast majority respect its impartiality and level-headedness and have confidence in the overall process. Here in the UK we have “consultation exercises” involving “advisory panels” who draw up detailed advice which then gets fed into STFC’s internal panels. That bit is much like the Decadal Review. However, at least in the case of the last prioritisation exercise, the community input doesn’t seem to bear any obvious relationship to what comes out the other end. I appreciate that there are probably more constraints on STFC’s Science Board than it has degrees of freedom, but there’s no getting away from the sense of alienation and cynicism this has generated across large sections of the UK astronomy community.

The problem with our system is that we always seem to be reacting to financial pressure rather than taking the truly long-term “blue-skies” view that is clearly needed for big science projects of the type under discussion. The Decadal Review, for example, places great importance on striking a balance between large- and small-scale experiments. Here we tend to slash the latter because they’re easier to kill than the former. If this policy goes on much longer, in the long run we’ll end up with a few enormous, expensive facilities but none of the truly excellent science that can be done using smaller kit. A crucial aspect of this is that science seems to have been steadily relegated in importance in favour of technology ever since the creation of STFC. This must be reversed. We need a proper strategic advisory panel with strong scientific credentials that stands outside the existing STFC structure but which has real influence on STFC planning, i.e. one which plays the same role in the UK as the Decadal Review does in the States.

Assuming, of course, that there’s any UK astronomy left in the next decade…

The Fractal Universe, Part 1

Posted in The Universe and Stuff on August 4, 2010 by telescoper

A long time ago I blogged about the Cosmic Web and one of the comments there suggested I write something about the idea that the large-scale structure of the Universe might be some sort of fractal.  There’s a small (but vocal) group of cosmologists who favour fractal cosmological models over the more orthodox cosmology favoured by the majority, so it’s definitely something worth writing about. I have been meaning to post something about it for some time now, but it’s too big and technical a matter to cover in one item. I’ve therefore decided to start by posting a slightly edited version of a short News and Views piece I wrote about the  question in 1998. It’s very out of date on the observational side, but I thought it would be good to set the scene for later developments (mentioned in the last paragraph), which I hope to cover in future posts.

—0—

One of the central tenets of cosmological orthodoxy is the Cosmological Principle, which states that, in a broad-brush sense, the Universe is the same in every place and in every direction. This assumption has enabled cosmologists to obtain relatively simple solutions of Einstein’s General Theory of Relativity that describe the dynamical behaviour of the Universe as a whole. These solutions, called the Friedmann models [1], form the basis of the Big Bang theory. But is the Cosmological Principle true? Not according to Francesco Sylos-Labini et al. [2], who argue, controversially, that the Universe is not uniform at all, but has a never-ending hierarchical structure in which galaxies group together in clusters which, in turn, group together in superclusters, and so on.

These claims are completely at odds with the Cosmological Principle and therefore with the Friedmann models and the entire Big Bang theory. The central thrust of the work of Sylos-Labini et al. is that the statistical methods used by cosmologists to analyse galaxy clustering data are inappropriate because they assume the property of large-scale homogeneity at the outset. If one does not wish to assume this then one must use different methods.

What they do is to assume that the Universe is better described in terms of a fractal set characterized by a fractal dimension D. In a fractal set, the mean number of neighbours of a given galaxy within a volume of radius R is proportional to R^D. If galaxies are distributed uniformly then D = 3, as the number of neighbours simply grows with the volume of the sphere, i.e. as R^3, times the average number-density of galaxies. A value of D < 3 indicates that the galaxies do not fill space in a homogeneous fashion: D = 1, for example, would indicate that galaxies were distributed in roughly linear structures (filaments); the mass of material distributed along a filament enclosed within a sphere grows linearly with the radius of the sphere, i.e. as R^1, not as its volume. Sylos-Labini et al. argue that D = 2, which suggests a roughly planar (sheet-like) distribution of galaxies.
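(As an aside: here’s a little toy calculation, in Python, showing what the exponent D measures. It is emphatically not the estimator Sylos-Labini et al. actually use, just an illustration: generate a uniform set of points and a sheet-like set in a box, count the mean number of neighbours within a distance R, and fit the slope of log N against log R.)

```python
# A toy version, for illustration only, of the counts-in-spheres idea
# (NOT the conditional-density estimator of Sylos-Labini et al.).
# Estimate D as the slope of log <N(<R)> against log R.
import numpy as np

rng = np.random.default_rng(42)
npts = 20000

# Two fake "catalogues" in a unit box: a homogeneous set (expect D = 3)
# and a planar, sheet-like set (expect D = 2).
homogeneous = rng.uniform(size=(npts, 3))
sheet = np.column_stack([rng.uniform(size=(npts, 2)), np.full(npts, 0.5)])

def fractal_dimension(points, radii):
    """Fit D in <N(<R)> ~ R^D, using interior centres to limit edge bias."""
    interior = points[np.all((points > 0.2) & (points < 0.8), axis=1)]
    centres = interior[:100]
    dists = np.linalg.norm(points[None, :, :] - centres[:, None, :], axis=2)
    # Mean neighbour count within each radius, excluding the centre itself
    mean_counts = [np.mean(np.sum(dists < r, axis=1)) - 1.0 for r in radii]
    slope, _ = np.polyfit(np.log(radii), np.log(mean_counts), 1)
    return slope

radii = np.logspace(np.log10(0.03), np.log10(0.15), 8)
print(f"homogeneous set: D = {fractal_dimension(homogeneous, radii):.2f}")
print(f"sheet-like set:  D = {fractal_dimension(sheet, radii):.2f}")
```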

Most cosmologists would accept that the distribution of galaxies on relatively small scales, up to perhaps a few tens of megaparsecs (Mpc), can indeed be described in terms of a fractal model. This small-scale clustering is expected to be dominated by purely gravitational physics, and gravity has no particular length scale associated with it. But standard theory requires that the fractal dimension should approach the homogeneous value D = 3 on large enough scales. According to standard models of cosmological structure formation, this transition should occur on scales of a few hundred Mpc.

The main source of the controversy is that most available three-dimensional maps of galaxy positions are not large enough to encompass the expected transition to homogeneity. Distances must be inferred from redshifts, and it is difficult to construct these maps from redshift surveys, which require spectroscopic studies of large numbers of galaxies.

Sylos-Labini et al. have analysed a number of redshift surveys, including the largest so far available, the Las Campanas Redshift Survey [3]; see below. They find D = 2 for all the data they look at, and argue that there is no transition to homogeneity for scales up to 4,000 Mpc, way beyond the expected turnover. If this were true, it would indeed be bad news for the orthodox among us.

The survey maps the Universe out to recession velocities of 60,000 km/s, corresponding to distances of a few hundred million parsecs. Although no fractal structure on the largest scales is apparent (there are no clear voids or concentrations on the same scale as the whole map), one statistical analysis [2] finds a fractal dimension of two in this and other surveys, for all scales – conflicting with a basic principle of cosmology.

Their results are, however, at variance with the visual appearance of the Las Campanas survey, for example, which certainly seems to display large-scale homogeneity. Objections to these claims have been lodged by Luigi Guzzo [4], for instance, who has criticized their handling of the data and has presented independent results that appear to be consistent with a transition to homogeneity. It is also true that Sylos-Labini et al. have done their cause no good by basing some conclusions on a heterogeneous compilation of redshifts called the LEDA database [5], which is not a controlled sample and so is completely unsuitable for this kind of study. Finally, it seems clear that they have substantially overestimated the effective depth of the catalogues they are using. But although their claims remain controversial, the consistency of the results obtained by Sylos-Labini et al. is impressive enough to raise doubts about the standard picture.

Mainstream cosmologists are not yet so worried as to abandon the Cosmological Principle. Most are probably quite happy to admit that there is no overwhelming direct evidence in favour of global uniformity from current three-dimensional galaxy catalogues, which are in any case relatively shallow. But this does not mean there is no evidence at all: the near-isotropy of the sky temperature of the cosmic microwave background, the uniformity of the cosmic X-ray background, and the properties of source counts are all difficult to explain unless the Universe is homogeneous on large scales [6]. Moreover, Hubble’s law itself is a consequence of large-scale homogeneity: if the Universe were inhomogeneous one would not expect to see a uniform expansion, but an irregular pattern of velocities resulting from large-scale density fluctuations.

But above all, it is the principle of Occam’s razor that guides us: in the absence of clear evidence against it, the simplest model compatible with the data is to be preferred. Several observational projects are already under way, including the Sloan Digital Sky Survey and the Anglo-Australian 2dF Galaxy Redshift Survey, that should chart the spatial distribution of galaxies in enough detail to provide an unambiguous answer to the question of large-scale cosmic uniformity. In the meantime, and in the absence of clear evidence against it, the Cosmological Principle remains an essential part of the Big Bang theory.

References

  1. Friedmann, A. Z. Phys. 10, 377–386 (1922).
  2. Sylos-Labini, F., Montuori, M. & Pietronero, L. Phys. Rep. 293, 61–226 (1998).
  3. Shectman, S. et al. Astrophys. J. 470, 172–188 (1996).
  4. Guzzo, L. New Astron. 2, 517–532 (1997).
  5. Paturel, G. et al. in Information and Online Data in Astronomy (eds Egret, D. & Albrecht, M.) 115 (Kluwer, Dordrecht, 1995).
  6. Peebles, P. J. E. Principles of Physical Cosmology (Princeton Univ. Press, NJ, 1993).

Space: The Final Frontier?

Posted in The Universe and Stuff on July 9, 2010 by telescoper

I found this on my laptop just now. Apparently I wrote it in 2003, but I can’t remember what it was for. Still, when you’ve got a hungry blog to feed, who cares about a little recycling?

It seems to be part of human nature to feel the urge to understand our relationship to the Universe. In ancient times, attempts to cope with the vastness and complexity of the world were usually in terms of myth or legend, but even the most primitive civilizations knew the value of careful observation. Astronomy, the science of the heavens, began with attempts to understand the regular motions of the Sun, planets and stars across the sky. Astronomy also aided the first human explorations of our own Earth, providing accurate clocks and navigation aids. But during this age the heavens remained remote and inaccessible, their nature far from understood, and the idea that they themselves could some day be explored was unthinkable. Difficult frontiers may have been crossed on Earth, but that of space seemed impassable.

The invention of the telescope ushered in a new era of cosmic discovery, during which we learned for the first time precisely how distant the heavenly bodies were and what they were made of.  Galileo saw that Jupiter had moons going around it, just like the Earth. Why, then, should the Earth be thought of as the centre of the Universe? The later discovery, made in the 19th Century using spectroscopy, that the Sun and planets were even made of the same type of material as commonly found on Earth made it entirely reasonable to speculate that there could be other worlds just like our own. Was there any theoretical reason why we might not be able to visit them?

No theoretical reason, perhaps, but certainly practical ones. For a start, there’s the small matter of getting “up there”. Powered flying machines came on the scene about one hundred years ago, but conventional aircraft simply can’t travel fast enough to escape the pull of Earth’s gravity. This problem was eventually solved by adapting technology developed during World War II to produce rockets of increasingly large size and thrusting power. Cold-war rivalry between the USA and the USSR led to the space race of the 1960s culminating in the Apollo missions to the Moon in the late 60s and early 70s. These missions were enormously expensive and have never been repeated, although both NASA and the European Space Agency are currently attempting to gather sufficient funds to (eventually) send manned missions to Mars.

But manned spaceflights have been responsible for only a small fraction of the scientific exploration of space. Robotic probes have been dispatched all over the Solar System. Some have failed, but at a tiny fraction of the cost of manned missions. Landings have been made on the solid surfaces of Venus, Mars and Titan, and probes have flown past the gas giants Jupiter, Saturn, Uranus and Neptune, taking beautiful images of these bizarre frozen worlds.

Space is also a superb vantage point for astronomical observation. Above the Earth’s atmosphere there is no twinkling of star images, so even a relatively small telescope like the Hubble Space Telescope (HST) can resolve details that are blurred when seen from the ground. Telescopes in space can also view the entire sky, which is not possible from a point on the Earth’s surface. From space we can see different kinds of light that do not reach the ground: from gamma rays and X-rays produced by very energetic objects such as black holes, down to the microwave background which bathes the Universe in a faint afterglow of its creation in the Big Bang. Recently the Wilkinson Microwave Anisotropy Probe (WMAP) charted the properties of this cosmic radiation across the entire sky, yielding precise measurements of the size and age of the Universe. Planck and Herschel are pushing back the cosmic frontier as I write, and many more missions are planned for the future.

Over the last decade, the use of dedicated space observatories, such as HST and WMAP, in tandem with conventional terrestrial facilities, has led to a revolution in our understanding of how the Universe works. We are now convinced that the Universe began with a Big Bang, about 14 billion years ago. We know that our galaxy, the Milky Way, is just one of billions of similar objects that condensed out of the cosmic fireball as it expanded and cooled. We know that most galaxies have a black hole in their centre which gobbles up everything falling into it, even light. We know that the Universe contains a great deal of mysterious dark matter and that empty space is filled with a form of dark energy, known in the trade as the cosmological constant. We know that our own star, the Sun, is a few billion years old and that the planets formed from a disk of dusty debris that accompanied the infant star during its birth. We also know that planets are by no means rare: nearly two hundred exoplanets (that is, planets outside our Solar System) have so far been discovered. Most of these are giants, some even larger than Jupiter, which is itself about 300 times more massive than Earth, but this may simply be because big objects are easier to find than small ones.

But there is still a lot we don’t know, especially about the details. The formation of stars and planets is a process so complicated that it makes weather forecasting look simple. We simply have no way of knowing what determines how many stars have solid planets, how many have gas giants, how many have both and how many have neither. In order to support life, a planet must be in an orbit which is neither too close to its parent star (where it would be too hot for life to exist) nor too far away (where it would be too cold). We also know very little about how life evolves from simple molecules or how robust it is to the extreme environments that might be found elsewhere in our Universe. It is safe to say that we have absolutely no idea how common life is within our own Galaxy or the Universe at large.

Within the next century it seems likely that we will know whether there is life elsewhere in our Solar System. We will probably also be able to figure out how many earth-like exoplanets there are “out there”. But the unimaginable distances between stars in our galaxy make it very unlikely that crude rocket technology will enable us to physically explore anything beyond our own backyard in the foreseeable future.

So will space forever remain the final frontier? Will we ever explore our Galaxy in person, rather than through remote observation? The answer to these questions is that we don’t know for sure, but the laws of nature may have legal loopholes (called “wormholes”) that just might allow us to travel faster than light if we ever figure out how to exploit them. If we can do it then we could travel across our Galaxy in hours rather than aeons. This will require a revolution in our understanding not just of space, but also of time. The scientific advances of the past few years would have been unimaginable only a century ago, so who is to say that it will never happen?

Ten Facts about Space Exploration

  1. The human exploration of space began on October 4th 1957 when the Soviet Union launched Sputnik, the first man-made satellite. The first man in space was also Russian: Yuri Gagarin, who completed one orbit of the Earth in the Vostok spacecraft in 1961. Apparently he was violently sick during the entire flight.
  2. The first man to set foot on the Moon was Neil Armstrong, on July 20th 1969. As he descended to the lunar surface, he said “That’s one small step for a man, one giant leap for mankind.”
  3. In all, six manned missions landed on the Moon (Apollo 11, 12, 14, 15, 16 and 17; Apollo 13 aborted its landing and returned to Earth after an explosion seriously damaged the spacecraft). Apollo 17 left the lunar surface on December 14th 1972, since when no human has set foot on it.
  4. The first reusable space vehicle was the Space Shuttle, four of which were originally built. Columbia was the first, launched in 1981, followed by Challenger in 1983, Discovery in 1984 and Atlantis in 1985. Challenger was destroyed by an explosion shortly after takeoff in 1986, and was replaced by Endeavour. Columbia disintegrated over Texas while attempting to land in 2003.
  5. The Viking 1 and Viking 2 missions landed on the surface of Mars in 1976; they sent back detailed information about the Martian soil. Tests for the presence of life proved inconclusive, but there is strong evidence that Mars once had running water on its surface.
  6. The outer planets (Jupiter, Saturn, Uranus and Neptune) have been studied by numerous fly-by probes, starting with Pioneer 10 (1973) and Pioneer 11 (1974). Voyager 1 and Voyager 2 flew past Jupiter in 1979; Voyager 2 went on to visit Uranus (1986) and Neptune (1989) after gravity assists from close approaches to Jupiter and Saturn. These missions revealed, among other things, that all these planets have spectacular ring systems – not just Saturn. More recently, the Cassini spacecraft released the Huygens probe in late 2004; it descended through the atmosphere of Titan in January 2005 and sent back amazing images of the surface of Saturn’s largest moon.
  7. Sending a vehicle into deep space requires enough energy to escape the gravitational pull of the Earth. This means exceeding the escape velocity of our planet, which is about 11 kilometres per second (nearly 40,000 kilometres per hour). Even travelling at this speed, a spacecraft will take many months to reach Mars, and years to escape the Solar System.
  8. The nearest star to our Sun is Proxima Centauri, about 4.2 light years away. This means that, even travelling at the speed of light (300,000 kilometres per second), which is as fast as anything can go according to known physics, a spacecraft would take over four years to get there. At the Earth’s escape velocity (11 kilometres per second), it would take over a hundred thousand years; see the short calculation after this list.
  9. Our Sun orbits within our own galaxy – the Milky Way – at a distance of about 30,000 light years from the centre at a speed of about 200 kilometres per second, taking about 250 million years to go around. The Milky Way contains about a hundred billion stars.
  10. The observable Universe has a radius of about 14 billion light years, and it contains about as many galaxies as there are stars in the Milky Way. If every star in every galaxy has just one planet then there are approximately ten thousand million million million other places where life could exist.
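The arithmetic behind facts 7 and 8 is easy enough to check yourself. Here’s a minimal back-of-envelope sketch: the values of G and the Earth’s mass and radius are the standard textbook ones, and the 4.2 light-year distance to Proxima Centauri is the figure quoted above.

```python
# A quick back-of-envelope check of facts 7 and 8.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24      # mass of the Earth, kg
R_earth = 6.371e6      # radius of the Earth, m

v_esc = math.sqrt(2.0 * G * M_earth / R_earth)
print(f"escape velocity: {v_esc / 1e3:.1f} km/s = {v_esc * 3.6:,.0f} km/h")

# Travel time to Proxima Centauri at the Earth's escape velocity
c = 3.0e8                               # speed of light, m/s
seconds_per_year = 365.25 * 24 * 3600
d_proxima = 4.2 * c * seconds_per_year  # 4.2 light years in metres
t_years = d_proxima / v_esc / seconds_per_year
print(f"time to Proxima at escape velocity: about {t_years:,.0f} years")
```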

Science as a Religion

Posted in Books, Talks and Reviews, Science Politics, The Universe and Stuff on July 6, 2010 by telescoper

With the reaction to Simon Jenkins’ rant about science being just a kind of religion gradually abating, I suddenly remembered that I ended a book I wrote in 1998 with a discussion of the image of science as a kind of priesthood. The book was about the famous eclipse expedition of 1919 that provided some degree of experimental confirmation of Einstein’s general theory of relativity and which I blogged about at some length last year, on its 90th anniversary.

I decided to post the last few paragraphs here to show that I do think there is a valuable point that Simon Jenkins could have made out of the scientist-as-priest idea. It’s to do with the responsibility scientists have to be honest about the limitations of their research and the uncertainties that surround any new discovery. Science has done great things for humanity, but it is fallible. Too many scientists are too certain about things that are far from proven. This can be damaging to science itself, as well as to the public perception of it. Bandwagons proliferate, stifling original ideas and leading to the construction of self-serving cartels. This is a fertile environment for conspiracy theories to flourish.

To my mind the thing  that really separates science from religion is that science is an investigative process, not a collection of truths. Each answer simply opens up more questions.  The public tends to see science as a collection of “facts” rather than a process of investigation. The scientific method has taught us a great deal about the way our Universe works, not through the exercise of blind faith but through the painstaking interplay of theory, experiment and observation.

This is what I wrote in 1998:

Science does not deal with ‘rights’ and ‘wrongs’. It deals instead with descriptions of reality that are either ‘useful’ or ‘not useful’. Newton’s theory of gravity was not shown to be ‘wrong’ by the eclipse expedition. It was merely shown that there were some phenomena it could not describe, and for which a more sophisticated theory was required. But Newton’s theory still yields perfectly reliable predictions in many situations, including, for example, the timing of total solar eclipses. When a theory is shown to be useful in a wide range of situations, it becomes part of our standard model of the world. But this doesn’t make it true, because we will never know whether future experiments may supersede it. It may well be the case that physical situations will be found where general relativity is supplanted by another theory of gravity. Indeed, physicists already know that Einstein’s theory breaks down when matter is so dense that quantum effects become important. Einstein himself realised that this would probably happen to his theory.

Putting together the material for this book, I was struck by the many parallels between the events of 1919 and coverage of similar topics in the newspapers of 1999. One of the hot topics for the media in January 1999, for example, has been the discovery by an international team of astronomers that distant exploding stars called supernovae are much fainter than had been predicted. To cut a long story short, this means that these objects are thought to be much further away than expected. The inference then is that not only is the Universe expanding, but it is doing so at a faster and faster rate as time passes. In other words, the Universe is accelerating. The only way that modern theories can account for this acceleration is to suggest that there is an additional source of energy pervading the very vacuum of space. These observations therefore hold profound implications for fundamental physics.

As always seems to be the case, the press present these observations as bald facts. As an astrophysicist, I know very well that they are far from unchallenged by the astronomical community. Lively debates about these results occur regularly at scientific meetings, and their status is far from established. In fact, only a year or two ago, precisely the same team was arguing for exactly the opposite conclusion based on their earlier data. But the media don’t seem to like representing science the way it actually is, as an arena in which ideas are vigorously debated and each result is presented with caveats and careful analysis of possible error. They prefer instead to portray scientists as priests, laying down the law without equivocation. The more esoteric the theory, the further it is beyond the grasp of the non-specialist, the more exalted is the priest. It is not that the public want to know – they want not to know but to believe.

Things seem to have been the same in 1919. Although the results from Sobral and Principe had then not received independent confirmation from other experiments, just as the new supernova experiments have not, they were still presented to the public at large as being definitive proof of something very profound. That the eclipse measurements later received confirmation is not the point. This kind of reporting can elevate scientists, at least temporarily, to the priesthood, but does nothing to bridge the ever-widening gap between what scientists do and what the public think they do.

As we enter a new Millennium, science continues to expand into areas still further beyond the comprehension of the general public. Particle physicists want to understand the structure of matter on tinier and tinier scales of length and time. Astronomers want to know how stars, galaxies  and life itself came into being. But not only is the theoretical ambition of science getting bigger. Experimental tests of modern particle theories require methods capable of probing objects a tiny fraction of the size of the nucleus of an atom. With devices such as the Hubble Space Telescope, astronomers can gather light that comes from sources so distant that it has taken most of the age of the Universe to reach us from them. But extending these experimental methods still further will require yet more money to be spent. At the same time that science reaches further and further beyond the general public, the more it relies on their taxes.

Many modern scientists themselves play a dangerous game with the truth, pushing their results one-sidedly into the media as part of the cut-throat battle for a share of scarce research funding. There may be short-term rewards, in grants and TV appearances, but in the long run the impact on the relationship between science and society can only be bad. The public responded to Einstein with unqualified admiration, but Big Science later gave the world nuclear weapons. The distorted image of scientist-as-priest is likely to lead only to alienation and further loss of public respect. Science is not a religion, and should not pretend to be one.

PS. You will note that I was voicing doubts about the interpretation of the early results from supernovae in 1998 that suggested the universe might be accelerating and that dark energy might be the reason for its behaviour. Although more evidence supporting this interpretation has since emerged from WMAP and other sources, I remain skeptical that we cosmologists are on the right track about this. Don’t get me wrong – I think the standard cosmological model is the best working hypothesis we have – I just think we’re probably missing some important pieces of the puzzle. I don’t apologise for that. I think skeptical is what a scientist should be.

The Planck Sky

Posted in The Universe and Stuff on July 5, 2010 by telescoper

Hot from the press today is a release of all-sky images from the European Space Agency’s Planck mission, including about a year’s worth of data. You can find a full set of high-resolution images here at the ESA website, along with a lot of explanatory text, and also here and here. Here’s a low-resolution image showing the galactic dust (blue) and radio (pink) emission concentrated in the plane of the Milky Way but extending above and below it. Only well away from the Galactic plane do you start to see an inkling of the pattern of fluctuations in the Cosmic Microwave Background that the survey is primarily intended to study.

It will take a lot of sustained effort and clever analysis to clean out the foreground contamination from the maps, so the cosmological interpretation will have to wait a while. In fact, the colour scale seems to have been chosen in such a way as to deter people from even trying to analyse the CMB component of the data contained in these images. I’m not sure that will work, however, and it’s probably just a matter of days before some ninny posts a half-baked paper on the arXiv claiming that the standard cosmological model is all wrong and that the Universe is actually the shape of a vuvuzela. (This would require only a small modification of an earlier suggestion.)

These images are of course primarily for PR purposes, but there’s nothing wrong with that. Apart from being beautiful in their own right, they demonstrate that Planck is actually working and that the results it will eventually produce should be well worth waiting for!

Oh, nearly forgot to mention that the excellent Jonathan Amos has written a nice piece about this on the BBC Website too.

Science Examination Blues

Posted in Education, The Universe and Stuff on June 16, 2010 by telescoper

I woke up this morning …

.. to the 7am news on BBC Radio 3, including a story about how GCSE science examinations are not “sufficiently rigorous”. Then, on Twitter, I saw an example of an Edexcel GCSE (Multiple-choice) Physics paper.  It’s enough to make any practising physicist weep.

Most of the questions are very easy, but there are just as many that are so sloppily put together that they don’t make any sense at all. Take Question 1:

I suppose the answer is meant to be C, but since it doesn’t say that A is the orbit of a planet, as far as I’m concerned, it might just as well be D. Are we meant to eliminate D simply because it doesn’t have another orbit going through it?

On the other hand, the orbit of a moon around the Sun is in fact similar to the orbit of its planet around the Sun, since the orbital speed and radius of the moon around its planet are smaller than those of the planet around the Sun. At a push, therefore, you could argue that A is the closest choice to a moon’s orbit around the Sun. The real thing would be something close to a circle with a 4-week wobble superposed.

You might say I’m being pedantic, but the whole point of exam questions is that they shouldn’t be open to ambiguities like this, at least if they’re science exams. I can imagine bright and knowledgeable students getting thoroughly confused by this question, and many of the others on the paper.

Here’s a couple more, from the “Advanced” section:

The answer to Q30 is, presumably, A. But do any scientists really think that galaxies are “moving away from the origin of the Big Bang”?  I’m worried that this implies that the Big Bang was located at a specific point. Is that what they’re teaching?

Bearing in mind that only one answer is supposed to be right, the answer to Q31 is presumably D. But is there really no evidence from “nebulae” that supports the Big Bang theory? The expansion of the Universe was discovered by observing things Hubble called “nebulae”.

I’m all in favour of school students being introduced to fundamental things such as cosmology and particle physics, but my deep worry is that this is being done at the expense of learning any real physics at all and is in any case done in a garbled and nonsensical way.

Lest I be accused of an astronomy-related bias, anyone care to try finding a correct answer to this question?

The more of this kind of stuff I see, the more admiration I have for the students coming to study physics and astronomy at University. How they managed to learn anything at all given the dire state of science education in the UK is really quite remarkable.

Cosmology on its beam-ends?

Posted in Cosmic Anomalies, The Universe and Stuff on June 14, 2010 by telescoper

Interesting press release today from the Royal Astronomical Society about a paper (preprint version here) which casts doubt on whether the Wilkinson Microwave Anisotropy Probe supports the standard cosmological model to the extent that is generally claimed. Apologies if this is a bit more technical than my usual posts (but I like occasionally to pretend that it’s a science blog).

The abstract of the paper (by Sawangwit & Shanks) reads

Using the published WMAP 5-year data, we first show how sensitive the WMAP power spectra are to the form of the WMAP beam. It is well known that the beam profile derived from observations of Jupiter is non-Gaussian and indeed extends, in the W band for example, well beyond its 12.6′ FWHM core out to more than 1 degree in radius. This means that even though the core width corresponds to wavenumber l ~ 1800, the form of the beam still significantly affects the WMAP results even at l ~ 200, which is the scale of the first acoustic peak. The difference between the beam-convolved C_l and the final C_l is ~70% at the scale of the first peak, rising to ~400% at the scale of the second. New estimates of the Q, V and W-band beam profiles are then presented, based on a stacking analysis of the WMAP5 radio source catalogue and temperature maps. The radio sources show a significantly (3-4σ) broader beam profile on scales of 10′-30′ than that found by the WMAP team whose beam analysis is based on measurements of Jupiter. Beyond these scales the beam profiles from the radio sources are too noisy to give useful information. Furthermore, we find tentative evidence for a non-linear relation between WMAP and ATCA/IRAM 95 GHz source fluxes. We discuss whether the wide beam profiles could be caused either by radio source extension or clustering and find that neither explanation is likely. We also argue against the possibility that Eddington bias is affecting our results. The reasons for the difference between the radio source and the Jupiter beam profiles are therefore still unclear. If the radio source profiles were then used to define the WMAP beam, there could be a significant change in the amplitude and position of even the first acoustic peak. It is therefore important to identify the reasons for the differences between these two beam profile estimates.

The press release puts it somewhat more dramatically

New research by astronomers in the Physics Department at Durham University suggests that the conventional wisdom about the content of the Universe may be wrong. Graduate student Utane Sawangwit and Professor Tom Shanks looked at observations from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite to study the remnant heat from the Big Bang. The two scientists find evidence that the errors in its data may be much larger than previously thought, which in turn makes the standard model of the Universe open to question. The team publish their results in a letter to the journal Monthly Notices of the Royal Astronomical Society.

I dare say the WMAP team will respond in due course, but this paper spurred me to mention some work on this topic that was done by my friend (and former student) Lung-Yih Chiang. During his last visit to Cardiff we discussed this at great length and got very excited at one point when we thought we had discovered an error along the lines that the present paper claims. However, looking more carefully into it we decided that this wasn’t the case and we abandoned our plans to publish a paper on it.

Let me show you a few slides from a presentation that Lung-Yih gave to me a while ago. For a start here is the famous power-spectrum of the temperature fluctuations of the cosmic microwave background which plays an essential role in determining the parameters of the standard cosmology:

The position of the so-called “acoustic peak” plays an important role in determining the overall curvature of space-time on cosmological scales and the higher-order peaks pin down other parameters. However, it must be remembered that WMAP doesn’t just observe the cosmic microwave background. The signal it receives is heavily polluted by contamination from within our Galaxy and there is also significant instrumental noise.  To deal with this problem, the WMAP team exploit the five different frequency channels with which the probe is equipped, as shown in the picture below.

The CMB, being described by a black-body spectrum, has a sky temperature that doesn’t vary with frequency. Foreground emission, on the other hand, has an effective temperature that varies with frequency in way that is fairly well understood. The five available channels can therefore be used to model and subtract the foreground contribution to the overall signal. However, the different channels have different angular resolution (because they correspond to different wavelengths of radiation). Here are some sample patches of sky illustrating this

At each frequency the sky is blurred out by the “beam” of the WMAP optical system; the blurring is worse at low frequencies than at high frequencies. In order to do the foreground subtraction, the WMAP team therefore smooth all the frequency maps to have the same resolution, i.e. so the net effect of optical resolution and artificial smoothing produces the same overall blurring (actually 1 degree).  This requires accurate knowledge of the precise form of the beam response of the experiment to do it accurately. A rough example (for illustration only) is given in the caption above.

Now, here are the power spectra of the maps in each frequency channel

Note this is C_l not l(l+1)C_l as in the first plot of the spectrum. Now you see how much foreground there is in the data: the curves would lie on top of each other if the signal were pure CMB, i.e. if it did not vary with frequency. The equation at the bottom basically just says that the overall spectrum is a smoothed version of the CMB plus the foregrounds plus noise. Note, crucially, that the smoothing suppresses the interesting high-l wiggles.
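In symbols the relation is, schematically at least, something like

C_l^{obs} = B_l^2 (C_l^{CMB} + C_l^{fg}) + N_l,

where B_l is the beam window function, C_l^{fg} the foreground contribution and N_l the noise spectrum (my notation; the exact form is the one in the figure).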

I haven’t got space-time enough to go into how the foreground subtraction is carried out, but once it is done it is necessary to “unblur” the maps in order to see the structure at small angular scales, i.e. at large spherical harmonic numbers l. The initial process of convolving the sky pattern with a filter corresponds to multiplying the power-spectrum with a “window function” that decreases sharply at high l, so to deconvolve the spectrum one essentially has to divide by this window function to reinstate the power removed at high harmonics.

This is where it all gets very tricky. The smoothing applied is very close to the scale of the acoustic peaks so you have to do it very carefully to avoid introducing artificial structure in Cl or obliterating structure that you want to see. Moreover, a small error in the beam gets blown up in the deconvolution so one can go badly wrong in recovering the final spectrum. In other words, you need to know the beam very well to have any chance of getting close to the right answer!
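If you want a feel for the numbers, here’s a rough sketch assuming, purely for illustration, a Gaussian beam (which, as I say below, the real WMAP beam is not). The 12.6 arcmin figure is the W-band FWHM mentioned in the abstract above; the “true” 13.0 arcmin width is made up for the sake of the example.

```python
# Illustration of how a small beam error is amplified by deconvolution:
# the recovered spectrum is the true C_l times (b_true / b_assumed)^2.
import numpy as np

def beam_window(l, fwhm_arcmin):
    """Window function b_l for a Gaussian beam of the given FWHM."""
    sigma = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-0.5 * l * (l + 1) * sigma**2)

l = np.array([200, 500, 800])       # first acoustic peak and beyond
b_true = beam_window(l, 13.0)       # hypothetical "true" beam
b_assumed = beam_window(l, 12.6)    # beam assumed in the deconvolution

# Fractional error in the recovered C_l grows rapidly with l
for li, err in zip(l, (b_true / b_assumed) ** 2 - 1.0):
    print(f"l = {li}: recovered C_l off by {100 * err:+.1f}%")
```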

The next picture gives a rough model for how much the “recovered” spectrum depends on the error produced by making even a small error in the beam profile which, for illustration only, is assumed to be Gaussian. It also shows how sensitive the shape of the deconvolved spectrum is to small errors in the beam.

Incidentally, the ratty blue line shows the spectrum obtained from a small patch of the sky rather than the whole sky. We were interested to see how much the spectrum varied across the sky so broke it up into square patches about the same size as those analysed by the Boomerang experiment. This turns out to be a pretty good way of getting the acoustic peak position but, as you can see, you lose information at low l (i.e. on scales larger than the patch).

The WMAP beam isn’t actually Gaussian – it differs quite markedly in its tails, which means that there’s even more cross-talk between different harmonic modes than in this example – but I hope you get the basic point. As Sawangwit & Shanks say, you need to know the beam very well to get the right fluctuation spectrum out. Move the acoustic peak around only slightly and all bets are off about the cosmological parameters and, perhaps, the evidence for dark energy and dark matter. Lung-Yih looked at the way the WMAP had done it and concluded that if their published beam shape was right then they had done a good job and there’s nothing substantially wrong with the results shown in the first graph.

Sawangwit & Shanks suggest the beam isn’t right so the recovered angular spectrum is suspect. I’ll need to look a bit more at the evidence they consider before commenting on that, although if anyone else has worked through it I’d be happy to hear from them through the comments box!

Alternative Galaxy Dynamics Examination

Posted in Education, The Universe and Stuff on June 12, 2010 by telescoper

Time Allowed: ~1/H_0

Study the following video and answer the questions below it. Or else.

1. Use the information provided about the Earth’s orbital speed to estimate the mass of the Sun. (Assume a circular orbit; 1 AU is 1.5 × 10^11 m. A sketch of the arithmetic for this and Q2 follows the paper.)

2. Use the information provided about the Sun’s motion around the Galactic Centre to estimate the total mass interior to the Sun’s orbit. (Assume a circular orbit and that the mass distribution is spherically symmetric; you may quote Newton’s shell theorem without proof.)

3. Use the answer to Q2, and other information provided in the video, to estimate the mean matter density in the Milky Way.

4. Use the information provided about the size, shape and stellar content of the Milky Way to estimate the mean number-density of stars interior to the Sun’s orbit.

5. Use the answers to Q3 & Q4 to estimate the mean mass-to-light ratio of the Galaxy.
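In case the video ever disappears, here’s a rough sketch of the sort of arithmetic Q1 and Q2 are after, with standard textbook values standing in for the figures quoted in the video:

```python
# For a circular orbit, equating gravitational and centripetal
# acceleration gives M = v^2 r / G.
G = 6.674e-11        # m^3 kg^-1 s^-2

# Q1: mass of the Sun from the Earth's orbit
v_earth = 3.0e4      # Earth's orbital speed, ~30 km/s
r_earth = 1.5e11     # 1 AU in metres
M_sun = v_earth**2 * r_earth / G
print(f"Q1: M_sun ~ {M_sun:.1e} kg")

# Q2: mass interior to the Sun's orbit (by the shell theorem, the
# interior mass acts as if concentrated at the Galactic Centre)
v_sun = 2.2e5        # Sun's orbital speed, ~220 km/s
r_sun = 2.5e20       # ~8 kpc in metres
M_interior = v_sun**2 * r_sun / G
print(f"Q2: M(<r) ~ {M_interior:.1e} kg, i.e. ~{M_interior / M_sun:.0e} solar masses")
```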

Cauchy Statistics

Posted in Bad Statistics, The Universe and Stuff on June 7, 2010 by telescoper

I was attempting to restore some sort of order to my office today when I stumbled across some old jottings about the Cauchy distribution, which is perhaps more familiar to astronomers as the Lorentz distribution. I never used them in the publication they related to, so I thought I’d just quickly pop the main idea on here in the hope that some amongst you might find it interesting and/or amusing.

What sparked this off is that the simplest cosmological models (including the particular one we now call the standard model) assume that the primordial density fluctuations we see imprinted in the pattern of temperature fluctuations in the cosmic microwave background and which we think gave rise to the large-scale structure of the Universe through the action of gravitational instability, were distributed according to Gaussian statistics (as predicted by the simplest versions of the inflationary universe theory).  Departures from Gaussianity would therefore, if found, yield important clues about physics beyond the standard model.

Cosmology isn’t the only place where Gaussian (normal) statistics apply. In fact they arise  generically,  in circumstances where variation results from the linear superposition of independent influences, by virtue of the Central Limit Theorem. Noise in experimental detectors is often treated as following Gaussian statistics, for example.

The Gaussian distribution has some nice properties that make it possible to place meaningful bounds on the statistical accuracy of measurements made in the presence of Gaussian fluctuations. For example, we all know that the margin of error of the determination of the mean value of a quantity from a sample of n independent Gaussian-distributed measurements varies as 1/\sqrt{n}; the larger the sample, the more accurately the global mean can be known. In the cosmological context this is basically why mapping a larger volume of space can lead, for instance, to a more accurate determination of the overall mean density of matter in the Universe.

However, although the Gaussian assumption often applies it doesn’t always apply, so if we want to think about non-Gaussian effects we have to think also about how well we can do statistical inference if we don’t have Gaussianity to rely on.

That’s why I was playing around with the peculiarities of the Cauchy distribution. This comes up in a variety of real physics problems so it isn’t an artificially pathological case. Imagine you have two independent variables X and Y each of which has a Gaussian distribution with zero mean and unit variance. The ratio Z=X/Y has a probability density function of the form

p(z)=\frac{1}{\pi(1+z^2)},

which is a form of the Cauchy distribution. There’s nothing at all wrong with this as a distribution – it’s not singular anywhere and integrates to unity as a pdf should. However, it does have a peculiar property that none of its moments is finite, not even the mean value!

Following on from this property is the fact that Cauchy-distributed quantities violate the Central Limit Theorem. If we take n independent Gaussian variables then the distribution of the sum X_1+X_2+\ldots+X_n has the normal form, but this is also true (for large enough n) for the sum of n independent variables having any distribution as long as it has finite variance.

The Cauchy distribution has infinite variance so the distribution of the sum of independent Cauchy-distributed quantities Z_1+Z_2+\ldots+Z_n doesn’t tend to a Gaussian. In fact the distribution of the sum of any number of independent Cauchy variates is itself a Cauchy distribution. Moreover the distribution of the mean of a sample of size n does not depend on n for Cauchy variates. This means that taking a larger sample doesn’t reduce the margin of error on the mean value!
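If you want to see this pathology for yourself, here’s a quick numerical demonstration (a sketch using NumPy; the Cauchy variates are generated as ratios of independent standard normals, exactly as above):

```python
# Running sample means of Cauchy variates never settle down, whereas
# Gaussian sample means tighten as 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
n = 100000

x, y = rng.standard_normal(n), rng.standard_normal(n)
cauchy = x / y                    # standard Cauchy variates, Z = X/Y
gauss = rng.standard_normal(n)

for m in (100, 1000, 10000, 100000):
    print(f"n = {m:6d}:  Gaussian mean = {gauss[:m].mean():+.4f}, "
          f"Cauchy mean = {cauchy[:m].mean():+.3f}")
```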

This was essentially the point I made in a previous post about the dangers of using standard statistical techniques – which usually involve the Gaussian assumption – to distributions of quantities formed as ratios.

We cosmologists should be grateful that we don’t seem to live in a Universe whose fluctuations are governed by Cauchy, rather than (nearly) Gaussian, statistics. Measuring more of the Universe wouldn’t be any use in determining its global properties as we’d always be dominated by cosmic variance.