Archive for the The Universe and Stuff Category

Cranks Anonymous

Posted in Biographical, Books, Talks and Reviews, The Universe and Stuff on September 22, 2009 by telescoper

Sean Carroll, blogger-in-chief at Cosmic Variance, has ventured abroad from his palatial Californian residence and is currently slumming it in a little town called Oxford where he is attending a small conference in celebration of the 70th birthday of George Ellis. In fact he’s been posting regular live commentaries on the proceedings which I’ve been following with great interest. It looks an interesting and unusual meeting because it involves both physicists and philosophers and it is based around a series of debates on topics of current interest. See Sean’s posts here, here and here for expert summaries of the three days of the meeting.

Today’s dispatches included an account of George’s own talk which appears to have involved delivering a polemic against the multiverse, something he has been known to do from time to time. I posted something on it myself, in fact. I don’t think I’m as fundamentally opposed as George is to the idea that we might live in a bit of space-time that may belong to some sort of larger collection in which other bits have different properties, but it does bother me how many physicists talk about the multiverse as if it were an established fact. There certainly isn’t any observational evidence that this is true and the theoretical arguments usually advanced are far from rigorous. The multiverse certainly is a fun thing to think about; I just don’t think it’s really needed.

There is one red herring that regularly floats into arguments about the multiverse, and that concerns testability. Different bits of the multiverse can’t be observed directly by an observer in a particular place, so it is often said that the idea isn’t testable. I don’t think that’s the right way to look at it. If there is a compelling physical theory that can account convincingly for a realised multiverse then that theory really should have other necessary consequences that are testable, otherwise there’s no point. Test the theory in some other way and you test whether the  multiverse emanating from it is sound too.

However, that fairly obvious statement isn’t really the point of this piece. As I was reading Sean’s blog post for today you could have knocked me down with a feather when I saw my name crop up:

Orthodoxy is based on the beliefs held by elites. Consider the story of Peter Coles, who tried to claim back in the 1990’s that the matter density was only 30% of the critical density. He was threatened by a cosmological bigwig, who told him he’d be regarded as a crank if he kept it up. On a related note, we have to admit that even scientists base beliefs on philosophical agendas and rationalize after the fact. That’s often what’s going on when scientists invoke “beauty” as a criterion.

George was actually talking about a paper we co-wrote for Nature in which we went through the different arguments that had been used to estimate the average density of matter in the Universe, tried to weigh up which were the more reliable, and came to the conclusion that the answer was in the range 20 to 40 percent of the critical density. There was a considerable theoretical prejudice at the time, especially from adherents of  inflation, that the density should be very close to the critical value, so we were running against the crowd to some extent. I remember we got quite a lot of press coverage at the time and I was invited to go on Radio 4 to talk about it, so it was an interesting period for me. Working with George was a tremendous experience too.

I won’t name the “bigwig” George referred to, although I will say it was a theorist; it’s more fun for those working in the field to guess for themselves! Opinions among other astronomers and physicists were divided. One prominent observational cosmologist was furious that we had criticized his work (which had yielded a high value of the density). On the other hand, Martin Rees (now “Lord” but then just plain “Sir”) said that he thought we were pushing at an open door and was surprised at the fuss.

Later on, in 1996, we expanded the article into a book in which we covered the ground more deeply but came to the same conclusion as before.  The book and the article it was based on are now both very dated because of the huge advances in observational cosmology over the last decade. However, the intervening years have shown that we were right in our assessment: the standard cosmology has about 30% of the critical density.

Of course there was one major thing we didn’t anticipate which was the discovery in the late 1990s of dark energy which, to be fair, had been suggested by others more prescient than us as early as 1990. You can’t win ’em all.

So that’s the story of my emergence as a crank, a title to which I’ve tried my utmost to do justice since then. Actually, I would have liked to have had the chance to go to George’s meeting in Oxford, primarily to greet my erstwhile collaborator whom I haven’t seen for ages. But it was invitation-only. I can’t work out whether these days I’m too cranky or not cranky enough to get to go to such things. Looking at the reports of the talks, I rather think it could be the latter.

Now, anyone care to risk the libel laws and guess who Professor BigWig was?

Astrostats

Posted in Bad Statistics, The Universe and Stuff on September 20, 2009 by telescoper

A few weeks ago I posted an item on the theme of how gambling games were good for the development of probability theory. That piece  contained a mention of one astronomer (Christiaan Huygens), but I wanted to take the story on a little bit to make the historical connection between astronomy and statistics more explicit.

Once the basics of mathematical probability had been worked out, it became possible to think about applying probabilistic notions to problems in natural philosophy. Not surprisingly, many of these problems were of astronomical origin but, on the way, the astronomers that tackled them also derived some of the basic concepts of statistical theory and practice. Statistics wasn’t just something that astronomers took off the shelf and used; they made fundamental contributions to the development of the subject itself.

The modern subject we now know as physics really began in the 16th and 17th centuries, although at that time it was usually called Natural Philosophy. The greatest early work in theoretical physics was undoubtedly Newton’s great Principia, published in 1687, which presented his theory of universal gravitation; this, together with his famous three laws of motion, enabled him to account for the orbits of the planets around the Sun. But majestic though Newton’s achievements undoubtedly were, I think it is fair to say that the originator of modern physics was Galileo Galilei.

Galileo wasn’t as much of a mathematical genius as Newton, but he was highly imaginative, versatile and (very much unlike Newton) had an outgoing personality. He was also an able musician, fine artist and talented writer: in other words a true Renaissance man.  His fame as a scientist largely depends on discoveries he made with the telescope. In particular, in 1610 he observed the four largest satellites of Jupiter, the phases of Venus and sunspots. He immediately leapt to the conclusion that not everything in the sky could be orbiting the Earth and openly promoted the Copernican view that the Sun was at the centre of the solar system with the planets orbiting around it. The Catholic Church was resistant to these ideas. He was hauled up in front of the Inquisition and placed under house arrest. He died in the year Newton was born (1642).

These aspects of Galileo’s life are probably familiar to most readers, but hidden away among his scientific manuscripts and notebooks is an important first step towards a systematic method of statistical data analysis. Galileo performed numerous experiments, though he almost certainly did not carry out the one with which he is most commonly credited. He did establish that the speed at which bodies fall is independent of their weight, not by dropping things off the leaning tower of Pisa but by rolling balls down inclined slopes. In the course of his numerous forays into experimental physics Galileo realised that however carefully he took his measurements, the simplicity of the equipment available to him left him with quite large uncertainties in some of the results. He was able to estimate the accuracy of his measurements using repeated trials and sometimes ended up with a situation in which some measurements had larger estimated errors than others. This is a common occurrence in many kinds of experiment to this day.

Very often the problem we have in front of us is to measure two variables in an experiment, say X and Y. It doesn’t really matter what these two things are, except that X is assumed to be something one can control or measure easily and Y is whatever it is the experiment is supposed to yield information about. In order to establish whether there is a relationship between X and Y one can imagine a series of experiments where X is systematically varied and the resulting Y measured.  The pairs of (X,Y) values can then be plotted on a graph like the example shown in the Figure.

[Figure: example plot of measured (X,Y) pairs]

In this example it certainly looks like there is a straight line linking Y and X, but with small deviations above and below the line caused by the errors in measurement of Y. You could quite easily take a ruler and draw a line of “best fit” by eye through these measurements. I spent many a tedious afternoon in the physics labs doing this sort of thing when I was at school. Ideally, though, what one wants is some procedure for fitting a mathematical function to a set of data automatically, without requiring any subjective intervention or artistic skill. Galileo found a way to do this. Imagine you have a set of pairs of measurements (xi,yi) to which you would like to fit a straight line of the form y=mx+c. One way to do it is to find the line that minimizes some measure of the spread of the measured values around the theoretical line. The way Galileo did this was to work out the sum of the differences between the measured yi and the predicted values mxi+c at the measured values x=xi. He used the absolute difference |yi-(mxi+c)| so that the resulting optimal line would, roughly speaking, have as many of the measured points above it as below it. This general idea is now part of the standard practice of data analysis, and as far as I am aware, Galileo was the first scientist to grapple with the problem of dealing properly with experimental error.
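
Just to make this concrete, here’s a little sketch in Python of fitting a straight line by minimising the summed absolute deviations. It captures the spirit of Galileo’s idea rather than his actual procedure, and the data, slope, intercept and noise level are all made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up example data: x is the controlled variable, y the noisy measurement.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)

def total_absolute_deviation(params, x, y):
    """Sum of |y_i - (m*x_i + c)| for a candidate line y = m*x + c."""
    m, c = params
    return np.sum(np.abs(y - (m * x + c)))

# Find the line that minimises the summed absolute deviations
# (Nelder-Mead copes fine with the non-smooth objective).
result = minimize(total_absolute_deviation, x0=[1.0, 0.0], args=(x, y),
                  method="Nelder-Mead")
m_fit, c_fit = result.x
print(f"best-fit line: y = {m_fit:.2f} x + {c_fit:.2f}")
```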

[Figure: measurements scattered about a fitted straight line, illustrating the errors]

The method used by Galileo was not quite the best way to crack the puzzle, but he had it almost right. It was again an astronomer who provided the missing piece and gave us essentially the same method used by statisticians (and astronomers) today.

Carl Friedrich Gauss was undoubtedly one of the greatest mathematicians of all time, so it might be objected that he wasn’t really an astronomer. Nevertheless he was director of the Observatory at Göttingen for most of his working life and was a keen observer and experimentalist. In 1809, he developed Galileo’s ideas into the method of least-squares, which is still used today for curve fitting.

This approach follows basically the same procedure but minimizes the sum of [yi-(mxi+c)]² rather than |yi-(mxi+c)|. This leads to a much more elegant mathematical treatment of the resulting deviations – the “residuals”. Gauss also did fundamental work on the mathematical theory of errors in general. The normal distribution is often called the Gaussian curve in his honour.
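
For comparison, here’s the least-squares version on the same kind of made-up data. For a straight line the minimisation can be done directly (numpy’s polyfit handles it), which is one reason the squared-deviation approach is so convenient in practice.

```python
import numpy as np

# The same kind of made-up data as before.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)

# Least squares: minimise the sum of [y_i - (m*x_i + c)]^2.
# For a straight line, numpy's polyfit solves this directly.
m_ls, c_ls = np.polyfit(x, y, deg=1)

# The "residuals" are the deviations of the data from the fitted line.
residuals = y - (m_ls * x + c_ls)
print(f"least-squares line: y = {m_ls:.2f} x + {c_ls:.2f}")
print(f"sum of squared residuals: {np.sum(residuals**2):.2f}")
```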

After Galileo, the development of statistics as a means of data analysis in natural philosophy was dominated by astronomers. I can’t possibly go systematically through all the significant contributors, but I think it is worth devoting a paragraph or two to a few famous names.

I’ve already mentioned Jakob Bernoulli, whose famous book on probability was probably written during the 1690s. But Jakob was just one member of an extraordinary Swiss family that produced at least 11 important figures in the history of mathematics. Among them was Daniel Bernoulli who was born in 1700. Along with the other members of his famous family, he had interests that ranged from astronomy to zoology. He is perhaps most famous for his work on fluid flows which forms the basis of much of modern hydrodynamics, especially Bernoulli’s principle, which accounts for changes in pressure as a gas or liquid flows along a pipe of varying width.
But the elder Jakob’s work on gambling clearly also had some effect on Daniel, as in 1735 the younger Bernoulli published an exceptionally clever study involving the application of probability theory to astronomy. It had been known for centuries that the orbits of the planets are confined to the same part of the sky as seen from Earth, a narrow band called the Zodiac. This is because the Earth and the planets orbit in approximately the same plane around the Sun. The Sun’s path in the sky as the Earth revolves also follows the Zodiac. We now know that the flattened shape of the Solar System holds clues to the processes by which it formed from a rotating cloud of cosmic debris that settled into a disk from which the planets eventually condensed, but this idea was not well established in the time of Daniel Bernoulli. He set himself the challenge of working out the probability that the planets would be found orbiting in nearly the same plane simply by chance, rather than because some physical process confined them to the plane of a protoplanetary disk. His conclusion? The odds against the inclinations of the planetary orbits being aligned by chance were, well, astronomical.
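
I don’t know the details of how Daniel Bernoulli actually set up his calculation, but a crude modern version of the argument runs as follows: if each planet’s orbital pole pointed in a random direction on the sky, the chance of it lying within a small angle of a chosen reference pole is just the fractional area of the corresponding spherical cap. The numbers below (a 7-degree spread, and the five planets other than the Earth that were known in 1735) are my own illustrative assumptions, not his.

```python
import numpy as np

# Illustrative only: not Daniel Bernoulli's actual 1735 calculation.
theta_deg = 7.0        # assumed spread of planetary orbital inclinations
n_planets = 5          # planets other than the Earth known in 1735

# If a planet's orbital pole pointed in a random direction, the chance of
# it lying within theta of a chosen reference pole is the fractional area
# of the spherical cap of angular radius theta.
theta = np.radians(theta_deg)
cap_fraction = (1.0 - np.cos(theta)) / 2.0

p_chance = cap_fraction ** n_planets
print(f"probability of chance alignment ~ {p_chance:.1e}")
print(f"odds against ~ {1.0 / p_chance:,.0f} to 1")
```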

The next “famous” figure I want to mention is not at all as famous as he should be. John Michell was a Cambridge graduate in divinity who became a village rector near Leeds. His most important idea was the suggestion he made in 1783 that sufficiently massive stars could generate such a strong gravitational pull that light would be unable to escape from them.  These objects are now known as black holes (although the name was coined much later by John Archibald Wheeler). In the context of this story, however, he deserves recognition for his use of a statistical argument that the number of close pairs of stars seen in the sky could not arise by chance. He argued that they had to be physically associated, not fortuitous alignments. Michell is therefore credited with the discovery of double stars (or binaries), although compelling observational confirmation had to wait until William Herschel’s work of 1803.
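
Michell’s reasoning can be given a similarly rough modern flavour (again, the numbers are my assumptions, not his): scatter a plausible number of bright stars at random over the sky and work out how many pairs closer than some small separation you would expect purely by chance. The answer comes out far smaller than one, so finding even a handful of close pairs is strong evidence of physical association.

```python
import numpy as np

# A crude modern version of Michell's argument; the numbers are assumptions.
n_stars = 1500            # roughly the number of bright naked-eye stars
sep_arcsec = 10.0         # what counts as a "close pair"

sep_rad = np.radians(sep_arcsec / 3600.0)
cap_area = 2.0 * np.pi * (1.0 - np.cos(sep_rad))  # steradians around a star
sky_area = 4.0 * np.pi

# Expected chance pairs = (number of distinct pairs) x (probability that the
# second star of a pair happens to fall inside the small cap around the first).
n_pairs = n_stars * (n_stars - 1) / 2.0
expected_chance_pairs = n_pairs * cap_area / sky_area
print(f"close pairs expected by chance: {expected_chance_pairs:.4f}")
```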

It is impossible to overestimate the importance of the role played by Pierre Simon, Marquis de Laplace, in the development of statistical theory. His book A Philosophical Essay on Probabilities, which began as an introduction to a much longer and more mathematical work, contains probably the first complete framework for the calculation and interpretation of probabilities ever to appear in print. First published in 1814, it is astonishingly modern in outlook.

Laplace began his scientific career as an assistant to Antoine Laurent Lavoisier, one of the founding fathers of chemistry. Laplace’s most important work was in astronomy, specifically in celestial mechanics, which involves explaining the motions of the heavenly bodies using the mathematical theory of dynamics. In 1796 he proposed the theory that the planets were formed from a rotating disk of gas and dust, which is in accord with the earlier assertion by Daniel Bernoulli that the planetary orbits could not be randomly oriented. In 1776 Laplace had also figured out a way of determining the average inclination of the planetary orbits.

A clutch of astronomers, including Laplace, also played important roles in the establishment of the Gaussian or normal distribution. I have already mentioned Gauss’s own part in this story, but other famous astronomers also played their parts. The importance of the Gaussian distribution owes a great deal to a mathematical property called the Central Limit Theorem: the distribution of the sum of a large number of independent variables tends to have the Gaussian form. Laplace in 1810 proved a special case of this theorem, and Gauss himself also discussed it at length.

A general proof of the Central Limit Theorem was finally furnished in 1838 by another astronomer, Friedrich Wilhelm Bessel – best known to physicists for the functions named after him – who in the same year was also the first man to measure a star’s distance using the method of parallax. Finally, the name “normal” distribution was coined in 1850 by another astronomer, John Herschel, son of William Herschel.
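
The theorem is easy to see in action numerically. Here’s a tiny demonstration (nothing to do with the historical story, just an illustration): add up a few dozen independent uniform random numbers many times over and the distribution of the sums comes out very close to Gaussian, with the mean and variance you would predict.

```python
import numpy as np

# Sum up n_terms independent uniform random numbers, many times over.
rng = np.random.default_rng(1)
n_terms = 50
n_samples = 100_000
sums = rng.uniform(0.0, 1.0, size=(n_samples, n_terms)).sum(axis=1)

# For a sum of n standard uniforms the mean is n/2 and the variance n/12;
# a histogram of `sums` shows the familiar bell-shaped Gaussian curve.
print(f"sample mean {sums.mean():.2f} (theory {n_terms / 2:.2f})")
print(f"sample variance {sums.var():.2f} (theory {n_terms / 12:.2f})")
```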

I hope this gets the message across that the histories of statistics and astronomy are very much linked. Aspiring young astronomers are often dismayed when they enter research to discover just how much statistics they are expected to do. I’ve often complained that physics and astronomy education at universities usually includes almost nothing about statistics, even though that is the one thing you can guarantee to use as a researcher in practically any branch of the subject.

Over the years, statistics has become regarded as slightly disreputable by many physicists, perhaps echoing Rutherford’s comment along the lines of “If your experiment needs statistics, you ought to have done a better experiment”. That’s a silly statement anyway because all experiments have some form of error that must be treated statistically, but it is particularly inapplicable to astronomy which is not experimental but observational. Astronomers need to do statistics, and we owe it to the memory of all the great scientists I mentioned above to do our statistics properly.

A Well Placed Lecture

Posted in The Universe and Stuff on September 18, 2009 by telescoper

I noticed that the UK government has recently dropped its ban on product placement in television programmes. I wanted to take this opportunity to state Virgin Airlines that I will not be taking this as a Carling cue to introduce subliminal Coca Cola advertising of any Corby Trouser Press form into this blog.

This week I’ve been giving Marks and Spencer lectures every AIG afternoon to groups of 200 sixth form Samsung students on the subject of the Burger King Big Bang. The talks seemed to go down BMW quite well although I had Betfair trouble sometimes cramming all the Sainsbury things I wanted to talk about in the Northern Rock 30 minutes I was allotted. Anyway, I went through the usual stuff about the Carlsberg cosmic microwave background (CMB), even showing the noise on a Sony television screen to explain that a bit of the Classic FM signal came from the edge of the Next Universe.  The CMB played an Emirates important role in the talk as it is the Marlboro smoking gun of the Big Bang and established our Standard Life model of L’Oreal cosmology.

The timing of these lectures was Goodfella’s Pizza excellent because I was able to include Crown Paints references to the Hubble Ultra Deep Kentucky Fried Chicken Field and the Planck First Direct initial results that I’ve blogged about in the past week or so.

Now that’s all over, Thank God It’s Friday and  I’m getting ready to go to the Comet Sale Now On Opera. ..

First Light from Planck!

Posted in The Universe and Stuff on September 17, 2009 by telescoper

Credit to Andrew Jaffe for alerting me to the fact that ESA’s first press release concerning Planck has now been, well, released…

I last blogged about Planck when it had reached its orbit around L2 and cooled down to its working temperature of 100 milliKelvin. Over the ensuing weeks it has been tested and calibrated, prodded and poked (electronically of course) and generally tuned up. More recently it has completed a “mini-survey” to check that it’s all working as planned.

The way Planck scans means that it takes about six months to cover the whole sky, which is much longer than the two-week period allowed for the mini-survey. This explains the fact that a relatively narrow slice of the celestial sphere has been mapped. However, you can see the foreground emission from the Galactic plane quite clearly. Here is the region shown in the box split into the nine separate frequency channels that Planck observes:

The High Frequency Instrument (HFI) is more sensitive to dust, while the Low Frequency Instrument (LFI) detects more radio emission. It all seems to be working as expected!

And finally here’s a blow-up of the smaller square above the Galactic plane as seen by LFI and HFI:

This region is much less prone to foreground emission. The fact that similar structures are seen in the two completely independent receivers shows that the structure is not just instrument noise. In other words, Planck is seeing the cosmic microwave background!

Now Planck will carry out its full survey, scanning the sky for another year or so. There will then be an intense period of data analysis for about another year after which the key science results will be published. Exciting times.

Lessening Anomalies

Posted in Cosmic Anomalies, The Universe and Stuff on September 15, 2009 by telescoper

An interesting paper caught my eye on today’s ArXiv and I thought I’d post something here because it relates to an ongoing theme on this blog about the possibility that there might be anomalies in the observed pattern of temperature fluctuations in the cosmic microwave background (CMB). See my other posts here, here, here, here and here for related discussions.

One of the authors of the new paper, John Peacock, is an occasional commenter on this blog. He was also the Chief Inquisitor at my PhD (or rather DPhil) examination, which took place 21 years ago. The four-and-a-half hours of grilling I went through that afternoon reduced me to a gibbering wreck but the examiners obviously felt sorry for me and let me pass anyway. I’m not one to hold a grudge so I’ll resist the temptation to be churlish towards my erstwhile tormentor.

The most recent paper is about the possible contribution of the integrated Sachs-Wolfe (ISW) effect to these anomalies. The ISW mechanism generates temperature variations in the CMB because photons travel along the line of sight through a time-varying gravitational potential between the last-scattering surface and the observer. The integrated effect is zero if the potential does not evolve, because the energy a photon gains falling into a well is exactly balanced by the energy it loses climbing back out. If the well gets a bit deeper while the photon is in transit, however, there is a net contribution.

The specific thing about the ISW effect that makes it measurable is that the temperature variations it induces should correlate with the pattern of structure in the galaxy distribution, as it is these that generate the potential fluctuations through which CMB photons travel. Francis & Peacock try to assess the ISW contribution using data from the 2MASS all-sky survey of galaxies. This in itself contains important cosmological clues but in the context of this particular question it is a nuisance, like any other foreground contamination, so they subtract it off the maps obtained from the Wilkinson Microwave Anisotropy Probe (WMAP) in an attempt to get a cleaner map of the primordial CMB sky.

The results are shown in the picture below, which presents the lowest-order spherical harmonic modes, the quadrupole (left) and octopole (right), for the ISW component (top), the WMAP data (middle) and, at the bottom, the cleaned CMB sky (i.e. the middle minus the top). The ISW subtraction doesn’t make a huge difference to the visual appearance of the CMB maps but it is enough to substantially reduce the statistical significance of at least some of the reported anomalies I mentioned above. This reinforces how careful we have to be in analysing the data before jumping to cosmological conclusions.

[Figure: quadrupole (left) and octopole (right) maps for the ISW component (top), the WMAP data (middle) and the cleaned CMB sky (bottom)]
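
For what it’s worth, the basic operation involved in this sort of cleaning is conceptually very simple. The sketch below is only schematic (it is not the analysis Francis & Peacock actually performed, the filenames are placeholders rather than real data products, and the two maps are assumed to be at the same resolution): subtract the estimated ISW template from the CMB map and compare the low-order multipoles before and after.

```python
import healpy as hp

# Schematic only -- not the actual Francis & Peacock analysis.
# The filenames are placeholders for a WMAP temperature map and an ISW
# template built from the 2MASS galaxy distribution; both are assumed
# to be HEALPix maps at the same resolution.
wmap_map = hp.read_map("wmap_temperature_map.fits")    # hypothetical input
isw_map = hp.read_map("isw_template_from_2mass.fits")  # hypothetical input

# "Cleaning" here just means subtracting the estimated ISW contribution.
cleaned_map = wmap_map - isw_map

# Compare the lowest-order multipoles before and after the subtraction.
cl_before = hp.anafast(wmap_map, lmax=10)
cl_after = hp.anafast(cleaned_map, lmax=10)
print(f"quadrupole (l=2): {cl_before[2]:.3e} -> {cl_after[2]:.3e}")
print(f"octopole   (l=3): {cl_before[3]:.3e} -> {cl_after[3]:.3e}")
```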

There should also be a further contribution from fluctuations beyond the depth of the 2MASS survey (about 0.3 in redshift).  The actual ISW effect could therefore  be significantly larger than this estimate.

Back Early…

Posted in The Universe and Stuff on September 11, 2009 by telescoper

As a very quick postscript to my previous post about the amazing performance of Hubble’s spanking new camera, let me just draw attention to a fresh paper on the ArXiv by Rychard Bouwens and collaborators, which discusses the detection of galaxies with redshifts around 8 in the Hubble Ultra Deep Field (shown below in an earlier image) using WFC3/IR observations that reveal galaxies fainter than the previous detection limits.

Amazing. I remember the days when a redshift z=0.5 was a big deal!

To put this in context and to give some idea of its importance, remember that the redshift z is defined in such a way that 1+z is the factor by which the wavelength of light is stretched out by the expansion of the Universe. Thus, a photon from a galaxy at redshift 8 started out on its journey towards us (or, rather, the Hubble Space Telescope) when the Universe was compressed in all directions relative to its present size by a factor of 9. The average density of stuff then was a factor 9³ = 729 larger, so the Universe was a much more crowded place then than it is now.

Translating the redshift into a time is trickier because it requires us to know how the expansion rate of the Universe varies with cosmic epoch. That requires solving the equations of a cosmological model or, more realistically for a Friday afternoon, plugging the numbers into Ned Wright’s famous cosmology calculator.

Using the best-estimate parameters for the current concordance cosmology reveals that at redshift 8, the Universe was only about 0.65 billion years old (i.e. light from the distant galaxies seen by HST set out only 650 million years after the Big Bang). Since the current age of the Universe is about 13.7 billion years (according to the same model), this means that the light Hubble detected set out on its journey towards us an astonishing 13 billion years ago.
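
If you don’t fancy the online calculator, the numbers are easy enough to reproduce yourself. Here’s a quick sketch using astropy’s cosmology module, assuming a flat ΛCDM model with round-number concordance parameters (H0 = 70 km/s/Mpc and a matter density of 0.3, my choices rather than the exact values used above).

```python
from astropy.cosmology import FlatLambdaCDM

# Assumed round-number concordance parameters, not the exact ones above.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

z = 8.0
age_at_z = cosmo.age(z)            # age of the Universe at that redshift
age_now = cosmo.age(0.0)           # present age of the Universe
lookback = cosmo.lookback_time(z)  # light travel time from redshift z

print(f"age at z = {z}: {age_at_z:.2f}")       # roughly 0.6 Gyr
print(f"present age: {age_now:.2f}")           # roughly 13.5 Gyr
print(f"light travel time: {lookback:.2f}")    # roughly 12.8 Gyr
```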

More importantly for theories of galaxy formation and evolution, this means that at least some galaxies must have formed very early on, relatively speaking, within the first 5% of the time the Universe has been around.

These observations are by no means certain as the redshifts have been determined only approximately using photometric techniques rather than the more accurate spectroscopic methods, but if they’re correct they could be extremely important.

At the very least they provide even stronger motivation for getting on with the next-generation space telescope, JWST.

Atlantes

Posted in Science Politics, The Universe and Stuff on September 10, 2009 by telescoper

I’ve just noticed a post on another blog about the meeting of the Herschel ATLAS consortium that’s going on in Cardiff at the moment, so I thought I’d do a quickie here too. Actually I’ve only just been accepted into the Consortium, so many of the goings-on are quite new to me.

The Herschel ATLAS (or H-ATLAS for short) is the largest open-time key project involving Herschel. It has been awarded 600 hours of observing time to survey 550 square degrees of sky in 5 wavelength bands: 110, 170, 250, 350, & 500 microns. It is hoped to detect approximately 250,000 galaxies, most of them in the nearby Universe, but some will undoubtedly turn out to be very distant, with redshifts of 3 to 4; these are likely to be very interesting for studies of galaxy evolution.

Herschel is currently in its performance verification (PV) phase, following which there will be a period of science validation (SV). During the latter the ATLAS team will have access to some observational data to have a quick look to see that it’s  behaving as anticipated. It is planned to publish a special issue of the journal Astronomy & Astrophysics next year that will contain key results from the SV phase, although in the case of ATLAS many of these will probably be quite preliminary because only a small part of the survey area will be sampled during the SV time.

Herschel seems to be doing fine, with the possible exception of the HIFI instrument which is currently switched off owing to a fault in its power supply. There is a backup, but the ESA boffins don’t want to switch it back on and risk further complications until they know why it failed in the first place. The problem with HIFI has led to some rejigging of the schedule for calibrating and testing the other two instruments (SPIRE and PACS) but both of these are otherwise doing well.

The data for H-ATLAS proper hasn’t started arriving yet, so the meeting here in Cardiff was intended to sort out the preparations, plan who’s going to do what, and settle some organisational issues. With well over a hundred members, this project has to think seriously about quite a lot of administrative and logistical matters.

One of the things that struck me as particularly difficult is the issue of authorship of science papers. In observational astronomy and cosmology we’re now getting used to the situation that has prevailed in experimental particle physics for some time, namely that even short papers have author lists running into the hundreds. Theorists like me usually work in teams too, but our author lists are, generally speaking, much shorter. In fact I don’t have any publications yet with more than six or seven authors; mine are often just by me and a PhD student or postdoc.

In a big consortium, the big issue is not so much who to include, but how to give appropriate credit to the different levels of contribution. Those senior scientists who organized and managed the survey are clearly key to its success, but so also are those who work at the coalface and are probably much more junior. In between there are individuals who supply bits and pieces of specialist software or extra comparison data. Nobody can pretend that everyone in a list of 100 authors has made an identical contribution, but how can you measure the differences and how can you indicate them on a publication? Or  shouldn’t you try?

Some suggest that author lists should always be alphabetical, which is fine if you’re “Aarseth” but not if you’re “Zel’dovich”. This policy would, however, benefit “al”, a prolific collaborator who never seems to make it as first author..

When astronomers write grant applications for STFC one of the pieces of information they have to include is a table summarising their publication statistics. The total number of papers written has  to be given, as well as the number in which the applicant  is  the first author on the list,  the implicit assumption being that first authors did more work than the others or that first authors were “leading” the work in some sense.

Since I have a permanent job and  students and postdocs don’t, I always make junior collaborators  first author by default and only vary that policy if there is a specific reason not to. In most cases they have done the lion’s share of the actual work anyway, but even if this is not the case it is  important for them to have first author papers given the widespread presumption that this is a good thing to have on a CV.

With more than 100 authors, and a large number of  collaborators vying for position, the chances are that junior people will just get buried somewhere down the author list unless there is an active policy to protect their interests.

Of course everyone making a significant contribution to a discovery has to be credited, and the metric that has been used for many years to measure scientific productivity is the number of authored publications, but it does seem to me that this system must have reached breaking point when author lists run to several pages!

It was all a lot easier in the good old days when there was no data…

PS. Atlas was a titan who was forced to hold the sky on his shoulders for all eternity. I hope this isn’t expected of members of the ATLAS consortium, none of whom are titans anyway (as far as I can tell). The plural of Atlas is Atlantes, by the way.

Hubble Flash

Posted in The Universe and Stuff on September 9, 2009 by telescoper

Just a quick post to point out that brand new “Early Release” images have just appeared following the recent refurbishment of the Hubble Space Telescope.

You can read the accompanying press release here, so I’ll just post this brief description:

These four images are among the first observations made by the new Wide Field Camera 3 aboard the upgraded NASA Hubble Space Telescope.

The image at top left shows NGC 6302, a butterfly-shaped nebula surrounding a dying star. At top right is a picture of a clash among members of a galactic grouping called Stephan’s Quintet. The image at bottom left gives viewers a panoramic portrait of a colorful assortment of 100,000 stars residing in the crowded core of Omega Centauri, a giant globular cluster. At bottom right, an eerie pillar of star birth in the Carina Nebula rises from a sea of greenish-colored clouds.

My own favourite has to be Stephan’s Quintet, but they all look pretty fantastic.

Cosmic Haiku

Posted in Poetry, The Universe and Stuff on September 6, 2009 by telescoper

I haven’t had much time to post today and will probably be too busy next week for anything too substantial, so I thought I’d resort to a bit of audience participation. How about a few Haiku on themes connected to astronomy, cosmology or physics?

Don’t be worried about making the style of your contributions too authentic, just make sure they are 17 syllables in total, and split into three lines of 5, 7 and 5 syllables respectively.

Here’s a few of my own to give you an idea!

Quantum Gravity:
The troublesome double-act
Of Little and Large

Gravity’s waves are
Traceless; which does not mean they
Can never be found

The Big Bang wasn’t
So big, at least not when you
Think in decibels.

Cosmological
Constant and Dark Energy
Are vacuous names

Microwave Background
Photons remember a time
When they were hotter

Isotropic and
Homogeneous metric?
Robertson-Walker

Galaxies evolve
In a complicated way
We don’t understand

Acceleration:
Type Ia Supernovae
Gave us the first clue

Cosmic Inflation
Could have stretched the Universe
And made it flatter

Astrophysicist
Is what I’m told is my Job
Title. Whatever.

Contributions welcome via the comments box. The best one gets a chance to win Bully’s star prize.

Game Theory

Posted in Bad Statistics, Books, Talks and Reviews, The Universe and Stuff on September 5, 2009 by telescoper

Nowadays gambling is generally looked down on as something shady and disreputable, not to be discussed in polite company, or even to be banned altogether. However, the formulation of the basic laws of probability was almost exclusively inspired by their potential application to games of chance. Once established, these laws found a much wider range of applications in scientific contexts, including my own field of astronomy. I thought I’d illustrate this connection with a couple of examples. You may think that I’m just trying to make excuses for the fact that I also enjoy the odd bet every now and then!

Gambling in various forms has been around for millennia. Sumerian and Assyrian archaeological sites are littered with examples of a certain type of bone, called the astragalus (or talus bone). This is found just above the heel and its shape (in sheep and deer at any rate) is such that when it is tossed in the air it can land in any one of four possible orientations. It can therefore be used to generate “random” outcomes and is in many ways the forerunner of modern six-sided dice. The astragalus is known to have been used for gambling games as early as 3600 BC.


Unlike modern dice, which appeared around 2000BC, the astragalus is not symmetrical, giving a different probability of it landing in each orientation. It is not thought that there was a mathematical understanding of how to calculate odds in games involving this object or its more symmetrical successors.

Games of chance also appear to have been commonplace in the time of Christ – Roman soldiers are supposed to have drawn lots at the crucifixion, for example – but there is no evidence of any really formalised understanding of the laws of probability at this time.

Playing cards emerged in China sometime during the tenth century AD and were available in Western Europe by the 14th century. This is an interesting development because playing cards can be used for games such as contract Bridge which involve a great deal of pure skill as well as an element of randomness. Perhaps it is this aspect that finally got serious intellectuals (i.e. physicists) excited about probability theory.

The first book on probability that I am aware of was by Gerolamo Cardano. His Liber de Ludo Aleae (Book on Games of Chance) was published in 1663, but it was written more than a century earlier than this date. Probability theory really got going in 1654 with a famous correspondence between the mathematicians Blaise Pascal and Pierre de Fermat, sparked off by a gambling addict named Antoine Gombaud, who styled himself the “Chevalier de Méré” (although he wasn’t actually a nobleman of any sort). The Chevalier de Méré had played a lot of dice games in his time and, although he didn’t have a rigorous mathematical theory of how they worked, he nevertheless felt he had an intuitive “feel” for what was a good bet and what wasn’t. In particular, he had done very well financially by betting at even money that he would roll at least one six in four rolls of a standard die.

It’s quite an easy matter to use the rules of probability to see why he was successful with this game. The probability that a single roll of a fair die yields a six is 1/6. The probability that it does not yield a six is therefore 5/6. The probability that four independent rolls produce no sixes at all is (the probability that the first roll is not a six) times (the probability that the second roll is not a six) times (the probability that the third roll is not a six) times (the probability that the fourth roll is not a six). Each of the probabilities involved in this multiplication is 5/6, so the result is (5/6)⁴, which is 625/1296. But this is the probability of losing. The probability of winning is 1-625/1296 = 671/1296 = 0.5177, significantly higher than 50%. Since you’re more likely to win than lose, it’s a good bet.

So successful had this game been for de Méré that nobody would bet against him any more, and he had to think of another bet to offer. Using his “feel” for the dice, he reckoned that betting on one or more double-six in twenty-four rolls of a pair of dice at even money should also be a winner. Unfortunately for him, he started to lose heavily on this game and in desperation wrote to his friend Pascal to ask why. This set Pascal wondering, and he in turn started a correspondence about it with Fermat.

This strange turn of events led not only to the beginnings of a general formulation of probability theory, but also to the binomial distribution and the beautiful mathematical construction now known as Pascal’s Triangle.

The full story of this is recounted in the fascinating book shown above, but the immediate upshot for de Méré was that he abandoned this particular game.

To see why, just consider each throw of a pair of dice as a single “event”. There are 36 possible events corresponding to six possible outcomes on each of the dice (6×6=36). The probability of getting a double six in such an event is 1/36 because only one of the 36 events corresponds to two sixes. The probability of not getting a double six is therefore 35/36. The probability that a set of 24 independent fair throws of a pair of dice produces no double-sixes at all is therefore 35/36 multiplied by itself 24 times, or (35/36)²⁴. This is 0.5086, which is slightly higher than 50%. The probability that at least one double-six occurs is therefore 1-0.5086, or 0.4914. Our Chevalier has a less than 50% chance of winning, so an even money bet is not a good idea, unless he plans to use this scheme as a tax dodge.
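
The arithmetic behind both bets is easy to check, and a quick simulation confirms it. In the sketch below the number of simulated games is arbitrary; everything else follows directly from the calculations above.

```python
import numpy as np

# Exact probabilities for the Chevalier de Mere's two even-money bets.
p_first = 1.0 - (5.0 / 6.0) ** 4        # at least one six in 4 rolls
p_second = 1.0 - (35.0 / 36.0) ** 24    # at least one double-six in 24 rolls
print(f"first bet:  P(win) = {p_first:.4f}")    # ~0.5177, a winning bet
print(f"second bet: P(win) = {p_second:.4f}")   # ~0.4914, a losing bet

# Quick Monte Carlo sanity check of the second game.
rng = np.random.default_rng(0)
n_games = 200_000                                   # arbitrary
rolls = rng.integers(1, 7, size=(n_games, 24, 2))   # 24 throws of two dice
won = np.all(rolls == 6, axis=2).any(axis=1)        # any double-six per game
print(f"simulated:  P(win) = {won.mean():.4f}")
```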

Both Fermat and Pascal had made important contributions to many diverse aspects of scientific thought in addition to pure mathematics, including physics, but the first real astronomer to contribute to the development of probability in the context of gambling was Christiaan Huygens, the man who discovered the rings of Saturn in 1655. Two years after his famous astronomical discovery, he published a book called Calculating in Games of Chance, which introduced the concept of expectation. However, the major development of the statistical theory underlying games and gambling came with the publication in 1713 of Jakob Bernoulli’s wonderful treatise entitled Ars Conjectandi, which did a great deal to establish the general mathematical theory of probability and statistics.