Archive for the The Universe and Stuff Category

Launch Party

Posted in The Universe and Stuff with tags , on May 14, 2009 by telescoper

The Big Day has finally arrived!

I’ve managed to submit my paper to the journal and the ArXiv before the little shindig we’ve been planning for the Planck and Herschel launch gets under way at 1pm. Business as usual so far, though.

Strangely, I haven’t managed to get nervous yet, although I have to say there are many anxious faces around the department. I just keep telling people how much simpler their lives are going to be if it all goes wrong, without all that messy and unnecessarily complicated data to deal with. It bothers me sometimes that I don’t often get nervous except when watching sport. Mind you, being a Newcastle United supporter probably makes me more nervous more often than most people.

Anyway, at times like this a stiff upper lip is obviously called for. Anyone who cracks now is clearly not officer material. There’ll be plenty of time for panic later on.

It’s now about 12.45 and the launch is scheduled for 14.12.  With impeccable timing, the First Minister of Wales, Rhodri Morgan, is due to arrive in the department at 14.30. I hope he doesn’t think it’s going to be delayed especially for him. I also hope we’re not all in tears when he gets here.

We’re going to be watching on a big screen via a satellite downlink. Not quite as good as being there in person, but probably better than watching it on the net (which you can do here).

Anyway, I can hear the wine bottles being opened so I’m going to barge my way to the front of the queue, feigning nerves in order to justify a calming tipple.

I’ll be back later to complete the story, for better or worse.

Fingers crossed. TTFN.

…………

 

Well here I am back from the do. It all seemed to go pretty well, although I wasn’t paying attention at the exact time of the launch – opening a bottle of wine – so I failed to get nervous even then. As far as I can tell the launch went like clockwork – or at least like Newtonian Mechanics – and the ground station even managed to handshake with both satellites after separation.

I was particularly impressed to see that ESA had roped in affable compère and media god Des Lynam to provide expertise in his accustomed role as TV anchor man, although for some reason he was operating under the pseudonym of David Southwood.

Anyway, all seems to be set fair. I’m delighted. It will be a while before we get any science results, as it takes several weeks to get to L2. I’m looking forward to first light from Herschel fairly soon, but science from Planck will be a while coming and, even when it does arrive, the information will be strictly controlled.

Anyway, in case you missed it here’s the liftoff!

P.S. We had a few bottles of special Herschel wine. Vintage 2001 Rioja, full-bodied and uncompromising. Not to everyone’s taste. I quite liked it but I was already quite drunk.

Unravelling CERN

Posted in Science Politics, The Universe and Stuff with tags , , on May 13, 2009 by telescoper

A disturbing piece of news passed me by last week. One of the founder members, Austria, has decided to pull out of CERN, the home of the much-vaunted Large Hadron Collider. The announcement was made on 8th May 2009, but I missed it at the time owing to my trip to Berlin.

Austria has been a member of the 20-nation body since 1959, but its justification for leaving, according to Austria’s Minister for Science Johannes Hahn, is that the CERN subscription ties up about 70% of the nation’s budget for international research. To quote him:

“In the meantime there have been diverse research projects in the European Union which offer a very large number of different scientists’ perspectives..”

Austria only contributes 2.2 percent of CERN’s budget, but it will be the first country to leave the organization since Spain’s departure in 1969. Spain rejoined in 1983. According to a statement,

“CERN would be sorry to lose Austria as one of its member states and sincerely believes that it would be in Austria’s best interests to remain a member..”

The immediate consequence of this will be a (small) increase in the subscriptions payable by the other member nations in order to plug the funding gap left by Austria’s departure. However, particle physicists will probably see this as a very worrying precedent, one that might encourage other funding bodies to think the previously unthinkable and follow Austria’s example.

The CERN subscription payable by the United Kingdom comes from the budget of the Science and Technology Facilities Council (STFC). It amounts to about £82 million, which is about 16% of the STFC budget — a much smaller fraction than in the case of Austria. However, the consequences of one of the larger contributors like the UK pulling out of CERN would be extremely serious, because of the large increases in the remaining subscriptions that would be needed to fill the gap that would be created.

All this puts even more pressure on the Large Hadron Collider to produce the goods and it also reinforces the view I expressed in one of my first ever blog posts that we may be nearing the time when nations decide that Big Science is just too expensive and  too esoteric to be worth investing in…

STOP PRESS: News just in from Thomas (below) reveals that the Austrians have done a U-bahn U-turn and are not, after all, going to pull out of CERN.

For more information, see the story in Physics World.

Planckety-Planck

Posted in The Universe and Stuff with tags , on May 12, 2009 by telescoper

With the launch of Planck and Herschel only two days away, excitement is reaching fever pitch. As the countdown inches slowly towards the moment of reckoning the tension mounts…

This post would have been a bit more exciting if all that had been true. Of course we now do have a definite launch window for Planck, 14th May 2009. The launch window opens at 14.12 BST and will remain open for about two hours. Let’s hope they manage to get the thing up in that time, otherwise there’ll be yet another substantial delay.

Planck will be launched with its sister-mission, Herschel. They will both be carried by an Ariane 5 rocket from the European Space Agency’s launch site in Kourou, French Guiana. Within half an hour of launch, Planck and Herschel will separate and start on their journeys. While both satellites are going to orbit the second Lagrangian Point (L2), they will have slightly different orbits. It will take Planck around six weeks to get to L2, during which time it will start to cool down its cryogenic systems. Eventually it will be the coolest thing in space.

Of course that is all very exciting, but it would have been a lie to say that the excitement is mounting that much back here at home. What with the undergraduate examination period being upon us, the department is extremely quiet, and those who are most nervous have taken their jitters to South America. The fact is that most of the people directly involved with Planck or Herschel have been invited to the launch and have either already made their way there or have at least set out on their journeys to the jolly.

We do, however, plan to have a small function here to mark the launch on Thursday, with wine and nibbles and talks about the science. I hope it’s not tempting fate. I’m not exactly nervous myself, but I will probably get butterflies as we watch the launch on the net. Still, there’ll be wine to steady our nerves…

I remember very well the “launch”, in 1996, of a mission called Cluster in which many of my colleagues at Queen Mary were heavily involved. This was the first flight of Ariane-5. Bugs in the software meant it lost control shortly after launch and the party very soon turned into a wake, although the resulting fireworks were quite spectacular.

Because the Ariane-5 vehicle was brand new, and somewhat untested, the European Space Agency had decided to take advantage of an offer to launch the mission without charge. This seemed like a good deal because the costs of putting an experiment in space are a sizeable fraction of the overall budget for such missions. It turned out, though, that the old expression was true. There’s no such thing as a free launch.

In fact, Cluster did eventually fly, using flight spares and a launch on a Russian rocket. If Planck and Herschel go boom then there’s no way they can be replaced. It would be a terrible thing if this happened, for a large number of reasons, but Ariane-5 has launched many times since then, and I’m confident that both Planck and Herschel will soon be safely on their way to L2.

But don’t expect any science immediately, especially not from Planck. It will be years before the key science results emerge and, until then, the science team is sworn to secrecy….

Space Experiments

Posted in Art, Biographical, The Universe and Stuff with tags , , on May 9, 2009 by telescoper

I’ve been disconnected from the blogosphere for a few days, a consequence of a very interesting trip to Berlin from which I’ve just returned.

When I received an invitation a few months ago to give a lecture on cosmology at the Institut für Raumexperimente (Institute for Space Experiments), I first thought that the “space experiment” concerned would be the forthcoming Planck mission, which is now firmly scheduled for launch on the afternoon of 14th May 2009. However, the institute I visited is in fact part of the Universität der Künste Berlin (Berlin University of the Arts). It’s a new project run by Olafur Eliasson, a famous artist and a Professor at the University, and I was one of a series of guests invited to talk to the students about various aspects of space and time. Olafur was one of the people behind the Experiment Marathon in Reykjavik, which took place almost exactly a year ago, and he’d decided to invite me to his new institute here and now as a result of my contribution there and then.

I was quite apprehensive about doing this because I’m really extremely ignorant about art, and didn’t want to appear too much of a philistine. I therefore decided to prepare a talk that was focussed strongly on the science but with just one or two references to works of art.  It turned out that the artist Matthew Ritchie was also around and keen to participate so we decided to do a joint presentation.

The eminent art historian Caroline Jones from MIT also sat in, contributing to the discussion and adding her own insights along the way.

Matthew spoke first about how art can draw ideas and inspiration from scientific thought and argued that this was especially relevant today when science is so full of strange and wonderful concepts. Along the way he demonstrated an unexpectedly deep understanding of subjects such as thermodynamics, relativity and quantum theory.

I then took over and talked about cosmology, trying to focus on the interplay between theory and observation in order to convey some sort of idea of how the process of science actually works in this field. I was particularly keen to get across the idea that we haven’t made scientific progress in cosmology by merely looking and recording. We have needed to build theoretical frameworks to help us interpret what we see and to plan new observations.

Although we’d only discussed things for a few minutes before the event, as it turned out the two talks dovetailed rather nicely, I think.

When I had finished, Matthew rounded things off by showing some of his own works, which are complex, multi-faceted, multi-media creations: evocations of, and responses to, ideas often, but not exclusively, arising from theoretical physics. The photograph above shows one of his installations. I haven’t seen his work up close, but it struck me as astonishingly inventive while at the same time possessing a great unity. His works are extremely diverse but they all seem to have a very distinctive signature all his own.

After the talks and lots of discussion we adjourned for a nice dinner in a local bistro with some of the students who carried on asking about various bits of physics, such as the possible existence of  closed timelike curves. I was delighted by the intensity of their curiosity, which went far beyond that displayed by most physics students!

These days there seem to be quite a lot of initiatives aimed at promoting a dialogue between art and science although most of them don’t seem to be very successful. Science and art are obviously quite different types of activity. Each is also surrounded by a discursive penumbra of metaphors and simplifications that attempts to articulate what is going on inside the field to those outside. Not all artists try to explain their work in this way and neither do all scientists. Often the result is that the arts-science dialogue is simply a coming together of relatively superficial interpretations that does not really bring the core domains any closer. What is particularly impressive about Matthew Ritchie is that he does seem to have deeper insights into science than many artists and he responds to those insights in a way that is highly original.

The other thing that struck me after taking part in this event was the difference between art as a process and the products of that process, the “works of art”. Similar processes are involved in making art as in doing science, such as problem-solving about how to implement an idea in a painting, a sculpture or an equation. What differs is that works of art are, to a greater or lesser extent, consumable by the general public while those of science are not.

The invitation to do this talk also gave me the chance to take a trip down the Unter den Linden of my memory. I’ve actually been to Berlin twice before: once, about 25 years ago, when I was a student, and then again in the early 90s when I attended a conference in Potsdam.

This time I stayed in a charming but rather antiquated hotel in the Prenzlauer Berg area of the city. Before 1989 this was in East Berlin, on the “wrong” side of the Berlin Wall. It had, however, escaped the total devastation that rained down on most of the rest of Berlin during the later stages of the war and it managed to retain much of its interesting architecture. After reunification it became a rather bohemian area and many artists set up studios there, which is presumably part of the reason my hosts had located there. Prenzlauer Berg had also been a major centre for Berlin’s sizeable beer-making industry. One of the larger breweries has now been transformed into an exciting arts centre called the Kulturbrauerei, and the Institut für Raumexperimente is itself also housed in buildings that were once part of a brewery. In fact, the whole area was built in the 19th century, itself a kind of space experiment, and still incorporates many features arising from its origins as an innovative piece of urban planning.

When I first came to the city of Berlin in 1985 I stayed in the West; with its ostentatiously exuberant and uninhibited nightlife, West Berlin was an amazing place to visit in those days. I did, however, have a pass to travel to the East for a day. I remember walking through Checkpoint Charlie, on Friedrichstrasse, after passing through Potsdamer Platz south of the Brandenburg Gate and looking eastwards across the strip of waste ground that had been levelled to create a killing zone for escapees coming in the other direction. The transition from affluent and colourful West Berlin to the dreary drabness of the East was like switching channels to find a black-and-white movie on view. It was also frightening because everywhere you looked there were guns pointed at you, especially on the return leg from East to West. I also remember thinking how much the shoddy and unimaginative postwar architecture of East Berlin reminded me of Wolverhampton.

The drastic social and political experiment that lay behind the Berlin Wall was ultimately a failure, but its legacy will only slowly vanish. There are still signs of it even today, almost twenty years after the Wall fell in a metaphorical sense.

This time I reversed my previous path, starting out in the East and walking to the West. This time both sides were in glorious colour. In fact, it was a lovely spring morning and there were tourists everywhere.

Very little of the wall now remains. When I came in the 90s, just  a few years after the momentous events of 1989, much of it was still intact although there was a big gap in the central section. The killing zone was a strip of rubble-strewn ground which it was possible to walk over without any real hindrance.  Hitler’s bunker was located there too, although its position wasn’t advertised for fear of it becoming some kind of grisly  shrine.

At that time the path of the wall through the city was easy to follow by eye, as it was marked by the tall cranes involved in massive construction projects aimed at removing the scar that the wall had carved across the face of the city.

Returning now to the same location, I found new buildings covering almost all of the old cold war stuff but, in between the offices and administrative buildings, there is also a sombre and very moving Memorial to the Murdered Jews of Europe. Checkpoint Charlie has gone too, of course, but its site is also marked by a museum. Elsewhere in the city only one or two pieces of the wall remain, the biggest one in Bernauer Strasse, not far from my hotel.

It was fascinating to see how the city is slowly renewing itself. There is still a huge amount of building going on, but it’s a wonderful city to move around and it’s very green. The wide boulevards give a tremendous sense of space, which contrasts enormously with the creeping claustrophobia of London.

Back from Berlin on Friday lunchtime I had time to pop into the RAS meeting and dine again at the RAS Club before returning on the late train back to Cardiff, bringing closure to a little space-like curve of my own. 

A short trip, but  fascinating and very enjoyable.

The Cosmic Tightrope

Posted in The Universe and Stuff with tags , , on May 3, 2009 by telescoper

Here’s a thought experiment for you.

Imagine you are standing outside a sealed room. The contents of the room are hidden from you, except for a small window covered by a curtain. You are told that you can open the curtain once and only briefly to take a peep at what is inside, and you may do this whenever you feel the urge.

You are told what is in the room. It is bare except for a tightrope suspended across it about two metres in the air. Inside the room is a man who at some time in the past – you’re not told when – began walking along the tightrope. His instructions were to carry on walking backwards and forwards along the tightrope until he falls off, either through fatigue or lack of balance. Once he falls he must lie motionless on the floor.

You are not told whether he is skilled in tightrope-walking or not, so you have no way of telling whether he can stay on the rope for a long time or a short time. Neither are you told when he started his stint as a stuntman.

What do you expect to see when you eventually pull the curtain?

Well, if the man does fall off at some time it will clearly take him a very short time to drop to the floor. Once there, he has to stay there. One outcome therefore appears very unlikely: that at the instant you open the curtain, you see him in mid-air between a rope and a hard place.

Whether you expect him to be on the rope or on the floor depends on information you do not have. If he is a trained circus artist, like the great Charles Blondin here, he might well be capable of walking to and fro along the tightrope for days. If not, he would probably only manage a few steps before crashing to the ground. Either way, it remains unlikely that you catch a glimpse of him in mid-air during his downward transit. Unless, of course, someone is playing a trick on you and has told the guy to jump when he sees the curtain move.

This probably seems to have very little to do with physical cosmology, but now forget about tightropes and think about the behaviour of the mathematical models that describe the Big Bang. To keep things simple, I’m going to ignore the cosmological constant and just consider how things depend on one parameter, the density parameter Ω0. This is basically the ratio of the present density of matter in the Universe to what it would have to be to cause the expansion of the Universe eventually to halt. To put it a slightly different way, it measures the total energy of the Universe. If Ω0>1 then the total energy of the Universe is negative: its (negative) gravitational potential energy dominates over the (positive) kinetic energy. If Ω0<1 then the total energy is positive: kinetic trumps potential. If Ω0=1 exactly then the Universe has zero total energy: energy is precisely balanced, like the man on the tightrope.

A key point, however, is that the trade-off between positive and negative energy contributions changes with time. The result of this is that Ω is not fixed at the same value forever, but changes with cosmic epoch; we use Ω0 to denote the value that it takes now, at cosmic time t0, but it changes with time.

All the Friedmann models begin, at the Big Bang itself, with Ω arbitrarily close to unity at arbitrarily early times; i.e. the limit of Ω as t tends to zero is 1.

If the Universe emerges from the Big Bang with a value of Ω just a tiny bit greater than one, it expands to a maximum size, at which point the expansion stops. During this process Ω grows without bound. Gravitational energy wins out over its kinetic opponent.

If, on the other hand, Ω sets out slightly less than unity – and I mean slightly, one part in 10^60 will do – the Universe evolves to a state where Ω is very close to zero. In this case kinetic energy is the winner and Ω ends up on the ground, mathematically speaking.

In the compromise situation with total energy zero, this exact balance always applies. The universe is always described by Ω=1. It walks the cosmic tightrope. But any small deviation early on results in runaway expansion or catastrophic recollapse. To get anywhere close to Ω=1 now – I mean even within a factor ten either way – the Universe has to be finely tuned.
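
The instability can be illustrated with a little numerical sketch of my own (not part of the original argument): in a matter-dominated Friedmann model the deviation of Ω from unity obeys (1/Ω − 1) ∝ a, where a is the scale factor normalized to 1 today. The function name and the chosen numbers are purely illustrative.

```python
# Matter-era scaling of the density parameter: (1/Omega - 1) grows in
# proportion to the scale factor a. We track the deviation d = 1/Omega - 1
# directly, since for interesting cases it is far below machine precision
# when expressed through Omega itself.

def omega_now(dev_early, a_early, a_now=1.0):
    """Evolve an early deviation d = 1/Omega - 1 to the present epoch."""
    dev_now = dev_early * (a_now / a_early)   # linear growth with a
    return 1.0 / (1.0 + dev_now)              # convert back to Omega

# An under-density of one part in 10^30 at scale factor a = 10^-30 ...
print(omega_now(1e-30, 1e-30))   # -> 0.5: only just survives near flatness
# ... but one part in 10^28 at the same epoch leaves an almost empty universe:
print(omega_now(1e-28, 1e-30))   # -> ~0.0099
```

Two early universes differing by a factor of only a hundred in their initial imbalance end up wildly different today, which is the fine-tuning problem in miniature.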

A slightly different way of describing this is to think instead about the radius of curvature of the Universe. In general relativity the curvature of space is determined by the energy (and momentum) density. If the Universe has zero total energy it is flat: it has no curvature at all, so its curvature radius is infinite. If it has negative total energy (Ω>1) the curvature is positive, in much the same way that a sphere has positive curvature. In the opposite case, with positive total energy, the curvature is negative, like a saddle. I’ve blogged about this before.

I hope you can now see how this relates to the curious case of the tightrope walker.

If the case Ω0=1 applied to our Universe then we could conclude that something trained it to have a fine sense of equilibrium. Without knowing anything about what happened at the initial singularity we might therefore be pre-disposed to assign some degree of probability to this being the case, just as we might be prepared to imagine that our room contained a skilled practitioner of the art of one-dimensional high-level perambulation.

On the other hand, we might equally suspect that the Universe started off slightly over-dense or slightly under-dense, at which point it should either have re-collapsed by now or have expanded so quickly as to be virtually empty.

About fifteen years ago, Guillaume Evrard and I tried to put this argument on firmer mathematical grounds by assigning a sensible prior probability to Ω based on nothing other than the assumption that our Universe is described by a Friedmann model.

The result we got was that it should be of the form

P(\Omega) \propto \Omega^{-1}(\Omega-1)^{-1}.

I was very pleased with this result, which is based on a principle advanced by physicist Ed Jaynes, but I have no space to go through the mathematics here. Note, however, that this prior has three interesting properties: it is infinite at Ω=0 and Ω=1, and it has a very long “tail” for very large values of Ω. It’s not a very well-behaved measure, in the sense that it can’t be integrated over, but that’s not an unusual state of affairs in this game. In fact it is an improper prior.
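
The impropriety near Ω=1 can be checked directly; here is a quick sketch of my own (not from the paper). The antiderivative of 1/[Ω(Ω−1)] is log[(Ω−1)/Ω], so the probability mass between 1+ε and, say, 2 grows without limit as ε shrinks. The function name is just for illustration.

```python
from math import log

def prior_mass(eps, upper=2.0):
    """Integral of the prior density 1/(x*(x-1)) over [1+eps, upper],
    evaluated analytically via the antiderivative log((x-1)/x)."""
    F = lambda x: log((x - 1.0) / x)
    return F(upper) - F(1.0 + eps)

# The mass accumulated near Omega = 1 diverges logarithmically as eps -> 0:
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, prior_mass(eps))
```

Each factor of 100 reduction in ε adds a fixed increment (about log 100 ≈ 4.6) to the integral, the signature of a logarithmic divergence, so no normalization is possible.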

I think of this prior as being the probabilistic equivalent of Mark Twain’s description of a horse:

dangerous at both ends, and uncomfortable in the middle.

Of course the prior probability doesn’t tell us all that much. To make further progress we have to make measurements, form a likelihood and then, like good Bayesians, work out the posterior probability. In fields where there is a lot of reliable data the prior becomes irrelevant and the likelihood rules the roost. We weren’t in that situation in 1995 – and we’re arguably still not – so we should still be guided, to some extent, by what the prior tells us.

The form we found suggests that we can indeed reasonably assign most of our prior probability to the three special cases I have described. Since we also know that the Universe is neither totally empty nor ready to collapse, it does indicate that, in the absence of compelling evidence to the contrary, it is quite reasonable to have a prior preference for the case Ω=1.  Until the late 1980s there was indeed a strong ideological preference for models with Ω=1 exactly, but not because of the rather simple argument given above but because of the idea of cosmic inflation.

From recent observations we now know, or think we know, that Ω is roughly 0.26. To put it another way, this means that the Universe has roughly 26% of the density it would need to have to halt the cosmic expansion at some point in the future. Curiously, this corresponds precisely to the unlikely or “fine-tuned” case where our Universe is in between two states in which we might have expected it to lie.

Even if you accept my argument that Ω=1 is a special case that is in principle possible, it is still the case that it requires the Universe to have been set up with very precisely defined initial conditions. Cosmology can always appeal to special initial conditions to get itself out of trouble because we don’t know how to describe the beginning properly, but it is much more satisfactory if properties of our Universe are explained by understanding the physical processes involved rather than by simply saying that “things are the way they are because they were the way they were.” The latter statement remains true, but it does not enhance our understanding significantly. It’s better to look for a more fundamental explanation because, even if the search is ultimately fruitless, we might turn over a few interesting stones along the way.

The reasoning behind cosmic inflation admits the possibility that, for a very short period in its very early stages, the Universe went through a phase where it was dominated by a third form of energy, vacuum energy. This forces the cosmic expansion to accelerate, which drastically changes the arguments I gave above. Without inflation the case Ω=1 is unstable: a slight perturbation sends the Universe diverging towards a Big Crunch or a Big Freeze. While inflationary dynamics dominate, however, this case behaves very differently. Not only is it stable, it becomes an attractor to which all possible universes converge. Whatever the pre-inflationary initial conditions, the Universe will emerge from inflation with Ω very close to unity. Inflation trains our Universe to walk the tightrope.
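
A back-of-the-envelope sketch of the attractor behaviour (my own, under the usual textbook assumption of exactly exponential expansion): during inflation H is roughly constant, so the deviation |Ω − 1| = |k|/(aH)² falls off as a⁻², i.e. by a factor e^(−2N) after N e-foldings.

```python
from math import exp

def deviation_after(dev_start, n_efolds):
    """Shrink |Omega - 1| by exp(-2N) during N e-folds of inflation
    (assumes H exactly constant, so a grows by exp(N))."""
    return dev_start * exp(-2.0 * n_efolds)

# Even an order-unity initial imbalance is flattened by 60 e-folds:
print(deviation_after(1.0, 60))   # ~8e-53
```

This is why the pre-inflationary value of Ω hardly matters: the canonical 60 or so e-folds leave the Universe flat to fantastically high precision, more than enough headroom against the one-part-in-10^60 tuning mentioned above.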

So how can we reconcile inflation with current observations that suggest a low matter density? The key to this question is that what inflation really does is expand the Universe by such a large factor that the curvature radius becomes infinitesimally small. If there is only “ordinary” matter in the Universe then this requires that the universe have the critical density. However, in Einstein’s theory the curvature is zero only if the total energy is zero. If there are other contributions to the global energy budget besides that associated with familiar material then one can have a low value of the matter density as well as zero curvature. The missing link is dark energy, and the independent evidence we now have for it provides a neat resolution of this problem.

Or does it? Although spatial curvature doesn’t really care about what form of energy causes it, it is surprising to some extent that the dark matter and dark energy densities are similar. To many minds this unexplained coincidence is a blemish on the face of an otherwise rather attractive structure.

It can be argued that there are initial conditions for non-inflationary models that lead to a Universe like ours. This is true. It is not logically necessary to have inflation in order for the Friedmann models to describe a Universe like the one we live in. On the other hand, it does seem to be a reasonable argument that the set of initial data that is consistent with observations is larger in models with inflation than in those without it. It is rational therefore to say that inflation is more probable to have happened than the alternative.

I am not totally convinced by this reasoning myself, because we still do not know how to put a reasonable measure on the space of possibilities existing prior to inflation. This would have to emerge from a theory of quantum gravity which we don’t have. Nevertheless, inflation is a truly beautiful idea that provides a framework for understanding the early Universe that is both elegant and compelling. So much so, in fact, that I almost believe it.

Space Time

Posted in Biographical, The Universe and Stuff with tags , , on April 30, 2009 by telescoper

I thought anyone reading my rather gloomy recent posts could probably do with a laugh, so I decided to put this up.

These clips contain a short item  I did about nine or ten years ago for the BBC series Space, which was presented by Sam Neill. Originally we were going to demonstrate wormholes using a snooker table, clever editing and reversed video. The producer, Jeremy,  decided that wouldn’t look spectacular enough so instead we went to St Anton in Austria: I was flown over the Alps in a helicopter and then driven through the Arlberg tunnel in an impressively fast car. Well worth the cost to license fee payers, I’m sure, even if the three-day trip to Austria by me and a crew of six as well as the hire of the helicopter ended up as a mere three minutes of screen time…

The episode I was in, the last of 6 in the series, was called To Boldly Go. I remember suggesting to the producer that the only way to travel faster than light in the manner required was with a split infinitive drive, but they didn’t use that in the final script.

Notice how, in the helicopter sequence, I give the appearance of being completely terrified. A fine piece of acting by me, I thought. *Cough*

Unfortunately my bit is quite a long way into the first clip, so you need to wait until about 09.00, and it runs over the join into the second clip.

The item is daft, I know, and I don’t really believe any of that stuff about wormholes… but it was great fun doing it.

The Doomsday Argument

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , on April 29, 2009 by telescoper

I don’t mind admitting that as I get older I get more and more pessimistic about the prospects for humankind’s survival into the distant future.

Unless there are major changes in the way it is governed, our planet may become barren and uninhabitable through war or environmental catastrophe. But I do think the future is in our hands, and disaster is, at least in principle, avoidable. In this respect I have to distance myself from a very strange argument that has been circulating among philosophers and physicists for a number of years. It is called the Doomsday argument, and it even has a sizeable wikipedia entry, to which I refer you for more details and variations on the basic theme. As far as I am aware, it was first introduced by the mathematical physicist Brandon Carter and subsequently developed and expanded by the philosopher John Leslie (not to be confused with the TV presenter of the same name). It also re-appeared in slightly different guise through a paper in the serious scientific journal Nature by the eminent physicist Richard Gott. Evidently, for some reason, some serious people take it very seriously indeed.

The Doomsday argument uses the language of probability theory, but it is such a strange argument that I think the best way to explain it is to begin with a more straightforward problem of the same type.

 Imagine you are a visitor in an unfamiliar, but very populous, city. For the sake of argument let’s assume that it is in China. You know that this city is patrolled by traffic wardens, each of whom carries a number on their uniform.  These numbers run consecutively from 1 (smallest) to T (largest) but you don’t know what T is, i.e. how many wardens there are in total. You step out of your hotel and discover traffic warden number 347 sticking a ticket on your car. What is your best estimate of T, the total number of wardens in the city?

 I gave a short lunchtime talk about this when I was working at Queen Mary College, in the University of London. Every Friday, over beer and sandwiches, a member of staff or research student would give an informal presentation about their research, or something related to it. I decided to give a talk about bizarre applications of probability in cosmology, and this problem was intended to be my warm-up. I was amazed at the answers I got to this simple question. The majority of the audience denied that one could make any inference at all about T based on a single observation like this, other than that it  must be at least 347.

 Actually, a single observation like this can lead to a useful inference about T, using Bayes’ theorem. Suppose we have really no idea at all about T before making our observation; we can then adopt a uniform prior probability. Of course there must be an upper limit on T. There can’t be more traffic wardens than there are people, for example. Although China has a large population, the prior probability of there being, say, a billion traffic wardens in a single city must surely be zero. But let us take the prior to be effectively constant. Suppose the actual number of the warden we observe is t. Now we have to assume that we have an equal chance of coming across any one of the T traffic wardens outside our hotel. Each value of t (from 1 to T) is therefore equally likely. I think this is the reason that my astronomers’ lunch audience thought there was no information to be gleaned from an observation of any particular value, i.e. t=347.

 Let us simplify this argument further by allowing two alternative “models” for the frequency of Chinese traffic wardens. One has T=1000, and the other (just to be silly) has T=1,000,000. If I find number 347, which of these two alternatives do you think is more likely? Think about the kind of numbers that occupy the range from 1 to T. In the first case, most of the numbers have 3 digits. In the second, most of them have 6. If there were a million traffic wardens in the city, it is quite unlikely you would find a random individual with a number as small as 347. If there were only 1000, then 347 is just a typical number. There are strong grounds for favouring the first model over the second, simply based on the number actually observed. To put it another way, we would be surprised to encounter number 347 if T were actually a million. We would not be surprised if T were 1000.
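This comparison is easy to make quantitative. Here is a minimal sketch in Python (the `likelihood` helper is just an illustrative name, not anyone's library function):

```python
# Compare two models for the total number of traffic wardens, T, given a
# single sighting of warden number 347. Under a model with T wardens each
# number from 1 to T is equally likely, so the likelihood of the sighting
# is 1/T (and zero if the observed number exceeds T).

def likelihood(t, T):
    """P(observe warden number t | there are T wardens in total)."""
    return 1.0 / T if 1 <= t <= T else 0.0

t = 347
ratio = likelihood(t, 1_000) / likelihood(t, 1_000_000)
print(ratio)  # ~1000: the sighting favours T=1000 a thousandfold over T=1,000,000
```

The likelihood ratio of a thousand to one is exactly the "strong grounds" described above.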

One can extend this argument to the entire range of possible values of T, and ask a more general question: if I observe traffic warden number t what is the probability I should assign to each value of T? The answer is found using Bayes’ theorem. The prior, as I assumed above, is uniform. The likelihood is the probability of the observation given the model. If I assume a value of T, the probability P(t|T) of each value of t (up to and including T) is just 1/T (since each of the wardens is equally likely to be encountered). Bayes’ theorem can then be used to construct the posterior probability P(T|t). Without going through all the nuts and bolts, I hope you can see that this probability will tail off for large T. Our observation of a (relatively) small value for t should lead us to suspect that T is itself (relatively) small. Indeed it’s a reasonable “best guess” that T=2t. This makes intuitive sense because the observed value of t then lies right in the middle of its range of possibilities.

Before going on, it is worth mentioning one other point about this kind of inference: it is not at all powerful. Note that the likelihood just varies as 1/T. That of course means that small values are favoured over large ones. But this probability is uniform in logarithmic terms. So although T=1000 is more probable than T=1,000,000, the range between 1000 and 10,000 is roughly as likely as the range between 1,000,000 and 10,000,000, assuming there is no prior information. So although it tells us something, it doesn’t actually tell us very much. Just like any probabilistic inference, there’s a chance that it is wrong, perhaps very wrong.
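The log-uniform behaviour of the posterior can be checked numerically. This is a sketch assuming t = 347, a flat prior, and an arbitrary illustrative cap of T_max = 10^7 (the `posterior_mass` helper is my own name for it):

```python
# Tabulate the posterior P(T | t), proportional to 1/T for T >= t under a
# flat prior, and compare the posterior mass in two logarithmic decades.

t, T_max = 347, 10**7   # T_max is an arbitrary cap, for illustration only
norm = sum(1.0 / T for T in range(t, T_max + 1))

def posterior_mass(lo, hi):
    """Posterior probability that lo <= T <= hi."""
    return sum(1.0 / T for T in range(lo, hi + 1)) / norm

print(posterior_mass(1_000, 10_000))          # each decade carries...
print(posterior_mass(1_000_000, 10_000_000))  # ...roughly the same weight
```

Both decades come out with almost identical posterior probability, which is exactly why the inference is so weak.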

What does all this have to do with Doomsday? Instead of traffic wardens, we want to estimate N, the number of humans that will ever be born. Following the same logic as in the example above, I assume that I am a “randomly” chosen individual drawn from the sequence of all humans to be born, in past, present and future. For the sake of argument, assume I am number n in this sequence. The logic I explained above should lead me to conclude that the total number N is not much larger than my number, n. For the sake of argument, assume that I am the one-billionth human to be born, i.e. n=1,000,000,000. There should not be many more than a few billion humans ever to be born. At the rate of current population growth, this means that not many more generations of humans remain to be born. Doomsday is nigh.

Richard Gott’s version of this argument is logically similar, but is based on timescales rather than numbers. If whatever thing we are considering begins at some time t_begin and ends at a time t_end, and if we observe it at a “random” time between these two limits, then our best estimate for its future duration is of the order of how long it has lasted up until now. Gott gives the example of Stonehenge[1], which was built about 4,000 years ago: we should expect it to last a few thousand years into the future. Actually, Stonehenge is a highly dubious example. It hasn’t really survived 4,000 years. It is a ruin, and nobody knows its original form or function. However, the argument goes that if we come across a building put up about twenty years ago, presumably we should think it will come down again (whether by accident or design) in about twenty years’ time. If I happen to walk past a building just as it is being finished, presumably I should hang around and watch its imminent collapse….

But I’m being facetious.

Following this chain of thought, we would argue that, since humanity has been around a few hundred thousand years, it should be expected to last a few hundred thousand years more. Doomsday is not quite as imminent as before, but in any case humankind is not expected to survive sufficiently long to, say, colonize the Galaxy.

You may reject this type of argument on the grounds that you do not accept my logic in the case of the traffic wardens. If so, I think you are wrong. I would say that if you accept all the assumptions entering into the Doomsday argument then it is an equally valid example of inductive inference. The real issue is whether it is reasonable to apply this argument at all in this particular case. There are a number of related examples that should lead one to suspect that something fishy is going on. Usually the problem can be traced back to the glib assumption that something is “random” when it is not, or when it is not clearly stated what that is supposed to mean.

There are around sixty million British people on this planet, of whom I am one. In contrast there are well over a billion Chinese. If I follow the same kind of logic as in the examples I gave above, I should be very perplexed by the fact that I am not Chinese. After all, the odds are more than 20:1 against me being British, aren’t they?

 Of course, I am not at all surprised by the observation of my non-Chineseness. My upbringing gives me access to a great deal of information about my own ancestry, as well as the geographical and political structure of the planet. This data convinces me that I am not a “random” member of the human race. My self-knowledge is conditioning information and it leads to such a strong prior knowledge about my status that the weak inference I described above is irrelevant. Even if there were a million million Chinese and only a hundred British, I have no grounds to be surprised at my own nationality given what else I know about how I got to be here.

This kind of conditioning information can be applied to history, as well as geography. Each individual is generated by its parents. Its parents were generated by their parents, and so on. The genetic trail of these reproductive events connects us to our primitive ancestors in a continuous chain. A well-informed alien geneticist could look at my DNA and categorize me as an “early human”. I simply could not be born later in the story of humankind, even if it does turn out to continue for millennia. Everything about me – my genes, my physiognomy, my outlook, and even the fact that I am bothering to spend time discussing this so-called paradox – is contingent on my specific place in human history. Future generations will know so much more about the universe and the risks to their survival that they won’t even discuss this simple argument. Perhaps we just happen to be living at the only epoch in human history in which we know enough about the Universe for the Doomsday argument to make some kind of sense, but too little to resolve it.

To see this in a slightly different light, think again about Gott’s timescale argument. The other day I met an old friend from school days. It was a chance encounter, and I hadn’t seen the person for over 25 years. In that time he had married, and when I met him he was accompanied by a baby daughter called Mary. If we were to take Gott’s argument seriously, this was a random encounter with an entity (Mary) that had existed for less than a year. Should I infer that this entity should probably only endure another year or so? I think not. Again, bare numerological inference is rendered completely irrelevant by the conditioning information I have. I know something about babies. When I see one I realise that it is an individual at the start of its life, and I assume that it has a good chance of surviving into adulthood. Human civilization is a baby civilization. Like any youngster, it has dangers facing it. But it is not doomed by the mere fact that it is young.

John Leslie has developed many different variants of the basic Doomsday argument, and I don’t have the time to discuss them all here. There is one particularly bizarre version, however, that I think merits a final word or two because it raises an interesting red herring. It’s called the “Shooting Room”.

 Consider the following model for human existence. Souls are called into existence in groups representing each generation. The first generation has ten souls. The next has a hundred, the next after that a thousand, and so on. Each generation is led into a room, at the front of which is a pair of dice. The dice are rolled. If the score is double-six then everyone in the room is shot and it’s the end of humanity. If any other score is shown, everyone survives and is led out of the Shooting Room to be replaced by the next generation, which is ten times larger. The dice are rolled again, with the same rules. You find yourself called into existence and are led into the room along with the rest of your generation. What should you think is going to happen?

Leslie’s argument is the following. Each generation not only has more members than the previous one, but also contains more souls than have ever existed up to that point. For example, the third generation has 1000 souls; the previous two had 10 and 100 respectively, i.e. 110 altogether. Roughly 90% of all humanity lives in the last generation. Whenever the last generation happens, there are bound to be more people in that generation than in all generations up to that point. When you are called into existence you should therefore expect to be in the last generation. You should consequently expect that the dice will show double-six and the celestial firing squad will take aim. On the other hand, if you think the dice are fair then each throw is independent of the previous one and a throw of double-six should have a probability of just one in thirty-six. On this basis, you should expect to survive. The odds are against the fatal score.
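A quick Monte Carlo sketch makes the tension explicit. The generation sizes and the 1/36 double-six rule are Leslie's; the 10,000 trials and the random seed are arbitrary choices of mine for illustration:

```python
import random

# Monte Carlo sketch of the Shooting Room. Generation k contains 10**k souls;
# a generation is shot only if two fair dice show double-six (probability 1/36).

random.seed(1)

def shooting_room():
    """Run one history; return (souls in the final generation, souls ever called)."""
    generation, called = 1, 0
    while True:
        size = 10 ** generation
        called += size
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            return size, called  # double-six: this generation is the last
        generation += 1

final_total = called_total = 0
for _ in range(10_000):
    final, called = shooting_room()
    final_total += final
    called_total += called

# Around 90% of all souls ever called end up in the fatal generation,
# even though each generation's chance of being shot is only 1/36.
print(final_total / called_total)
```

Both statements in the closing comment are true at once, which is the whole puzzle: the soul-counting view and the dice-throwing view answer different questions.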

 This apparent paradox seems to suggest that it matters a great deal whether the future is predetermined (your presence in the last generation requires the double-six to fall) or “random” (in which case there is the usual probability of a double-six). Leslie argues that if everything is pre-determined then we’re doomed. If there’s some indeterminism then we might survive. This isn’t really a paradox at all, simply an illustration of the fact that assuming different models gives rise to different probability assignments.

While I am on the subject of the Shooting Room, it is worth drawing a parallel with another classic puzzle of probability theory, the St Petersburg Paradox. This is an old chestnut to do with a purported winning strategy for Roulette. It was first proposed by Nicolas Bernoulli but famously discussed at greatest length by Daniel Bernoulli in the pages of Transactions of the St Petersburg Academy, hence the name. It works just as well for a simple toss of a coin as for Roulette, since in the latter game the strategy involves betting only on red or black rather than on individual numbers.

 Imagine you decide to bet such that you win by throwing heads. Your original stake is £1. If you win, the bank pays you at even money (i.e. you get your stake back plus another £1). If you lose, i.e. get tails, your strategy is to play again but bet double. If you win this time you get £4 back but have bet £2+£1=£3 up to that point. If you lose again you bet £8. If you win this time, you get £16 back but have paid in £8+£4+£2+£1=£15 to that point. Clearly, if you carry on the strategy of doubling your previous stake each time you lose, when you do eventually win you will be ahead by £1. It’s a guaranteed winner. Isn’t it?

 The answer is yes, as long as you can guarantee that the number of losses you will suffer is finite. But in tosses of a fair coin there is no limit to the number of tails you can throw before getting a head. To get the correct probability of winning you have to allow for all possibilities. So what is your expected stake to win this £1? The answer is the root of the paradox. The probability that you win straight off is ½ (you need to throw a head), and your stake is £1 in this case so the contribution to the expectation is £0.50. The probability that you win on the second go is ¼ (you must lose the first time and win the second so it is ½ times ½) and your stake this time is £2 so this contributes the same £0.50 to the expectation. A moment’s thought tells you that each throw contributes the same amount, £0.50, to the expected stake. We have to add this up over all possibilities, and there are an infinite number of them. The result of summing them all up is therefore infinite. If you don’t believe this just think about how quickly your stake grows after only a few losses: £1, £2, £4, £8, £16, £32, £64, £128, £256, £512, £1024, etc. After only ten losses you are staking over a thousand pounds just to get your pound back. Sure, you can win £1 this way, but you need to expect to stake an infinite amount to guarantee doing so. It is not a very good way to get rich.
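The bookkeeping above is easy to verify in a few lines. This sketch just tabulates the first eleven rounds of the doubling strategy:

```python
# Tabulate the doubling ("martingale") strategy on a fair coin. After k losses
# the next stake is 2**k, the chance the first win comes exactly then is
# (1/2)**(k+1), and each round therefore contributes the same £0.50 to the
# expected total stake.

for k in range(11):
    stake = 2 ** k                    # stake on the (k+1)-th toss
    prob = 0.5 ** (k + 1)             # probability the first head lands here
    total_staked = 2 ** (k + 1) - 1   # 1 + 2 + ... + 2**k paid in so far
    print(f"{k} losses: stake £{stake}, total staked £{total_staked}, "
          f"contribution £{prob * stake}")

# Every round contributes exactly £0.50, so summing over the unbounded
# number of possible rounds gives an infinite expected stake.
```

The contribution column never shrinks, so the sum over all possible rounds diverges, exactly as argued above.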

The relationship of all this to the Shooting Room is that it shows it is dangerous to pre-suppose a finite value for a number which could in principle be infinite. If the number of souls that could be called into existence is allowed to be infinite, then any individual has no chance at all of being called into existence in any particular generation!

Amusing as they are, the thing that makes me most uncomfortable about these Doomsday arguments is that they attempt to determine a probability of an event without any reference to an underlying mechanism. For me, a valid argument about Doomsday would have to involve a particular physical cause for the extinction of humanity (e.g. asteroid impact, climate change, nuclear war, etc). Given this physical mechanism one should construct a model within which one can estimate probabilities for the model parameters (such as the rate of occurrence of catastrophic asteroid impacts). Only then can one make a valid inference based on relevant observations and their associated likelihoods. Such calculations may indeed lead to alarming or depressing results. I fear that the greatest risk to our future survival is not from asteroid impact or global warming, where the chances can be estimated with reasonable precision, but self-destructive violence carried out by humans themselves. Science has no way of predicting what atrocities people are capable of, so we can’t make any reliable estimate of the probability that we will self-destruct. But the absence of any specific mechanism in the versions of the Doomsday argument I have discussed robs them of any scientific credibility at all.

There are better grounds for worrying about the future than mere numerology.

How Loud was the Big Bang?

Posted in The Universe and Stuff with tags , , , , , , on April 26, 2009 by telescoper

The other day I was giving a talk about cosmology at Cardiff University’s Open Day for prospective students. I was talking, as I usually do on such occasions, about the cosmic microwave background, what we have learnt from it so far and what we hope to find out from it from future experiments, assuming they’re not all cancelled.

Quite a few members of staff listened to the talk too and, afterwards, some of them expressed surprise at what I’d been saying, so I thought it would be fun to try to explain it on here in case anyone else finds it interesting.

As you probably know the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

The above image shows the variations in temperature of the cosmic microwave background as charted by the Wilkinson Microwave Anisotropy Probe about five years ago. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.
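The quoted fraction is simple arithmetic, sketched here as a sanity check (the variable names are mine):

```python
# Quick check of the quoted fractional variation: rms fluctuations of about
# 0.08 mK on a mean CMB temperature of 2.73 K.

mean_T = 2.73       # K, mean CMB temperature
rms_dT = 0.08e-3    # K, rms temperature fluctuation
fraction = rms_dT / mean_T
print(fraction)  # about 3e-5, i.e. a few parts in a hundred thousand
```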

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound (L) logarithmically in terms of the rms pressure level of the sound wave P_rms relative to some reference pressure level P_ref

L = 20 log10[P_rms/P_ref]

(the factor 20 appears because the energy carried by the wave goes as the square of its amplitude; in terms of energy the factor would be 10).

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10^-10 times the ambient atmospheric air pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order and these consequently have L=0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

P_ref ~ 2×10^-10 P_amb

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, and the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes, so it all gets a bit messy if you want to do it exactly, but it’s quite easy to get a rough estimate. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the average temperature variations are compared to the average CMB temperature, i.e.

P_rms ~ a few ×10^-5 P_amb

If we do this, the ambient pressure cancels out in the ratio inside the logarithm, which turns out to be a few times 10^5.
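Putting the numbers into the decibel formula is a one-liner. This is a rough sketch, with `level_db` an illustrative helper name and the reference fraction 2×10^-10 taken from the discussion above; the results land close to the "roughly 100 dB" and "about 120 dB" figures quoted below:

```python
import math

# Decibel level of the primordial pressure fluctuations, assuming the
# reference pressure is the same fraction of ambient pressure (2e-10) as
# for audible sound in air.

def level_db(rms_fraction, ref_fraction=2e-10):
    """Sound level in dB for an rms pressure given as a fraction of ambient."""
    return 20 * math.log10(rms_fraction / ref_fraction)

print(round(level_db(1e-5)))  # about 94 dB for one part in a hundred thousand
print(round(level_db(1e-4)))  # about 114 dB for one part in ten thousand
```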

[Figure: audiogram chart of decibel levels, from the threshold of hearing up to the threshold of pain, including the “speech banana” region of human speech]

With our definition of the decibel level we find that waves corresponding to pressure variations of one part in a hundred thousand of the ambient pressure give roughly L=100 dB, while one part in ten thousand gives about L=120 dB. The sound of the Big Bang therefore peaks at levels just over 110 dB. As you can see in the Figure above, this is close to the threshold of pain, but it’s perhaps not as loud as you might have guessed in response to the initial question. Many rock concerts are actually louder than the Big Bang, at least near the speakers!

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a factor of about 10^10 in the ratio inside the logarithm and is pretty much the limit at which sound waves can propagate without distortion. These would have L≈190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of  a “Roar” than a “Bang” because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

Leonid’s Shower

Posted in The Universe and Stuff with tags , , , , on April 18, 2009 by telescoper

Yesterday (17th April) was the last day of our Easter vacation – back to the grind on Monday – and it was also the occasion of a special meeting to mark the retirement of Professor Leonid Petrovich Grishchuk.

Leonid has been a Distinguished Research Professor here in Cardiff since 1995. You can read more of his scientific biography and wider achievements here, but it should suffice to say that he is a pioneer of many aspects of relativistic cosmology and particularly primordial gravitational waves. He’s also a larger-than-life character who is known with great affection around the world.

Among other things, he’s a big fan of football. He still plays, as a matter of fact, although he generally spends more time ordering his team-mates about than actually running around himself. One of his retirement presents was a Cardiff City football shirt with his name on the back.

My first experience of Leonid was many years ago at a scientific meeting at which I attempted to give a talk. Leonid was in the audience and he interrupted me,  rather aggressively. I didn’t really understand his question so he had another go at me in the questions afterwards. I don’t mind admitting that I was quite upset with his behaviour. I think a large fraction of working cosmologists have probably been Grischchucked at one time or another.

Later on, though, people from the meeting were congregating at a bar when he arrived and headed for me. I didn’t really want to talk to him as I felt he had been quite rude. However, there wasn’t really any way of escaping so I ended up talking to him over a beer. We finally resolved the question he had been trying to ask me and his demeanour changed completely. We spent the rest of the evening having dinner and talking about all sorts of things and have been friends ever since.

Over the years I’ve learned that this is very much a tradition amongst Russian scientists of the older school. They can seem very hostile – even brutal – when discussing science, but that was the way things were done in the environment where they learned their trade.  In many cases the rather severe exterior masks a kindly and generous nature, as it certainly does with Leonid.

I also remember a spell in the States as a visitor during which I heard two Russian cosmologists screaming at each other in the room next door. I really thought they were about to have a fist fight. A few minutes later, though, they both emerged, smiling as if nothing had happened…

Appropriately enough Leonid’s bash was held immediately after BritGrav 9, a meeting dedicated to bringing together the gravitational research community of the UK and beyond, and to provide a forum for the exchange of ideas. It aimed to cover all aspects of gravitational physics, both theoretical and experimental, including cosmology, mathematical general relativity, quantum gravity, gravitational astrophysics, gravitational wave data analysis, and instrumentation. I chaired a session during the meeting and found Leonid in characteristic form as a member of the audience, never shy with questions or comments, and quite difficult to keep under control.

I enjoyed the meeting because priority was given to students when allocating speaking slots. I think too many conferences have the same senior scientists giving  the same talk over and over again. Relativists are also quite different to cosmologists in the level of mathematical rigour to which they aspire.  You can bullshit at a cosmology conference, but wouldn’t get away with it in front of a GR audience.

On the evening of 16th April we had a public lecture in Cardiff by Kip Thorne on The Warped Side of the Universe: from the Big Bang to Black Holes and Gravitational Waves and Kip also gave a talk as part of the subsequent meeting on Friday in Leonid’s honour.


Kip and Leonid are shown together a few years ago in the photograph to the left here. The rest of the LPGFest meeting was interesting and eclectic, with talks from mathematical relativists as well as scientists in diverse fields who had come over from Russia specially to honour Leonid. We later adjourned to a “Welsh Banquet” at the 15th Century Undercroft of Cardiff Castle for dinner accompanied by something described as “entertainment” laid on by the hosts. That part was quite excruciating: like Butlins only not as classy. Heaven knows what our distinguished foreign visitors made of it, although Leonid seemed to think it was great fun, and that’s what matters.

Once the dinner was over it was time for Leonid to be showered with gifts from around the world and, by way of a finale, he was serenaded with a version of From Russia With Love by Bernie and the Gravitones. Now at last I understand what the phrase “extraordinary rendition” means.

Perception, Piero and Pollock

Posted in Art, The Universe and Stuff with tags , , , , , on April 15, 2009 by telescoper

For some unknown reason I’ve just received an invitation to a private view at a small art gallery that’s about ten minutes’ walk from my house. Cocktails included. I shall definitely go and will blog about it next week. I’m looking forward to it already.

This invitation put me in an artistic frame of mind so, to follow up my post on randomness (and the corresponding parallel version on cosmic variance), I thought I’d develop some thoughts about the nature of perception and the perception of nature.

This famous painting is The Flagellation of Christ, by Piero della Francesca. I actually saw it many years ago on one of my many trips to Italy; it’s in an art gallery in Urbino. The first thing that strikes you when you see it is actually that the painting is surprisingly small (about 60cm by 80cm). However, that superficial reaction aside, the painting draws you into it in a way which few other works of art can. The composition is complicated and mathematically precise, but the use of linear perspective is sufficiently straightforward that your eye can quickly understand the geometry of the space depicted and locate the figures and actions within it. The Christ figure is clearly in the room to the left rear and the scene is then easily recognized as part of the story leading up to the crucifixion.

That’s what your eye always seems to do first when presented with a figurative representation: sort out what’s going on and fill in any details it can from memory and other knowledge.

But once you have made sense of the overall form, your brain immediately bombards you with questions. Who are the three characters in the right foreground? Why aren’t they paying attention to what’s going on indoors? Who is the figure with his back to us? Why is the principal subject so far in the background? Why does everyone look so detached? Why is the light coming from two different directions (from the left for the three men in the foreground but from the right for those in the interior)? Why is it all staged in such a peculiar way? And so on.

These unresolved questions lead you to question whether this is the straightforward depiction first sight led you to think it was. It’s clearly much more than that. Deeply symbolic, even cryptic, its effect on the viewer is eerie and disconcerting. It has a dream-like quality. The individual elements of the painting add up to something, but the full meaning remains elusive. You feel there must be something you’re missing, but can’t find it.

This is such an enigmatic picture that it has sparked some extremely controversial interpretations, some of which are described in an article in the scientific journal Nature. I’m not going to pretend to know enough to comment on the theories, except to say that some of them at least must be wrong. They are, however, natural consequences of our brain’s need to impose order on what it sees. The greatest artists know this, of course. Although it sometimes seems like they might be playing tricks on us just for fun, part of what makes art great is the way it gets inside the process of perception.

Here’s another example from quite a different artist.

This one is called Lavender Mist. It’s one of the “action paintings” made by the influential American artist Jackson Pollock. This, and many of the other paintings of its type, also get inside your head in a disconcerting way, but it’s quite a different effect from that achieved by Piero della Francesca.

This is an abstract painting, but that doesn’t stop your eyes seeking within it some sort of point of reference to make geometrical sense of it. There’s no perspective to draw you into it, so you look for clues to the depth in the layers of paint. Standing in front of one of these very large works – I find they don’t work at all in reduced form like on the screen in front of you now – you find your eyes constantly shifting around, following lines here and there, trying to find recognizable shapes and to understand what is there in terms of other things you have experienced either in the painting itself or elsewhere. Any order you can find, however, soon becomes lost. Small-scale patterns dissolve away into a sea of apparent confusion. Your brain tries harder, but is doomed. One of the biggest problems is that your eyes keep focussing and unfocussing to look for depth and structure. It’s almost impossible to stop yourself doing it. You end up dizzy.

I don’t know how Pollock came to understand exactly how to make his compositions maximally disorienting, but he seems to have done so. Perhaps he had a deep instinctive understanding of how the eye copes with the interaction of structures on different physical scales. I find you can see this to some extent even in the small version of the picture on this page. Deliberately blurring your vision makes different elements stand out and then retreat, particularly the large darkish streak that lies to the left of centre at a slight angle to the vertical.

This artist has also attracted the interest of mathematicians and physicists because his work seems to display some of the characteristic properties of fractal sets. I remember going to a very interesting talk a few years ago by Richard Taylor of the University of Oregon, who claimed that fractal dimensions could be used to authenticate (or otherwise) genuine works by Pollock, as each artist seems to have his own unique signature.
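The standard way to put a number on this sort of “signature” is the box-counting dimension: cover the image with boxes of side s, count how many boxes N(s) contain part of the pattern, and read off the dimension as the slope of log N against log (1/s). I don’t know the details of Taylor’s actual method, but here is a minimal sketch of box counting in Python, tested on a set whose dimension is known exactly (the Sierpinski triangle, dimension log 3 / log 2 ≈ 1.585) rather than on a Pollock:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set.

    For each box size s, count the number N(s) of occupied grid cells;
    the dimension is the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in scales:
        # Assign each point to a grid cell of side s and count distinct cells
        cells = np.unique(np.floor(points / s), axis=0)
        counts.append(len(cells))
    # Least-squares fit of log N(s) against log(1/s); the slope is D
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# Generate the Sierpinski triangle via the "chaos game":
# repeatedly jump halfway towards a randomly chosen vertex.
rng = np.random.default_rng(0)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
pts = []
for _ in range(100_000):
    p = (p + vertices[rng.integers(3)]) / 2
    pts.append(p.copy())
points = np.array(pts[100:])  # discard the initial transient

D = box_counting_dimension(points, scales=[0.5, 0.25, 0.125, 0.0625, 0.03125])
print(f"estimated dimension: {D:.2f}")  # should come out near 1.585
```

For a painting you would binarize a photograph of the canvas (per pigment layer, in Taylor’s analysis as I recall it) and feed the occupied pixel coordinates in as the point set; the range of scales and the thresholding are where all the practical difficulty lies.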

I suppose what I’m trying to suggest is that there’s a deeper connection than you might think between the appreciation of art and the quest for scientific understanding.