Archive for Cosmology

I want it painted … beige?

Posted in The Universe and Stuff on November 4, 2009 by telescoper

I was quite pleased when I saw that Pass Notes No 2,677 in today’s Guardian was about “the universe”. Like the other pieces in this series, it looks at the subject matter from a deliberately bizarre angle, focussing on the fact that it appears to be coloured beige, or at least that if you blend the light from all the stars we can see in the right proportions, that’s the colour you would get.

Actually the work discussed in this item was done quite a long time ago; it was featured in a New Scientist article in 2002. One of the authors, Karl Glazebrook, had previously claimed that the colour produced by all the stars in all the galaxies that could be seen was in fact something like turquoise. For some reason, this trivial bit of science fluff captured the (obviously limited) imagination of journalists around the world. However, it turned out to have been wrong and a grave announcement was made pointing out that the Universe was actually more like beige. This story gave a few people their 15 minutes of fame, but I think the episode made cosmologists as a whole look very silly.

I had hoped this would be forgotten, but the Guardian decided to revive memories of the affair today, with obviously humorous intent. They also called Glazebrook an “astrologist”, although that appears to have been a mistake rather than a joke as it has now been changed to “astrophysicist”.

Anyway, this important observation requires a theoretical explanation, and I now want to step into the limelight – or rather the beigelight – to offer a radical insight into the vexed issue of cosmological chromaticity.
My hypothesis has its inspiration in TV shows like House Doctor in which homeowners wishing to impress prospective purchasers are always advised to paint everything beige or magnolia. Since the Divine Creator appears to have decorated the Universe according to the same prescription, the obvious inference is that the cosmos is about to be put on the market. He might have had the courtesy to tell the sitting tenants.

Come to think of it, Glazebrook missed a trick here. We astrophysicists are always being castigated for not doing anything that leads to wealth creation. What he should have done was to produce a paint with the same colour as the Universe. Glazebrook Beige has a nice ring to it.

The Edge of Darkness

Posted in The Universe and Stuff on October 29, 2009 by telescoper

I just picked up an item from the BBC Website that refers to news announced in this week’s edition of Nature of the discovery of a gamma-ray burst detected by NASA’s Swift satellite.  The burst itself was detected in April this year and I had a sneak preview that something exciting was going to be announced earlier this month at the Royal Astronomical Society meeting on October 9th. However, today’s press releases still managed to catch me on the hop owing to the fact that a rather different story had distracted my attention…

In fact, detections of gamma-ray bursts are not all that rare. Swift observes one every few days on average. Once such a source is found through its gamma-ray emission, a signal is sent to astronomers around the world who then work like crazy to detect an optical counterpart. If and when they find one, they try to measure the spectrum of light emitted in order to determine the source’s redshift. This is very difficult for the distant ones, and is not  always successful.

However, what happened in this case – called GRB 090423 – was that not one but two independent teams obtained optical spectra of the object in which the gamma-ray burst must have happened. What each team found was that its spectrum showed a sharp cut-off at wavelengths shorter than a given limiting value.

Hydrogen is very effective at absorbing radiation with wavelengths shorter than 91.2 nm (the so-called Lyman limit, which is in the ultraviolet part of the spectrum), and all galaxies contain large amounts of hydrogen; hence galaxies are virtually dark at wavelengths shorter than 91.2 nm in their rest-frame. The position of the break in an observed frame will be at a different wavelength owing to the effect of the cosmological redshift.

The Lyman break for the host of  GRB 090423 appears not in the ultraviolet but in the infrared, indicating a very large redshift. In fact, it’s a truly spectacular  8.2.
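To see just how spectacular, here’s a back-of-the-envelope sketch in Python (purely illustrative arithmetic, using the numbers quoted above): the observed wavelength of any spectral feature is simply the rest-frame wavelength stretched by a factor of (1+z).

```python
# Observed wavelength = rest-frame wavelength * (1 + z)
lyman_limit_nm = 91.2   # rest-frame Lyman limit, in nanometres
z = 8.2                 # redshift inferred for GRB 090423

observed_nm = lyman_limit_nm * (1 + z)
print(f"Observed Lyman break: {observed_nm:.0f} nm")  # ~839 nm, i.e. in the near-infrared
```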

Together with the direct observations of galaxies at high redshifts I blogged about a month or so ago, this discovery helps push back the frontiers of our knowledge of the Universe not just in space but also in time. A quick calculation reveals that in the standard cosmological model, light from a source at redshift 8.2 has taken about 13.1 billion years to reach us. The gamma-ray burst therefore exploded about 600 million years after the Big Bang.
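If you want to reproduce the quick calculation, here’s a minimal sketch using the cosmology tools in astropy – assuming, for illustration, a flat ΛCDM model with H0 = 70 km/s/Mpc and a matter density of 30% of the critical value; the precise figures depend on the parameters you plug in.

```python
from astropy.cosmology import FlatLambdaCDM

# Assumed parameter values, roughly those of the standard cosmological model
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

z = 8.2
print(cosmo.lookback_time(z))  # light travel time: about 13 billion years
print(cosmo.age(z))            # age of the Universe at emission: about 0.6 billion years
```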

Another interesting thing about this source is its duration. The optical afterglow of a gamma-ray burst  decays with time. Gamma-ray bursts are usually classified as either short or long, depending on the decay time with the dividing line between the two classes being around 2 seconds. The optical afterglow of GRB 090423 lasted about ten seconds. But that doesn’t make it a long burst. We actually see the afterglow stretched out in time by the same redshift factor as an individual photon’s wavelength. So in the rest frame of the source the optical glow was only a bit over a second in duration, i.e. it was a short burst.
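The arithmetic of that last step, spelled out:

```python
observed_duration_s = 10.0   # rough afterglow duration quoted above
z = 8.2

rest_frame_duration_s = observed_duration_s / (1 + z)
print(f"{rest_frame_duration_s:.1f} s")  # ~1.1 s, well under the ~2 s dividing line
```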

Long gamma-ray bursts are thought to be associated with core-collapse supernovae, which arise from the self-destruction of very massive stars with very short lifetimes. The fact that such things die young means that they are only found where star formation has happened very recently. One might therefore expect the earliest gamma-ray bursts to be of this type.

I don’t think anyone is really sure what the shorter ones are, but they seem to happen in regions without active star formation in which the stellar populations are quite old, such as in elliptical galaxies. The fact that the most distant GRB yet discovered happens to be a short burst is very interesting. How can there be an old stellar population at a time when the Universe itself was so young?

If the Big Bang theory is correct, astronomers should eventually be able to reach back to a time when the Universe was so young that no stars had yet had time to form. There would be no sources of light to detect, so we would have reached the edge of darkness. We’re not there yet, but we’re getting closer.

Ergodic Means…

Posted in The Universe and Stuff on October 19, 2009 by telescoper

The topic of this post is something I’ve been wondering about for quite a while. This afternoon I had half an hour spare after a quick lunch so I thought I’d look it up and see what I could find.

The word ergodic is one you will come across very frequently in the literature of statistical physics, and in cosmology it also appears in discussions of the analysis of the large-scale structure of the Universe. I’ve long been puzzled as to where it comes from and what it actually means. Turning to the excellent Oxford English Dictionary Online, I found the answer to the first of these questions. Well, sort of. Under etymology we have

ad. G. ergoden (L. Boltzmann 1887, in Jrnl. f. d. reine und angewandte Math. C. 208), f. Gr. ἔργον work + ὁδός way.

I say “sort of” because it does attribute the origin of the word to Ludwig Boltzmann, but the Greek roots (ἔργον and ὁδός) appear to suggest it means “workway” or something like that. I don’t think I follow an ergodic path on my way to work, so it remains a little mysterious.

The actual definitions of ergodic given by the OED are

Of a trajectory in a confined portion of space: having the property that in the limit all points of the space will be included in the trajectory with equal frequency. Of a stochastic process: having the property that the probability of any state can be estimated from a single sufficiently extensive realization, independently of initial conditions; statistically stationary.

As I had expected, it has two meanings which are related, but which apply in different contexts. The first is to do with paths or orbits, although in physics this is usually taken to mean trajectories in phase space (including both positions and velocities) rather than just three-dimensional position space. However, I don’t think the OED has got it right in saying that the system visits all positions with equal frequency. I think an ergodic path is one that must visit all positions within a given volume of phase space rather than being confined to a lower-dimensional piece of that space. For example, the path of a planet under the inverse-square law of gravity around the Sun is confined to a one-dimensional ellipse. If the force law is modified by external perturbations then the path need not be as regular as this, in extreme cases wandering around in such a way that it never joins back on itself but eventually visits all accessible locations. As far as my understanding goes, however, it doesn’t have to visit them all with equal frequency. The ergodic property of orbits is intimately associated with the presence of chaotic dynamical behaviour.

The other definition relates to stochastic processes, i.e. processes involving some sort of random component. These could either consist of a discrete collection of random variables {X1, …, Xn} (which may or may not be correlated with each other) or a continuously fluctuating function of some parameter such as time t, i.e. X(t), or spatial position (or perhaps both).

Stochastic processes are quite complicated measure-valued mathematical entities because they are specified by probability distributions. What the ergodic hypothesis means in the second sense is that measurements extracted from a single realization of such a process have a definite relationship to analogous quantities defined by the probability distribution.

I always think of a stochastic process as being like a kind of algorithm (whose workings we don’t know). Put it on a computer, press “go” and it spits out a sequence of numbers. The ergodic hypothesis means that by examining a sufficiently long run of the output we could learn something about the properties of the algorithm.

An alternative way of thinking about this for those of you of a frequentist disposition is that the probability average is taken over some sort of statistical ensemble of possible realizations produced by the algorithm, and this must match the appropriate long-term average taken over one realization.

This is actually quite a deep concept and it can apply (or not) in various degrees.  A simple example is to do with properties of the mean value. Given a single run of the program over some long time T we can compute the sample average

\bar{X}_T \equiv \frac{1}{T} \int_0^T x(t)\, dt

The probability average, on the other hand, is defined over the probability distribution, which we can call p(x):

\langle X \rangle \equiv \int x\, p(x)\, dx

If these two are equal for sufficiently long runs, i.e. as T goes to infinity, then the process is said to be ergodic in the mean. A process could, however, be ergodic in the mean but not ergodic with respect to some other property of the distribution, such as the variance. Strict ergodicity would require that the entire frequency distribution defined from a long run should match the probability distribution to some accuracy.
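To make this concrete, here is a minimal numerical sketch (an illustration, not a proof, and the choice of process is mine): a stationary Gaussian autoregressive process is ergodic in the mean, so the time average over one long run should converge to the average over an ensemble of independent realizations.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(n, phi=0.9, sigma=1.0):
    """One realization of a stationary AR(1) process,
    x[t] = phi * x[t-1] + Gaussian noise, with true mean zero."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))  # start in the stationary state
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

# Time average over a single long run (the sample average defined above)...
time_average = ar1(200_000).mean()

# ...versus an average over an ensemble of independent realizations
ensemble_average = np.mean([ar1(100)[-1] for _ in range(5_000)])

print(time_average, ensemble_average)  # both should be close to the true mean, 0
```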

Now we have a problem with the OED again. According to the defining quotation given above, ergodic can be taken to mean statistically stationary. Actually, that’s not true…

In the one-parameter case, “statistically stationary” means that the probability distribution controlling the process is independent of time, i.e. that p(x,t)=p(x,t+Δt) . It’s fairly straightforward to see that the ergodic property requires that a process X(t) be stationary, but the converse is not the case. Not every stationary process is necessarily ergodic. Ned Wright gives an example here. For a higher-dimensional process, such as a spatially-fluctuating random field the analogous property is statistical homogeneity, rather than stationarity, but otherwise everything carries over.
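A standard toy example of a stationary but non-ergodic process (in the same spirit as the one linked above, though this particular version is my own sketch): draw a random constant once and hold it for ever. The distribution of X(t) is the same at every time, but the time average of any single realization never converges to the ensemble mean.

```python
import numpy as np

rng = np.random.default_rng(1)

def constant_process(n):
    """X(t) = A for all t, with A drawn once from N(0, 1):
    stationary (same distribution at every t) but not ergodic."""
    return np.full(n, rng.normal())

time_average = constant_process(100_000).mean()   # just whatever A happened to be
ensemble_average = np.mean([constant_process(1)[0] for _ in range(100_000)])

print(time_average, ensemble_average)  # e.g. 0.35 versus ~0.0
```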

Ergodic theorems are very tricky to prove in general, but there are well-known results that rigorously establish the ergodic properties of Gaussian processes (which is another reason why theorists like myself like them so much). However, it should be mentioned that even if the ergodic assumption applies, its usefulness depends critically on the rate of convergence. In the time-dependent example I gave above, it’s no good if the averaging period required is much longer than the age of the Universe; in that case ergodicity doesn’t help you make inferences from your sample. Likewise the ergodic hypothesis doesn’t help you analyse your galaxy redshift survey if the averaging scale needed is larger than the depth of the sample.

Moreover, it seems to me that many physicists resort to ergodicity when there are no compelling mathematical grounds for thinking that it is true. In some versions of the multiverse scenario, it is hypothesized that the fundamental constants of nature describing our low-energy world turn out “randomly” to take on different values in different domains owing to some sort of spontaneous symmetry breaking, perhaps associated with a phase transition generating cosmic inflation. We happen to live in a patch within this structure where the constants are such as to make human life possible. There’s no need to assert that the laws of physics have been designed to make us possible if this is the case, as most of the multiverse doesn’t have the fine tuning that appears to be required to allow our existence.

As an application of the Weak Anthropic Principle, I have no objection to this argument. However, behind this idea lies the assertion that all possible vacuum configurations (and all related physical constants) do arise ergodically. I’ve never seen anything resembling a proof that this is the case. Moreover, there are many examples of physical phase transitions for which the ergodic hypothesis is known not to apply.  If there is a rigorous proof that this works out, I’d love to hear about it. In the meantime, I remain sceptical.

Greatness in Little

Posted in Poetry, The Universe and Stuff on October 15, 2009 by telescoper

The BBC Website yesterday mentioned that according to the British Astronomer Royal, Lord Martin Rees, celestial bodies are less complicated than the bodies of insects – let alone those of human beings – and cosmology is an easier science than the study of a balanced diet.

As I was tucking into my carefully balanced meal of fish and chips last night, the first part of the quotation suddenly reminded me of the following poem Greatness in Little by Richard Leigh (1649-1728), a relatively obscure poet of the seventeenth century who managed to excel himself in this particular poem of 1675 in which he compares the intricate workings of insects with the grandest achievements of human explorers.

In spotted globes, that have resembled all
Which we or beasts possess to one great ball
Dim little specks for thronging cities stand,
Lines wind for rivers, blots bound sea and land.
Small are those spots which in the moon we view,
Yet glasses these like shades of mountains shew;
As what an even brightness does retain,
A glorious level seems, and shining plain.
Those crowds of stars in the populous sky,
Which art beholds as twinkling worlds on high,
Appear to naked, unassisted sight
No more than sparks or slender points of light.
The sun, a flaming universe alone,
Bigger than that about which his fires run;
Enlightening ours, his globe but part does gild,
Part by his lustre or Earth’s shades concealed;
His glory dwindled so, as what we spy
Scarce fills the narrow circle of the eye.
What new Americas of light have been
Yet undiscovered there, or yet unseen,
Art’s near approaches awfully forbid,
As in the majesty of nature hid.
Nature, who with like state, and equal pride,
Her great works does in height and distance hide,
And shuts up her minuter bodies all
In curious frames, imperceptibly small.
Thus still incognito, she seeks recess
In greatness half-seen, or dim littleness.
Ah, happy littleness! that art thus blest,
That greatest glories aspire to seem least.
Even those installed in a higher sphere,
The higher they are raised, the less appear,
And in their exaltation emulate
Thy humble grandeur and thy modest state.
Nor is this all thy praise, though not the least,
That greatness is thy counterfeit at best.
Those swelling honours, which in that we prize,
Thou dost contain in thy more thrifty size;
And hast that pomp, magnificence does boast,
Though in thy stature and dimensions lost.
Those rugged little bodies whose parts rise
And fall in various inequalities,
Hills in the risings of their surface show,
As valleys in their hollow pits below.
Pompous these lesser things, but yet less rude
Than uncompact and looser magnitude.
What Skill is in the frame of Insects shown?
How fine the Threds, in their small Textures spun?
How close those Instruments and Engines knit,
Which Motion, and their slender Sense transmit?
Like living Watches, each of these conceals
A thousand Springs of Life, and moving wheels.
Each ligature a Lab’rynth seems, each part
All wonder is, all Workmanship and Art.
Rather let me this little Greatness know,
Then all the Mighty Acts of Great Ones do.
These Engines understand, rather than prove
An Archimedes, and the Earth remove.
These Atom-Worlds found out, I would despise
Colombus, and his vast Discoveries.

Another Day at the arXiv…

Posted in Cosmic Anomalies, The Universe and Stuff on October 8, 2009 by telescoper

Every now and again I remember that this is supposed to be some sort of science blog. This happened again this morning after three hours of meetings with my undergraduate project students. Dealing with questions about simulating the cosmic microwave background, measuring the bending of light during an eclipse, and how to do QCD calculations on a lattice reminded me that I’m supposed to know something about stuff like that.

Anyway, looking for something to post about while I eat my lunchtime sandwich, I turned to the estimable arXiv, heading for the section marked astro-ph and the new submissions category, for inspiration.

I’m one of the old-fashioned types who still gets an email every day of the new submissions. In the old days there were only a few, but today’s new submissions were 77 in number, only about half-a-dozen of which seemed directly relevant to things I’m interested in. It’s always a bit of a struggle keeping up and I often miss important things. There’s no way I can read as widely around my own field as I would like to, or as I used to in the past, but that’s the information revolution for you…

Anyway, the thing that leapt out at me first was an interesting paper by Dikarev et al (accepted for publication in the Astrophysical Journal) that speculates about the possibility that dust grains in the solar system might be producing emission that messes up measurements of the cosmic microwave background, thus possibly causing the curious cosmic anomalies seen by WMAP that I’ve blogged about on more than one previous occasion.

Their abstract reads:

Analyses of the cosmic microwave background (CMB) radiation maps made by the Wilkinson Microwave Anisotropy Probe (WMAP) have revealed anomalies not predicted by the standard inflationary cosmology. In particular, the power of the quadrupole moment of the CMB fluctuations is remarkably low, and the quadrupole and octopole moments are aligned mutually and with the geometry of the Solar system. It has been suggested in the literature that microwave sky pollution by an unidentified dust cloud in the vicinity of the Solar system may be the cause for these anomalies. In this paper, we simulate the thermal emission by clouds of spherical homogeneous particles of several materials. Spectral constraints from the WMAP multi-wavelength data and earlier infrared observations on the hypothetical dust cloud are used to determine the dust cloud’s physical characteristics. In order for its emissivity to demonstrate a flat, CMB-like wavelength dependence over the WMAP wavelengths (3 through 14 mm), and to be invisible in the infrared light, its particles must be macroscopic. Silicate spheres from several millimetres in size and carbonaceous particles an order of magnitude smaller will suffice. According to our estimates of the abundance of such particles in the Zodiacal cloud and trans-neptunian belt, yielding the optical depths of the order of 1E-7 for each cloud, the Solar-system dust can well contribute 10 microKelvin (within an order of magnitude) in the microwaves. This is not only intriguingly close to the magnitude of the anomalies (about 30 microKelvin), but also alarmingly above the presently believed magnitude of systematic biases of the WMAP results (below 5 microKelvin) and, to an even greater degree, of the future missions with higher sensitivities, e.g. PLANCK.

I haven’t read the paper in detail yet, but will definitely do so. In the meantime I’d be interested to hear the reaction to this claim from dusty experts!

Of course we know there is dust in the solar system, and were reminded of this in spectacular style earlier this week by the discovery (by the Spitzer telescope) of an enormous new ring around Saturn.

That tenuous link gives me an excuse to include a gratuitous pretty picture:

It may look impressive, but I hope things like that are not messing up the CMB. Has anyone got a vacuum cleaner?

Nobel Betting

Posted in Science Politics, The Universe and Stuff on October 5, 2009 by telescoper

I’m reminded that the 2009 Nobel Prize for Physics will be announced tomorrow, on Tuesday 6th October. A recent article in the Times Higher suggested that British physicists might be in line for glory (based on a study of citation statistics). However, the Table they produced showed that their predictions haven’t really got a good track record so it might be unwise to bet too much on the outcome! This year’s predictions are at the top, with previous years underneath; the only successful prediction is highlighted in blue:

[Table: Times Higher predictions for the Nobel Prize in Physics, this year and previous years]

The problem, I think, is that it’s difficult to win the Nobel Prize for theoretical work unless it has been confirmed by a definitive experiment, so, much as I admire (Lord) Martin Rees – and would love to see a Nobel Prize going to astrophysics generally – I think I’d have to mark him down as an outsider. It would be absurd to give the prize to string theory, of course, as that makes no contact whatsoever with experiment or observation.

I think it would be particularly great if Sir Michael Berry won a share of the physics prize, but we’ll have to wait and see. The other British runner in the paddock is Sir John Pendry. While it would be excellent for British science to have a Nobel prize, what I think is best about the whole show is that it is one of the rare occasions that puts a spotlight on basic science, so it’s good for all of us (even us non-runners).

I think the panel made a bit of a bizarre decision last year and I hope there won’t be another steward’s enquiry this year to distract us from the chance to celebrate the achievements of the winner(s).

I’d be interested to hear any thoughts on other candidates through the comments box. No doubt there’ll be some reactions after the announcement too!

Index Rerum

Posted in Biographical, Science Politics on September 29, 2009 by telescoper

Following on from yesterday’s post about the forthcoming Research Excellence Framework that plans to use citations as a measure of research quality, I thought I would have a little rant on the subject of bibliometrics.

Recently one particular measure of scientific productivity has established itself as the norm for assessing job applications, grant proposals and for other related tasks. This is called the h-index, named after the physicist Jorge Hirsch, who introduced it in a paper in 2005. This is quite a simple index to define and to calculate (given an appropriately accurate bibliographic database). The definition  is that an individual has an h-index of  h if that individual has published h papers with at least h citations. If the author has published N papers in total then the other N-h must have no more than h citations. This is a bit like the Eddington number.  A citation, as if you didn’t know,  is basically an occurrence of that paper in the reference list of another paper.

To calculate it is easy. You just go to the appropriate database – such as the NASA ADS system – search for all papers with a given author and request the results to be returned sorted by decreasing citation count. You scan down the list until the number of citations falls below the position in the ordered list.
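In code, that scan is just a short loop. Here’s a minimal sketch (the citation counts are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([100, 50, 8, 5, 4, 3, 1]))  # -> 4
```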

Incidentally, one of the issues here is whether to count only refereed journal publications or all articles (including books and conference proceedings). The argument in favour of the former is that the latter are often of lower quality. I think that is an illogical argument, because good papers will get cited wherever they are published. Related to this is the fact that some people would like to count “high-impact” journals only, but if you’ve chosen citations as your measure of quality the choice of journal is irrelevant. Indeed a paper that is highly cited despite being in a lesser journal should if anything be given a higher weight than one with the same number of citations published in, e.g., Nature. Of course it’s just a matter of time before the hideously overpriced academic journals run by the publishing mafia go out of business anyway, so before long this question will simply vanish.

The h-index has some advantages over more obvious measures, such as the average number of citations, as it is not skewed by one or two publications with enormous numbers of hits. It also, at least to some extent, represents both quantity and quality in a single number. For whatever reasons in recent times h has undoubtedly become common currency (at least in physics and astronomy) as being a quick and easy measure of a person’s scientific oomph.

Incidentally, it has been claimed that this index can be fitted well by a formula h ~ sqrt(T)/2 where T is the total number of citations. This works in my case. If it works for everyone, doesn’t  it mean that h is actually of no more use than T in assessing research productivity?

Typical values of h vary enormously from field to field – even within each discipline – and vary a lot between observational and theoretical researchers. In extragalactic astronomy, for example, you might expect a good established observer to have an h-index around 40 or more whereas some other branches of astronomy have much lower citation rates. The top dogs in the field of cosmology are all theorists, though. People like Carlos Frenk, George Efstathiou, and Martin Rees all have very high h-indices.  At the extreme end of the scale, string theorist Ed Witten is in the citation stratosphere with an h-index well over a hundred.

I was tempted to put up examples of individuals’ h-numbers but decided instead just to illustrate things with my own. That way the only person to get embarrassed is me. My own index value is modest – to say the least – at a meagre 27 (according to ADS). Does that mean Ed Witten is four times the scientist I am? Of course not. He’s much better than that. So how exactly should one use h as an actual metric, for allocating funds or prioritising job applications, and what are the likely pitfalls? I don’t know the answer to the first one, but I have some suggestions for other metrics that avoid some of its shortcomings.

One of these addresses an obvious deficiency of h. Suppose we have an individual who writes one brilliant paper that gets 100 citations and another who is one author amongst 100 on another paper that has the same impact. In terms of total citations, both papers register the same value, but there’s no question in my mind that the first case deserves more credit. One remedy is to normalise the citations of each paper by the number of authors, essentially sharing citations equally between all those that contributed to the paper. This is quite easy to do on ADS also, and in my case it gives  a value of 19. Trying the same thing on various other astronomers, astrophysicists and cosmologists reveals that the h index of an observer is likely to reduce by a factor of 3-4 when calculated in this way – whereas theorists (who generally work in smaller groups) suffer less. I imagine Ed Witten’s index doesn’t change much when calculated on a normalized basis, although I haven’t calculated it myself.
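As a sketch of what I mean (the author counts and citation numbers here are made up), the normalization is a one-line change to the calculation above:

```python
def normalized_h_index(papers):
    """papers: (citations, n_authors) pairs.  Share each paper's
    citations equally among its authors, then compute h as before."""
    shared = sorted((c / n for c, n in papers), reverse=True)
    h = 0
    for rank, c in enumerate(shared, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# A solo paper keeps all its credit; a 100-author paper shares it out
print(normalized_h_index([(100, 1), (100, 100), (30, 3), (12, 2)]))  # -> 3
```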

Observers  complain that this normalized measure is unfair to them, but I’ve yet to hear a reasoned argument as to why this is so. I don’t see why 100 people should get the same credit for a single piece of work:  it seems  like obvious overcounting to me.

Another possibility – if you want to measure leadership too – is to calculate the h index using only those papers on which the individual concerned is the first author. This is  a bit more of a fiddle to do but mine comes out as 20 when done in this way.  This is considerably higher than most of my professorial colleagues even though my raw h value is smaller. Using first author papers only is also probably a good way of identifying lurkers: people who add themselves to any paper they can get their hands on but never take the lead. Mentioning no names of  course.  I propose using the ratio of  unnormalized to normalized h-indices as an appropriate lurker detector…

Finally in this list of bibliometrica is the so-called g-index. This is defined in a slightly more complicated way than h: given a set of articles ranked in decreasing order of citation numbers, g is defined to be the largest number such that the top g articles altogether received at least g² citations. This is a bit like h but takes extra account of the average citations of the top papers. My own g-index is about 47. Obviously I like this one because my number looks bigger, but I’m pretty confident others go up even more than mine!
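Again a minimal sketch, following the definition just given (same invented citation list as before):

```python
def g_index(citations):
    """Largest g such that the top g papers together
    received at least g**2 citations."""
    counts = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        running_total += c
        if running_total >= rank * rank:
            g = rank
    return g

print(g_index([100, 50, 8, 5, 4, 3, 1]))  # -> 7 (compare h = 4 for the same list)
```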

Of course you can play with these things to your heart’s content, combining ideas from each definition: the normalized g-factor, for example. The message is, though, that although h definitely contains some information, any attempt to condense such complicated information into a single number is never going to be entirely successful.

Comments, particularly with suggestions of alternative metrics are welcome via the box. Even from lurkers.

The Evidence

Posted in Biographical, The Universe and Stuff on September 25, 2009 by telescoper

Further to my recent post about the evidence for a low-density Universe, I thought I’d embarrass all concerned with this image, taken in Leiden in 1995.

Various shady characters masquerading as “experts” were asked by the audience of graduate students at a summer school to give their favoured values for the cosmological parameters (from top to bottom: the Hubble constant, density parameter, cosmological constant, curvature parameter and age of the Universe).

From left to right we have Alain Blanchard (AB), Bernard Jones (BJ, standing), John Peacock (JP), me (yes, with a beard and a pony tail – the shame of it), Vincent Icke (VI), Rien van de Weygaert (RW) and Peter Katgert (PK, standing). You can see on the blackboard that the only one to get anywhere close to correctly predicting the parameters of what would become the standard cosmological model was, in fact, Rien van de Weygaert.

Cranks Anonymous

Posted in Biographical, Books, Talks and Reviews, The Universe and Stuff on September 22, 2009 by telescoper

Sean Carroll, blogger-in-chief at Cosmic Variance, has ventured abroad from his palatial Californian residence and is currently slumming it in a little town called Oxford where he is attending a small conference in celebration of the 70th birthday of George Ellis. In fact he’s been posting regular live commentaries on the proceedings which I’ve been following with great interest. It looks an interesting and unusual meeting because it involves both physicists and philosophers and it is based around a series of debates on topics of current interest. See Sean’s posts here, here and here for expert summaries of the three days of the meeting.

Today’s dispatches included an account of George’s own talk, which appears to have involved delivering a polemic against the multiverse, something he has been known to do from time to time. I posted something on it myself, in fact. I don’t think I’m as fundamentally opposed as George to the idea that we might live in a bit of space-time that may belong to some sort of larger collection in which other bits have different properties, but it does bother me how many physicists talk about the multiverse as if it were an established fact. There certainly isn’t any observational evidence that this is true, and the theoretical arguments usually advanced are far from rigorous. The multiverse certainly is a fun thing to think about, I just don’t think it’s really needed.

There is one red herring that regularly floats into arguments about the multiverse, and that concerns testability. Different bits of the multiverse can’t be observed directly by an observer in a particular place, so it is often said that the idea isn’t testable. I don’t think that’s the right way to look at it. If there is a compelling physical theory that can account convincingly for a realised multiverse then that theory really should have other necessary consequences that are testable, otherwise there’s no point. Test the theory in some other way and you test whether the  multiverse emanating from it is sound too.

However, that fairly obvious statement isn’t really the point of this piece. As I was reading Sean’s blog post for today you could have knocked me down with a feather when I saw my name crop up:

Orthodoxy is based on the beliefs held by elites. Consider the story of Peter Coles, who tried to claim back in the 1990’s that the matter density was only 30% of the critical density. He was threatened by a cosmological bigwig, who told him he’d be regarded as a crank if he kept it up. On a related note, we have to admit that even scientists base beliefs on philosophical agendas and rationalize after the fact. That’s often what’s going on when scientists invoke “beauty” as a criterion.

George was actually talking about a paper we co-wrote for Nature in which we went through the different arguments that had been used to estimate the average density of matter in the Universe, tried to weigh up which were the more reliable, and came to the conclusion that the answer was in the range 20 to 40 percent of the critical density. There was a considerable theoretical prejudice at the time, especially from adherents of  inflation, that the density should be very close to the critical value, so we were running against the crowd to some extent. I remember we got quite a lot of press coverage at the time and I was invited to go on Radio 4 to talk about it, so it was an interesting period for me. Working with George was a tremendous experience too.

I won’t name the “bigwig” George referred to, although I will say it was a theorist; it’s more fun for those working in the field to guess for themselves! Opinions among other astronomers and physicists were divided. One prominent observational cosmologist was furious that we had criticized his work (which had yielded a high value of the density). On the other hand, Martin Rees (now “Lord” but then just plain “Sir”) said that he thought we were pushing at an open door and was surprised at the fuss.

Later on, in 1996, we expanded the article into a book in which we covered the ground more deeply but came to the same conclusion as before. The book and the article it was based on are now both very dated because of the huge advances in observational cosmology over the last decade. However, the intervening years have shown that we were right in our assessment: the standard cosmology has a matter density of about 30% of the critical value.

Of course there was one major thing we didn’t anticipate which was the discovery in the late 1990s of dark energy which, to be fair, had been suggested by others more prescient than us as early as 1990. You can’t win ’em all.

So that’s the story of my emergence as a crank, a title to which I’ve tried my utmost to do justice since then. Actually, I would have liked to have had the chance to go to George’s meeting in Oxford, primarily to greet my erstwhile collaborator whom I haven’t seen for ages. But it was invitation-only. I can’t work out whether these days I’m too cranky or not cranky enough to get to go to such things. Looking at the reports of the talks, I rather think it could be the latter.

Now, anyone care to risk the libel laws and guess who Professor BigWig was?

A Well Placed Lecture

Posted in The Universe and Stuff on September 18, 2009 by telescoper

I noticed that the UK government has recently dropped its ban on product placement in television programmes. I wanted to take this opportunity to state Virgin Airlines that I will not be taking this as a Carling cue to introduce subliminal Coca Cola advertising of any Corby Trouser Press form into this blog.

This week I’ve been giving Marks and Spencer lectures every AIG afternoon to groups of 200 sixth form Samsung students on the subject of the Burger King Big Bang. The talks seemed to go down BMW quite well although I had Betfair trouble sometimes cramming all the Sainsbury things I wanted to talk about in the Northern Rock 30 minutes I was allotted. Anyway, I went through the usual stuff about the Carlsberg cosmic microwave background (CMB), even showing the noise on a Sony television screen to explain that a bit of the Classic FM signal came from the edge of the Next Universe.  The CMB played an Emirates important role in the talk as it is the Marlboro smoking gun of the Big Bang and established our Standard Life model of L’Oreal cosmology.

The timing of these lectures was Goodfella’s Pizza excellent because I was able to include Crown Paints references to the Hubble Ultra Deep Kentucky Fried Chicken Field and the Planck First Direct initial results that I’ve blogged about in the past week or so.

Now that’s all over, Thank God It’s Friday and  I’m getting ready to go to the Comet Sale Now On Opera. ..