Archive for October, 2009

The Monster Mash

Posted in Biographical, Music with tags , on October 31, 2009 by telescoper

I explained this time last year how I’m not really a big fan of Halloween and don’t tend to celebrate it. However, I decided to make an exception this year and post the following little video, which seems appropriate for the occasion. It’s made of bits of old horror B-movies, but the music – by Bobby “Boris” Pickett and the Crypt-Kickers – is actually the second single I ever bought, way back in 1973. I wonder if you can guess what the first one was?

The Michael Green Experience

Posted in Biographical, The Universe and Stuff with tags , , , on October 30, 2009 by telescoper

It’s been a couple of weeks since the University of Cambridge announced that the successor to Stephen Hawking as Lucasian Professor  of Mathematics would be Michael Green, who is best known for his work on string theory. Heartiest congratulations to him for reaching a position of such eminence.

I was trying to think of a suitable way of marking the occasion of his election to this prestigious post when I suddenly remembered that we were actually on a TV programme together years ago. The show in question was called Unravelling the Universe and was first broadcast in December 1991 as part of a science documentary series called Equinox.

I eventually found my ancient VHS copy of the broadcast master tape of this show and persuaded Ed and Stephen, two of the excellent elves who work in the School of Physics & Astronomy here at Cardiff University, to transfer it to a digital format and put a bit of it on YouTube for all to see. Many thanks to them for their help.

Other people involved in the programme included Rocky Kolb, Chris Isham and Paul Davies but the short (2-and-a-half minute) clip below features just Michael Green (who basically put the show together) and myself (who was just there to make up the numbers), plus wonderful narration by the late great Peter Jones.

Michael Green hasn’t changed a bit in 18 years. In fact, I saw him last year and am sure he was even wearing the same sweater.

I, on the other hand….Oh dear.

The Edge of Darkness

Posted in The Universe and Stuff with tags , , , on October 29, 2009 by telescoper

I just picked up an item from the BBC Website that refers to news announced in this week’s edition of Nature of the discovery of a gamma-ray burst detected by NASA’s Swift satellite.  The burst itself was detected in April this year and I had a sneak preview that something exciting was going to be announced earlier this month at the Royal Astronomical Society meeting on October 9th. However, today’s press releases still managed to catch me on the hop owing to the fact that a rather different story had distracted my attention…

In fact, detections of gamma-ray bursts are not all that rare. Swift observes one every few days on average. Once such a source is found through its gamma-ray emission, a signal is sent to astronomers around the world who then work like crazy to detect an optical counterpart. If and when they find one, they try to measure the spectrum of light emitted in order to determine the source’s redshift. This is very difficult for the distant ones, and is not  always successful.

However, what happened in this case – called GRB 090423 – was that not one but two independent teams obtained optical spectra of the object in which the gamma-ray burst must have happened. What each team found was that their spectrum showed a sharp cut-off at wavelengths shorter than a given limiting value.

Hydrogen is very effective at absorbing radiation with wavelengths shorter than 91.2 nm (the so-called Lyman limit, which is in the ultraviolet part of the spectrum), and all galaxies contain large amounts of hydrogen; hence galaxies are virtually dark at wavelengths shorter than 91.2 nm in their rest-frame. The position of the break in an observed frame will be at a different wavelength owing to the effect of the cosmological redshift.

The Lyman break for the host of  GRB 090423 appears not in the ultraviolet but in the infrared, indicating a very large redshift. In fact, it’s a truly spectacular  8.2.
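
To put a number on that, the observed position of the break is just the rest-frame Lyman limit stretched by a factor of (1+z):

\lambda_{\rm obs} = (1+z)\,\lambda_{\rm Lyman} \approx 9.2 \times 91.2\,{\rm nm} \approx 839\,{\rm nm},

which puts it in the near-infrared rather than the visible or ultraviolet.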

Together with the direct observations of galaxies at high redshift I blogged about a month or so ago, this discovery helps push back the frontiers of our knowledge of the Universe not just in space but also in time. A quick calculation reveals that, in the standard cosmological model, light from a source at redshift 8.2 has taken about 13.1 billion years to reach us. The gamma-ray burst therefore exploded about 600 million years after the Big Bang.
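
If you want to reproduce that quick calculation yourself, here is a minimal Python sketch that integrates the Friedmann equation for a flat ΛCDM model; the parameter values (H0 = 70 km/s/Mpc, Ωm = 0.27) are round numbers typical of the concordance model rather than the exact ones used above.

from scipy.integrate import quad

H0 = 70.0                      # Hubble constant in km/s/Mpc (an assumed round value)
Om, OL = 0.27, 0.73            # matter and dark-energy density parameters (assumed)
H0_Gyr = H0 / 978.0            # H0 converted to inverse Gyr (1 km/s/Mpc ~ 1/978 per Gyr)

def E(z):
    # dimensionless Hubble rate H(z)/H0 for a flat LCDM model (radiation neglected)
    return (Om * (1.0 + z)**3 + OL) ** 0.5

def elapsed_time(z_lo, z_hi):
    # cosmic time, in Gyr, elapsing between redshift z_hi and redshift z_lo
    integrand = lambda z: 1.0 / ((1.0 + z) * E(z) * H0_Gyr)
    return quad(integrand, z_lo, z_hi)[0]

lookback = elapsed_time(0.0, 8.2)                  # light travel time to z = 8.2, ~13 Gyr
age_at_burst = elapsed_time(8.2, float("inf"))     # age of the Universe at z = 8.2, ~0.6 Gyr

print(f"light travel time from z = 8.2: {lookback:.1f} Gyr")
print(f"the burst went off about {age_at_burst * 1000:.0f} Myr after the Big Bang")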

Another interesting thing about this source is its duration. The optical afterglow of a gamma-ray burst decays with time. Gamma-ray bursts are usually classified as either short or long, depending on the decay time, with the dividing line between the two classes being around 2 seconds. The optical afterglow of GRB 090423 lasted about ten seconds. But that doesn’t make it a long burst. We actually see the afterglow stretched out in time by the same redshift factor as an individual photon’s wavelength. So in the rest frame of the source the optical glow was only a bit over a second in duration, i.e. it was a short burst.
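
In symbols, the same (1+z) factor that stretches wavelengths also stretches observed durations:

\Delta t_{\rm rest} = \frac{\Delta t_{\rm obs}}{1+z} \approx \frac{10\,{\rm s}}{9.2} \approx 1.1\,{\rm s}.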

Long gamma-ray bursts are thought to be associated with core-collapse supernovae which arise from the self-destruction of very massive stars with very short lifetimes. The fact that such things die young means that they are only found where star formation has happened very recently. One might expect the earliest gamma-ray bursts to therefore be of this type.

I don’t think anyone is really sure what the shorter ones really are, but they  seem to happen in regions without active star formation in which the stellar populations are quite old, such as in elliptical galaxies. The fact that the most distant GRB yet discovered happens to be a short burst is very interesting. How can there be an old stellar population at a time when the  Universe itself was so young?

If the Big Bang theory is correct, astronomers  should eventually be able to reach back so far in time that the Universe was so young that no stars had had time to form. There would be no sources of light to detect so we would have reached the edge of darkness. We’re not there yet, but we’re getting closer.

A Dutch Book

Posted in Bad Statistics with tags , , , on October 28, 2009 by telescoper

When I was a research student at Sussex University I lived for a time in Hove, close to the local Greyhound track. I soon discovered that going to the dogs could be both enjoyable and instructive. The card for an evening would usually consist of ten races, each involving six dogs. It didn’t take long for me to realise that it was quite boring to watch the greyhounds unless you had a bet, so I got into the habit of making small investments on each race. In fact, my usual bet would involve trying to predict both first and second place, the kind of combination bet which has longer odds and therefore generally has a better return if you happen to get it right.

[Image: the giant Tote Board at the greyhound track]

The simplest way to bet is through a totalising pool system (called “The Tote”) in which the return on a successful bet  is determined by how much money has been placed on that particular outcome; the higher the amount staked, the lower the return for an individual winner. The Tote accepts very small bets, which suited me because I was an impoverished student in those days. The odds at any particular time are shown on the giant Tote Board you can see in the picture above.
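
For anyone unfamiliar with how a pool of this kind is settled, here is a toy Python sketch; the stakes and the 20% deduction are made-up illustrative numbers, not real Tote figures.

# Money staked on each of six dogs to win (made-up numbers, in pounds)
pool = {1: 120, 2: 80, 3: 300, 4: 60, 5: 200, 6: 40}
takeout = 0.20                      # fraction kept by the operator (illustrative)

total = sum(pool.values())
net = total * (1 - takeout)         # what is left to share among the winning tickets

winner = 3
dividend = net / pool[winner]       # return per £1 staked on the winning dog
print(f"£1 on dog {winner} returns £{dividend:.2f}")
# The more money piled onto a dog, the smaller the dividend if it wins.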

However, every now and again I would place bets with one of the independent trackside bookies who set their own odds. Here the usual bet is for one particular dog to win, rather than on 1st/2nd place combinations. Sometimes these odds were much more generous than those that were showing on the Tote Board so I gave them a go. When bookies offer long odds, however, it’s probably because they know something the punters don’t and I didn’t win very often.

I often watched the bookmakers in action, chalking the odds up, sometimes lengthening them to draw in new bets or sometimes shortening them to discourage bets if they feared heavy losses. It struck me that they have to be very sharp when they change odds in this way because it’s quite easy to make a mistake that might result in a combination bet guaranteeing a win for a customer.

With six possible winners it takes a while to work out if there is such a strategy, but to explain what I mean consider a race with three competitors. The bookie assigns odds as follows: (1) even money; (2) 3/1 against; and (3) 4/1 against. The quoted odds imply probabilities to win of 50% (1 in 2), 25% (1 in 4) and 20% (1 in 5) respectively.

Now suppose you place three different bets: £100 on (1) to win, £50 on (2) and £40 on (3). Your total stake is then £190. If (1) succeeds you win £100 and also get your stake back; you lose the other stakes, but you have turned £190 into £200 so are up £10 overall. If (2) wins you also come out with £200: your £50 stake plus £150 for the bet. Likewise if (3) wins. You win whatever the outcome of the race. It’s not a question of being lucky, just that the odds have been designed inconsistently.
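
Here is a minimal Python sketch of that arithmetic: it converts the quoted fractional odds into implied probabilities (which sum to only 0.95, the tell-tale sign that the book can be beaten) and checks that the £100/£50/£40 staking plan returns £200 whichever dog wins.

# Fractional odds quoted by the bookie: (numerator, denominator) for "n/d against"
odds = {1: (1, 1), 2: (3, 1), 3: (4, 1)}        # evens, 3/1 against, 4/1 against
stakes = {1: 100, 2: 50, 3: 40}                 # the staking plan from the text, in pounds

implied = {k: d / (n + d) for k, (n, d) in odds.items()}
print("implied probabilities:", implied)        # 0.5, 0.25, 0.2 -> sum = 0.95 < 1

total_stake = sum(stakes.values())              # £190
for winner in odds:
    n, d = odds[winner]
    payout = stakes[winner] * (n + d) / d       # returned stake plus winnings
    print(f"if dog {winner} wins: return £{payout:.0f}, profit £{payout - total_stake:.0f}")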

I stress that I never saw a bookie actually do this. If one did, he’d soon go out of business. An inconsistent set of odds like this is called a Dutch Book, and a bet which guarantees the bettor a positive return is often called a lock. It’s also the principle behind many share-trading schemes based on the idea of arbitrage.

It was only much later that I realised there is a nice way of turning the Dutch Book argument around to derive the laws of probability from the principle that the odds should be consistent, i.e. that they do not lead to situations where a Dutch Book arises.

To see this, I’ll just generalise the above discussion a bit. Imagine you are a gambler interested in betting on the outcome of some event. If the game is fair, you would expect to pay a stake px to win an amount x if the probability of the winning outcome is p.

Now  imagine that there are several possible outcomes, each with different probabilities, and you are allowed to bet a different amount on each of them. Clearly, the bookmaker has to be careful that there is no combination of bets that guarantees that you (the punter) will win.

Now consider a specific example. Suppose there are three possible outcomes; call them A, B, and C. Your bookie will accept the following bets: a bet on A with a payoff x_A, for which the stake is P_A x_A; a bet on B for which the payoff is x_B and the stake is P_B x_B; and a bet on C with stake P_C x_C and payoff x_C.

Think about what happens in the special case where the events A and B are mutually exclusive (which just means that they can’t both happen) and C is just given by  A “OR” B, i.e. the event that either A or B happens. There are then three possible outcomes.

First, if A happens but B does not happen the net return to the gambler is

R=x_A(1-P_A)-x_B P_B+x_C(1-P_C).

The first term represents the difference between the stake and the return for the successful bet on A, the second is the lost stake corresponding to the failed bet on the event B, and the third term arises from the successful bet on C. The bet on C succeeds because if A happens then A “OR” B must happen too.

Alternatively, if B happens but A does not happen, the net return is

R=-x_A P_A-x_B(1-P_B)+x_C(1-P_C),

in a similar way to the previous result except that the bet on A loses, while those on B and C succeed.

Finally there is the possibility that neither A nor B succeeds: in this case the gambler does not win at all, and the return (which is bound to be negative) is

R=-x_AP_A-x_BP_B -x_C P_C.

Notice that A and B can’t both happen because I have assumed that they are mutually exclusive. For the game to be consistent (in the sense I’ve discussed above) there must be no way of choosing the stakes x_A, x_B and x_C so that the return is positive whatever the outcome; regarding the three expressions above as linear equations in the stakes, this requires

\textrm{det} \left( \begin{array}{ccc} 1- P_A & -P_B & 1-P_C \\ -P_A & 1-P_B & 1-P_C\\ -P_A & -P_B & -P_C \end{array} \right)=P_A+P_B-P_C=0.

This means that

P_C=P_A+P_B

so, since C is the event A “OR” B, the probability that either of two mutually exclusive events A and B occurs is the sum of the separate probabilities of A and B. This is usually taught as one of the axioms from which the calculus of probabilities is derived, but what this discussion shows is that it can itself be derived from the principle of consistency. It is the only way to combine probabilities that is consistent from the point of view of betting behaviour. Similar logic leads to the other rules of probability, including those for events which are not mutually exclusive.
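
To see the linear-algebra step in action, here is a small Python sketch (an illustration of the argument above, not anything from the original derivation). It takes an inconsistent book with P_C ≠ P_A + P_B and solves the three return equations for stakes that pay off whatever happens; note that the solution can involve negative stakes, which amount to taking the bookie’s side of that bet, something the consistency argument implicitly allows.

import numpy as np

# An inconsistent book: A and B are mutually exclusive, C = "A or B",
# but the bookie has set P_C != P_A + P_B.
P_A, P_B, P_C = 0.5, 0.25, 0.6

# Coefficient matrix of the three returns as linear functions of the stakes
# (rows: "A happens", "B happens", "neither happens").
M = np.array([
    [1 - P_A,    -P_B,  1 - P_C],
    [   -P_A, 1 - P_B,  1 - P_C],
    [   -P_A,    -P_B,     -P_C],
])

print("det(M) =", np.linalg.det(M))          # equals P_A + P_B - P_C = 0.15 here

# Ask for a return of +1 in every case and solve for the stakes.
stakes = np.linalg.solve(M, np.ones(3))
print("stakes (x_A, x_B, x_C):", stakes)     # negative entries mean laying that bet
print("returns in each case:", M @ stakes)   # all equal to 1: a guaranteed win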

Notice that this kind of consistency has nothing to do with averages over a long series of repeated bets: if the rules are violated then the game itself is rigged.

A much more elegant and complete derivation of the laws of probability has been set out by Cox, but I find the Dutch Book argument a  nice practical way to illustrate the important difference between being unlucky and being irrational.

P.S. For legal reasons I should point out that, although I was a research student at the University of Sussex, I do not have a PhD. My doctorate is a DPhil.

Exploitation

Posted in Poetry, Science Politics with tags , , on October 27, 2009 by telescoper

At the last Meeting of the RAS Council on October 9th 2009, Professor Keith Mason, Chief Executive of the Science and Technology Facilities Council (STFC), made a presentation after which he claimed that STFC spends too much on “exploitation”, i.e. on doing science with the facilities it provides. This statement clearly signals an intention to cut grants to research groups still further and funnel a greater proportion of STFC’s budget into technology development rather than pure research.

Following on from Phillip Helbig’s challenge a couple of posts ago, I decided to commemorate the occasion with an appropriate sonnet, inspired by Shakespeare’s Sonnet 14.

TO.THE.ONLIE.BEGETTER.OF.THIS.INSU(LT)ING.SONNET.

Mr K.O.M.

It seems Keith Mason doesn’t give a fuck
About the future of Astronomy.
“The mess we’re in is down to rotten luck
And our country’s  ruin’d economy”;
Or that’s the tale our clueless leader tells
When oft by angry critics he’s assailed,
Undaunted he in Swindon’s office dwells
Refusing to accept it’s him that failed.
And now he tells us we must realise:
We spend “too much on science exploitation”.
Forget the dreams of research in blue skies
The new name of the game is wealth creation.
A truth his recent statement underlines
Is that we’re doomed unless this man resigns.

Automatonophobia

Posted in Biographical with tags , , , on October 25, 2009 by telescoper

OK. I admit it. I’m  automatonophobic.

I don’t think I have many irrational fears. I don’t like snakes, and am certainly a bit frightened of them, but there’s nothing irrational about that. They’re nasty and likely to be poisonous. I don’t like slugs either, especially when they eat things in my garden. They’re unpleasant but easy to deal with and I’m not at all scared of them. Likewise spiders and insects.

But  ventriloquists’ dummies give me nightmares every time.

When I was a little boy my grandfather took me to the Spanish City in Whitley Bay. There was an amusement arcade there and one of the attractions was a thing called The Laughing Sailor. You put a penny in the slot and a hideous automaton – very similar to the dummy a ventriloquist might use, except in mock-nautical attire – began to lurch backwards and forwards, flailing its arms, staring maniacally and emitting a loud mechanical cackle that was supposed to represent a laugh. The minute it started doing its turn I burst into tears and ran screaming out of the building. I’ve hated such things ever since.

The anxiety that these objects induce has now been given a name: automatonophobia, which is defined as “a persistent, abnormal, and unwarranted fear of ventriloquist’s dummies, animatronic creatures or wax statues”. Abnormal? No way. They’re simply horrible.

I’m clearly not the only one who thinks so, because there was an article in The Independent a few years ago by Neil Norman that exactly expressed the fear and loathing I feel about these creepy little dolls. Feature films  including Magic and Dead of Night, and episodes of The Twilight Zone and Hammer House of Horror have taken it further by playing with the idea that  a ventriloquist’s dummy has been possessed by some sort of malign power which  uses it to wreak terror on those around.

 We’re not talking about a benign wooden doll like Pinocchio who metamorphoses into a real boy; we’re talking about a ghastly staring-faced mannequin that is brought to life by its operator, the ventriloquist,  by inserting his hand up its backside. The dummy never looks human, but can speak and displays some human traits, usually nasty ones. The essence of a ventriloquist act is to generate the illusion  that one is watching two personalities sparring with each other when in reality the two voices are coming from the same person. Schizophrenia here we come.

It must be very clever to be able to throw your voice,  but I always had the nagging suspicion that ventriloquists use dummies to express the things they find it difficult to say through their own mouth, and so to give life to their darkest thoughts. 

Best of all the attempts to realise the sinister potential of this relationship in a movie is the “Ventriloquist’s Dummy” episode, directed by Alberto Cavalcanti, in Dead of Night, the 1945 portmanteau that some regard as Britain’s greatest horror film. Here is the part that tells  the tale of Michael Redgrave’s ventriloquist being sweatily possessed by the spirit of his malevolent dummy, Hugo. It’s old and creaky, but I find it absolutely terrifying.

So what is it about these man-child mannequins – they are always male – that makes them so creepy? First, there is their appearance: the mad, swivelling, psychotic eyes beneath arched eyebrows and that crude parody of a mouth (with painted teeth) that opens and shuts with a mechanical sound like a trap. Then there are the badly articulated limbs,  like those of a dead thing. When at rest,  their eyes remain open, their mouths fixed in a diabolic grimace. Moreover, with their rouged cheeks, lurid red lips and unnatural eyelashes, all ventriloquist’s dummies look like the badly embalmed corpses of small boys. And they always end up sitting on the knee of a horrible pervert.  Necrophilia and paedophilia all in one sick package. Yuck.

Worst of all, perhaps, is the voice. The high-pitched squawk that emerges is one of the most unpleasant sounds a human being can make. Even if you find it tolerable when you know that it comes from the ventriloquist, the last thing you want  is the dummy to start talking on its own.

I started writing this with the cathartic intention of exorcising the demon that appears whenever I see one of these wretched things. It didn’t work. However, I have now decided to take my mind off this track with a change of thread. Here’s a little quiz. I wonder if anyone can spot the connection between this post and the history of cosmology?

Alternatively, if you’re brave, you could try a bit of catharsis of your own and reveal your worst phobias through the comments box…

Stomp!

Posted in Jazz with tags , on October 24, 2009 by telescoper

I couldn’t resist a quick post about this old record, which was made in Chicago in 1928. The personnel line-up is very similar to that of the classic Hot Sevens, except that Louis Armstrong wasn’t there. Satchmo was, in fact, replaced for this number by two trumpeters, Natty Dominique and George Mitchell. John Thomas played trombone, Bud Scott was on banjo and Warren “Baby” Dodds played the drums.

The star of the show, however, is undoubtedly the great  Johnny Dodds (the older brother of the drummer). He was a clarinettist of exceptional power, a fact that enabled him to cut through the limitations of the relatively crude recording technology of the time. Standing shoulder-to-shoulder with Louis Armstrong doesn’t make it easy for a clarinettist to be heard!

This is still a favourite tune for jazz bands all around the world, but I’ve never heard a version as good as this one. There are lots of little things that contribute to its brilliance, such as the thumping 2/4 rhythm (which also gives away its origins in the New Orleans tradition of marching bands). It’s a bit fast to actually march to, though;  I suppose that’s what turns a march into a stomp. I like the little breaks too (such as Bud Scott’s banjo fill around 2:10 and, especially, the ensemble break at 2:45). But most of all it’s all about how they build up the momentum in such a  controlled way, using little key changes to shift gear but holding back until the time Johnny Dodds joins in again (around 2:20). At that point the whole thing totally catches fire and the remaining 40 seconds or so are some of the “hottest” in all of jazz history.

Some time ago I heard Robert Parker’s digitally remastered version of this track, which revealed that Baby Dodds was pounding away on the bass drum all the way through it. He’s barely audible on the original but it was clearly him that drove the performance along. Anyway, despite the relatively poor sound quality I do hope you enjoy it. It’s a little bit of musical history, but also an enormous bit of fun.

A Random Walk

Posted in The Universe and Stuff with tags , , , , , on October 24, 2009 by telescoper

In 1905 Albert Einstein had his “year of miracles” in which he published three papers that changed the course of physics. One of these is extremely famous: the paper that presented the special theory of relativity. The second was a paper on the photoelectric effect that led to the development of quantum theory. The third paper is not at all so well known. It was about the theory of Brownian motion.  In fact, Einstein spent an enormous amount of time and energy working on problems in statistical physics, something that isn’t so well appreciated these days as his work on the more glamorous topics of relativity and quantum theory.

 Brownian motion, named after the botanist Robert Brown,  is the perpetual jittering observed when small particles such as pollen grains are immersed in a fluid. It is now well known that these motions are caused by the constant bombardment of the grain by the fluid molecules. The molecules are too small to be seen directly, but their presence can be inferred from the visible effect on the much larger grain.

Brownian motion can be observed whenever  any relatively massive particles (perhaps large molecules) are immersed in a fluid comprising lighter particles. Here is a little video showing the Brownian motion observed by viewing smoke under a microscope. There is a small coherent “drift” motion in this example but superimposed on that you can clearly see the effect of gas atoms bombarding the (reddish) smoke particles:

The mathematical modelling of this process was pioneered by Einstein (and also Smoluchowski), but has now become a very sophisticated field of mathematics in its own right. I don’t want to go into too much detail about the modern approach for fear of getting far too technical, so I will concentrate on the original idea.

Einstein took the view that Brownian motion could be explained in terms of a type of stochastic process called a “random walk” (or sometimes “random flight”). I think the first person to construct a mathematical model to describe this type of phenomenon was the statistician Karl Pearson. The problem he posed concerned the famous drunkard’s walk. A man starts from the origin and takes a step of length L in a random direction. After this step he turns through a random angle and takes another step of length L. He repeats this process n times. What is the probability distribution for R, his total distance from the origin after these n steps? Pearson didn’t actually solve this problem, but posed it in a letter to Nature in 1905. Only a week later, a reply from Lord Rayleigh was published in the same journal. He hadn’t worked it all out, written it up and sent it within a week though. It turns out that Rayleigh had solved essentially the same problem in a different context way back in 1880, so he had the answer readily available when he saw Pearson’s letter.
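
As a concrete illustration (mine, not Pearson’s), a few lines of Python are enough to simulate the drunkard’s walk he posed, with a fixed step length L and a fresh random direction at every step:

import numpy as np

rng = np.random.default_rng(0)
L, n, n_walkers = 1.0, 50, 100_000      # illustrative values

theta = rng.uniform(0.0, 2 * np.pi, size=(n_walkers, n))   # a new random direction each step
x = L * np.cos(theta).sum(axis=1)
y = L * np.sin(theta).sum(axis=1)
R = np.hypot(x, y)                      # distance from the origin after n steps

# The root-mean-square distance for this walk is exactly L*sqrt(n).
print("rms distance (simulated, theory):", np.sqrt(np.mean(R**2)), L * np.sqrt(n))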

Pearson’s problem is a restricted case of a random walk, with each step having the same length. The more general case allows for a distribution of step lengths as well as random directions. To give a nice example for which virtually everything is known in a statistical sense, consider the case in which the components of each step, x and y, are independent Gaussian variables with zero mean, so that there is no preferred direction:

p(x)=\frac{1}{\sigma\sqrt{2\pi}} \exp \left(-\frac{x^2}{2\sigma^2}\right)  

A similar expression holds for p(y). Now we can think of the entire random walk as being two independent walks in x and y. After n steps the total displacement in x, say x_n, has the probability distribution

 p(x_n)=\frac{1}{\sigma\sqrt{n 2\pi }} \exp \left(-\frac{x_n^2}{2n\sigma^2}\right)

and again there is a similar expression for the distribution of y_n. Notice that each of these distributions has a mean value of zero. On average, meaning on average over the entire probability distribution of realizations of the walk, the drunkard doesn’t go anywhere. In each individual walk he certainly does go somewhere, of course, but since he is equally likely to move in any direction the probabilistic mean has to be zero. The total net displacement from the origin, r_n, is given by Pythagoras’ theorem:

r_n^2=x_n^2+y_n^2

 from which it is quite easy to establish that the probability distribution has to be

 p(r_n)=\frac{r_n}{n\sigma^2} \exp \left(-\frac{r_n^2}{2n\sigma^2}\right)

This is called the Rayleigh distribution, and this kind of process is called a Rayleigh “flight”. The mean displacement grows in proportion to σ√n (the exact mean is σ√(nπ/2)). By virtue of the ubiquitous central limit theorem, this result also holds in the original case discussed by Pearson in the limit of very large n. So this gives another example of the useful rule-of-thumb that quantities arising from fluctuations among n entities generally give a result that depends on the square root of n.
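
Here is a short Python sketch, with arbitrary illustrative parameters, that simulates many such Gaussian-step walks and compares the simulated moments of the final displacement with the Rayleigh predictions quoted above.

import numpy as np

rng = np.random.default_rng(42)
n_steps, n_walks, sigma = 100, 20_000, 1.0      # arbitrary illustrative values

# Each step has independent Gaussian x and y components with zero mean.
steps = rng.normal(0.0, sigma, size=(n_walks, n_steps, 2))
final = steps.sum(axis=1)                        # end point of each walk
r = np.hypot(final[:, 0], final[:, 1])           # distance from the origin

# Compare simulated moments with the Rayleigh predictions.
print("mean r (simulated, theory):", r.mean(), sigma * np.sqrt(n_steps * np.pi / 2))
print("rms  r (simulated, theory):", np.sqrt(np.mean(r**2)), sigma * np.sqrt(2 * n_steps))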

The figure below shows a simulation of a Rayleigh random walk. It is quite a good model for the jiggling motion executed by a Brownian particle. 

[Figure: simulated trajectory of a Rayleigh random walk]

The step size resulting from a collision of a Brownian particle with a molecule depends on the mass of the molecule and of the particle itself. A heavier particle will be relatively unaffected by each bash and thus take longer to diffuse than a lighter particle. Here is a nice video showing three-dimensional simulations of the diffusion of sugar molecules (left) and proteins (right) that demonstrates this effect.

Of course not even the most inebriated boozer will execute a truly random walk. One would expect each step direction to have at least some memory of the previous one. This gives rise to the idea of a correlated random walk. Such walks can be used to mimic the behaviour of geometric objects that possess some stiffness in their joints, such as proteins or other long molecules. Nowadays the theory of Brownian motion and related stochastic phenomena is considerably more sophisticated than the simple random-flight models I have discussed here. The more general formalism can be used to understand many situations involving phenomena such as diffusion and percolation, not to mention gambling games and the stock market. The ability of these intrinsically “random” processes to yield surprisingly rich patterns is, to me, one of their most fascinating aspects. It takes only a little tweak to create order from chaos.
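
And, for what it’s worth, here is a minimal sketch of a correlated random walk of the kind just described; the turn_sigma parameter, an arbitrary choice here, controls how much memory each step direction retains of the previous one.

import numpy as np

rng = np.random.default_rng(1)
n_steps, step_length, turn_sigma = 500, 1.0, 0.2   # illustrative values

# Each new direction is the previous one perturbed by a small Gaussian turn angle,
# so the walk retains some memory of where it was heading.
angles = np.cumsum(rng.normal(0.0, turn_sigma, n_steps))
x = np.cumsum(step_length * np.cos(angles))
y = np.cumsum(step_length * np.sin(angles))

# Small turn_sigma gives long, gently curving runs (a "stiff" molecule);
# large turn_sigma recovers the ordinary uncorrelated drunkard's walk.
print("net displacement after", n_steps, "steps:", np.hypot(x[-1], y[-1]))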

 

Nox Nocti Indicat Scientiam

Posted in Poetry, The Universe and Stuff with tags , on October 23, 2009 by telescoper

According to my blog access statistics, some of the poems I post on here seem to be fairly popular so I thought I’d put up another one by another poet  from the Metaphysical tradition, William Habington. He belonged to a prominent Catholic family and lived in England from 1605 to 1654, during a time of great religious upheaval.

The title of this particular poem is taken from the Latin (Vulgate) version of Psalm 19, the first two lines of which are

Caeli enarrant gloriam Dei et opus manus eius adnuntiat firmamentum.
Dies diei eructat verbum et nox nocti indicat scientiam.

The King James Bible translates this as

The heavens declare the glory of God; and the firmament sheweth his handywork.
Day unto day uttereth speech, and night unto night sheweth knowledge.

Some translations I have seen give “night after night” rather than the form above. My distant recollection of Latin learnt at school tells me that nocti is the dative case of the third declension noun nox, so I think “night shows knowledge to night” is indeed the correct sense of the Latin. Of course I don’t know what the sense of the original Hebrew is!

The original Psalm is the text on which one of the mightiest choruses of Haydn’s  Creation is based, “The Heavens are Telling” and Habington’s poem is a meditation on it. It seems to me to be a natural companion to the poem by John Masefield I posted earlier in the week, but I don’t know whether they share a common inspiration in the Psalm or just in the Universe itself.

When I survey the bright
Celestial sphere;
So rich with jewels hung, that Night
Doth like an Ethiop bride appear:

My soul her wings doth spread
And heavenward flies,
Th’ Almighty’s mysteries to read
In the large volumes of the skies.

For the bright firmament
Shoots forth no flame
So silent, but is eloquent
In speaking the Creator’s name.

No unregarded star
Contracts its light
Into so small a character,
Removed far from our human sight,

But if we steadfast look
We shall discern
In it, as in some holy book,
How man may heavenly knowledge learn.

It tells the conqueror
That far-stretch’d power,
Which his proud dangers traffic for,
Is but the triumph of an hour:

That from the farthest North,
Some nation may,
Yet undiscover’d, issue forth,
And o’er his new-got conquest sway:

Some nation yet shut in
With hills of ice
May be let out to scourge his sin,
Till they shall equal him in vice.

And then they likewise shall
Their ruin have;
For as yourselves your empires fall,
And every kingdom hath a grave.

Thus those celestial fires,
Though seeming mute,
The fallacy of our desires
And all the pride of life confute:–

For they have watch’d since first
The World had birth:
And found sin in itself accurst,
And nothing permanent on Earth.


Another take on cosmic anisotropy

Posted in Cosmic Anomalies, The Universe and Stuff with tags , , , on October 22, 2009 by telescoper

Yesterday we had a nice seminar here by Antony Lewis who is currently at Cambridge, but will be on his way to Sussex in the New Year to take up a lectureship there. I thought I’d put a brief post up here so I can add it to my collection of items concerning cosmic anomalies. I admit that I had missed the paper he talked about (by himself and Duncan Hanson) when it came out on the ArXiv last month, so I’m very glad his visit drew this to my attention.

What Hanson & Lewis did was to think of a number of simple models in which the pattern of fluctuations in the temperature of the cosmic microwave background radiation across the sky might have a preferred direction. They then construct optimal estimators for the parameters in these models (assuming the underlying fluctuations are Gaussian) and then apply these estimators to the data from the Wilkinson Microwave Anisotropy Probe (WMAP). Their subsequent analysis attempts to answer the question whether the data prefer these anisotropic models to the bog-standard cosmology which is statistically isotropic.

I strongly suggest you read their paper in detail because it contains a lot of interesting things, but I wanted to pick out one result for special mention. One of their models involves a primordial power spectrum that is intrinsically anisotropic. The model is of the form

P(\vec{k})=P(k) [1+a(k)g(\vec{k})]

compared to the standard P(k), which does not depend on the direction of the wavevector. They find that the WMAP measurements strongly prefer this model to the standard one. Great! A departure from the standard cosmological model! New Physics! Re-write your textbooks!

Well, not really. The direction revealed by the best-choice parameter fit to the data is shown in the smoothed picture (top). Underneath it are simulations of the sky predicted by their model, decomposed into an isotropic part (in the middle) and an anisotropic part (at the bottom).

[Figure: the preferred direction from the best-fitting anisotropic model (top), with the isotropic (middle) and anisotropic (bottom) components of a simulated sky]

You can see immediately that the asymmetry axis is extremely close to the scan axis of the WMAP satellite, i.e. at right angles to the Ecliptic plane.

This immediately suggests that it might not be a primordial effect at all but either (a) a signal that is aligned with the Ecliptic plane (i.e. something emanating from the Solar System) or (b) something arising from the WMAP scanning strategy. Antony went on to give strong evidence that it wasn’t primordial and it wasn’t from the Solar System. The WMAP satellite has a number of independent differencing assemblies. Anything external to the satellite should produce the same signal in all of them, but the observed signal varies markedly from one to another. The conclusion, then, is that this particular anomaly is largely generated by an instrumental systematic.

The best candidate for such an effect is that it is an artefact of an asymmetry in the beams of the two telescopes on the satellite. Since the scan pattern has a preferred direction, the beam profile may introduce a direction-dependent signal into the data. No attempt has been made to correct for this effect in the published maps so far, and it seems to me very likely that this is the root of this particular anomaly.

We will have to see the extent to which beam systematics will limit the ability of Planck to shed further light on this issue.