Archive for Physics

Late Arrivals at the Physics Ball

Posted in The Universe and Stuff, Uncategorized on March 13, 2009 by telescoper

Today is the day we have to endure Comic Relief, an event which happens mercifully only once a year. The idea is to raise money for charity by doing something funny. If only.

I’ve also recently been persuaded to part with £30 to buy a ticket for the annual Physics Ball, organized by Chaos (Cardiff University Physics student-staff society). In the light of this I thought I’d add yet another item of debatable comic value to Comic Relief. My old friend Bryn Jones and I have been taking a leaf out of the I’m Sorry I Haven’t a Clue book of appalling puns.

Without further ado, therefore, it gives us great pleasure to announce the late arrivals at the Physics Ball:

Mr. and Mrs. Sirquashens and their son Maxwell
Mr and Mrs Rowave and their son Mike
Mr and Mrs Ofmotion and their daughter Constance
Mr and Mrs Destate and their son Solly
And from Ireland, Mr and Mrs O’genesis and their son Barry who has brought his two pet newts (Ron and Reno).
Mr and Mrs Yabatick and their daughter Ada.
Mr and Mrs Dardtemperatureandpressure and their son, Stan.
Mr and Mrs Hertz and their son Terry.
Mr and Mrs Avolt and their energetic daughter Meg
Mr and Mrs Persymmetry and their daughter Sue
Mr and Mrs Mentum and their daughter Mo.
Mr and Mrs Sticity and their daughter Ella.
Mr and Mrs Ryovrelativity and their son, Theo, who has a successful career in the military, yes it’s General Theo Ryovrelativity. He’s brought a couple of friends too: Chris Toffle-Cymbals and Joe Desick. Oh, and have you met Rick Tensor?

Here’s Mr and Mrs Zeinstein-Condensate with their son Bo.
Mr and Mrs Gular-velocity and their daughter Anne.
And now we have Mr. and Mrs. Ihilation and their destructive daughter Ann.
Here are Mr. and Mrs. Barr and their highly pressured daughter Millie.

Mr. and Mrs. Farparticull with their son Al.
Mr. and Mrs. Diantflucks and their bright son Ray.
And the coach party has arrived from Ireland with Mr. and Mrs. O’Moshun and their important son Newt Onslow.
Mr. and Mrs. O’Lissforss and their rotating daughter Kerry.
From the Institution of Electrical Engineers we have Mr. and Mrs. Arrsirkitt and their pulsating daughter Elsie.
We now have Mr. and Mrs. Rectcurrant and their son Dai.
Mr and Mrs Hair-Theorem and their son Noah.
Mr and Mrs Mix and their daughter Dinah
Mr and Mrs Clotron and their son Si
Mr and Mrs Yaolis and, doing her best to circulate, their daughter Cora
Mr and Mrs Daze-Lore and their daughter Farrah
From the Ruritanian principality of Energee we have Prince Ippilocon-Servashun of Energee.
Mr. and Mrs. Jeenslaw and their far-from-energetic son Ray Lee.
Mr. and Mrs. Minnusflucks and their bright son Lou.
Mr. and Mrs. Litonian and their dynamic son Hammy.
Mr. and Mrs. Shuoffheet-Capassitees and their son Ray.
And more arrivals from Ireland: Mr. O’Savar-Law and his attractive wife Bea.
Mr. and Mrs. O’Watt and their powerful daughter Meg.
Mr. and Mrs. O’Particull and their petite daughter Nan
Mr and Mrs Ear-accelerator and their daughter Lynne

And although I don’t think they were invited here are Mr and Mrs Osoficklenonsense and their son Phil along with Mr and Mrs Logicaldistraction and their son Theo.

And definitely unwelcome are Mr and Mrs Thropic-principle and their daughter Anne

Sorry you can’t come in wearing those jeans. You might not like it, but we do have a Jeans criterion.

Mr and Mrs Ittifluctuation and their son Dennis
Mr and Mrs Punovexponent and their rather chaotic daughter, Leah
Mr and Mrs Stransition and their daughter Fay
Mr and Mrs Trope with their children Polly and Barry.
And we now welcome Mr. and Mrs. Way-Veckwashunn and their canny daughter Inga; that’s the shrewd Inga Way-Veckwashunn.
Mr. and Mrs. Broywavelength and their daughter Deb.
Please welcome Mr. and Mrs. Noldsnumber and their turbulent son Ray.

And now it’s Cabaret time!

First we’ve got sensational pop in the form of singer Larry Tee, followed by a quick burst of Pump up the Volume, followed by Norwegian artist Lars Kattering, then chillout with the smooth background sounds of The Three Degrees and ending up with a number of fading stars performing Back to Black.

For those of you wanting something more traditional, we’ve got folk music by The Spinors.

Mr. and Mrs. Helmholtz-Instability and their unstable son Kelvin.
Mr. and Mrs. Tensor and their son Richie
From Wales, Mr and Mrs Menshanalanalissis and their son Dai
Mr and Mrs Eyelength and their daughter Deb Eyelength
Mr and Mrs Notanotherloadofbolloxaboutstringtheory and their son Gordon Bennett Notanotherloadofbolloxaboutstringtheory
Mr and Mrs Dingo-Flyte and their son Ben
From Norway, Mr and Mrs Tableorbit and their son Lars
Mr and Mrs Sonscattering and their son Tom.
Mr. Skelleration and his rapidly moving wife Constance.
Mr. and Mrs. Vennspeed and their son Alf.
And the Welsh electrician, Dai Electric.
From Germany we have Herr Diffraction and his wife Frau Enhofer Diffraction.
Mr. and Mrs. Offslaw and their electrical engineer son Kirk.
Mr and Mrs Ginvariance and their daughter Gay
Mr and Mrs Terry-Matrix and their daughter Una
And here is Solly, the only member of the Ton family who could make it, but then he always comes on his own
Mr and Mrs On and their daughter Kay and son Barry
Mr and Mrs Roscopic-quantity and their son Mac.
Mr. and Mrs. Moment and their bipolar son Dai Paul.
Mr. and Mrs. Covraydiashonn and their glowing daughter Cherry Ann.
Mr. and Mrs. Arisation and their son Paul.
Mr. and Mrs. Onsprinkippiah and their very important son Newt.
Mr. and Mrs. Cannsoyldropp-Experryment and their very practical daughter Millie.
Mr. and Mrs. Sonnmorlie-Experryment, and here comes their son Michael with no positive result.
Mr. Menterryparticalls and his fundamentally important wife Ellie.
Mr. and Mrs. Swelldeemon and their problematic son Max.
Mr. and Mrs. Defect and their slightly spoilt daughter Crystal.
Mr. Formmotion and his constant wife Una.
And here are the Tonn children with their father Newt, and their father’s unmarried sister Prue – that’s Auntie Prue Tonn.
The coach party has arrived from Wales, with Mr. and Mrs. Nammicks and their fast-moving son Dai.
Mr. and Mrs. Vergance-Theorem and their son Dai.
Mr. and Mrs. Oolie-Ekwayshonn and their son Bernie.
From America, Mr and Mrs Chure and their spaced-out son Cosmic Tex Chure
Mr and Mrs Wurld and their son Brian
Mr and Mrs Theory and their daughter Emma
and here are the Structive-interference family, with brother and sister Des and Connie
Mr and Mrs Medes-Principle with their son Archie
Mr and Mrs Fishalsatellites and their son Artie
In a bit of a whirl here’s Mr and Mrs Currants and their son Eddy

From Germany, Mr and Mrs Duranium and their son Heinrich
Mr and Mrs Photon and their son Virgil
Mr and Mrs Velocity and their typical son Aramis
Mr and Mrs Gadrowsnumber and their daughter Ava
Mr and Mrs Experryment and their son Jules
Mr and Mrs Psimeson and their son Jay
Mr and Mrs Dington-Limit and their son Ed.
Mr. and Mrs. Eslaw and their son Charles.

From the Institution of Electrical Engineers we have Mr. and Mrs. Acksialcabell and their shielded son Carl.
We are pleased to receive Mr. Tennar and his wife Ann.
And from the Science and Technology Facilities Council we have their chief accountants, Mrs. Nanshall-Dissastar and Mr. Jettery-Kayoss: that’s Fi Nanshall-Dissastar and Bud Jettery-Kayoss.

Mr. Motiff-Forss and his magnetic wife Elektra.
Here from the left come Mr. and Mrs. Saslaw and their charged son Guy, and in the opposite direction their son Len.
Mr. and Mrs. Annicall-Annerjee and their son Mike.
Mr. and Mrs. Tamass and their son Rhys.
Mr. and Mrs. Statickpotenshall and their daughter Elektra.
Mr. Jenner-Ait-Annerjee-Levell and his wife Dee.
Mr. and Mrs. Mental-Constance and their humorous, light-hearted son Dai. That’s fun Dai Mental-Constance

Feel free to add more via the comments if you get the idea! The more excruciating the better…

Doctor Atomic

Posted in Opera on March 1, 2009 by telescoper

It’s not often that my interest in Opera overlaps significantly with my career in Physics, but last Saturday night (28th February) was a definite example, with English National Opera’s production of Doctor Atomic by John Adams providing the opportunity. I even went with a group of six physicists to see it.

The piece is, of course, centred on the personality of J Robert Oppenheimer, “The Father of the Atomic Bomb”, and is set in Los Alamos in the run-up to the first “Trinity” test detonation of an atomic bomb in July 1945. Other physicists feature in the story, especially Edward Teller and Robert Wilson, and images of many more taken from their security passes are projected onto the set at the opening of Scene 1.

According to what I was told, John Adams originally engaged a librettist to write the text for the Opera but this didn’t work out, and a libretto was instead stitched together by Peter Sellars from a variety of sources, including the poetry of John Donne, the Bhagavad Gita, scientific documents, and assorted memoranda from Los Alamos.

This means that the work doesn’t really have a narrative trajectory, and there is very little in the way of character development; instead it resolves itself into a series of impressionist tableaux. Rather than attempting to provide the coherence that the libretto lacks through the music, Adams chose to work with what he had and not try to impose a larger structure on it via the score.

The result is fascinating but it’s not without its problems. I greatly admire John Adams’ music, which manages to be both innovative and accessible. There certainly are many places in Doctor Atomic where the music, words and drama come together to make wonderful Opera. Frankly, though, there are also some passages where it gets becalmed, especially in the domestic scenes between Oppenheimer and his wife which didn’t seem to me to add any special insight into the character of either. The Opera ends with the countdown to the detonation of the Trinity Test, but I thought this was also too long, robbing the event of some of its power, although the last moments and the explosion itself were brilliantly done.

This is a new Opera, first performed as recently as 2005 in San Francisco. This production is only the second, as it has moved directly to London from a successful run at the Metropolitan Opera in New York. Many of the great operas which we now regard as standards went through several iterations before they arrived at their final version. This may happen to Doctor Atomic too. The running time of around three hours is by no means excessive by opera standards, but I do think it would work even better on stage if it were shortened quite a bit and tightened up to remove the longueurs from both acts and focus on building the tension as the test approaches.

I hope all this doesn’t sound too negative. It really is a fascinating and compelling piece. The ending, involving an empty stage and a tape recording of the words of a dying Japanese woman asking for water, moved at least one of our little group to tears.

And of course there’s that aria. At the end of Act 1, just before the interval, Oppenheimer is alone on stage while the prototype bomb is suspended behind him. His thoughts are expressed in the words of a sonnet, Batter my Heart, by John Donne:

Batter my heart, three-person’d God, for you
As yet but knock, breathe, shine, and seek to mend;
That I may rise and stand, o’erthrow me, and bend
Your force to break, blow, burn, and make me new.
I, like an usurp’d town to’another due,
Labor to’admit you, but oh, to no end;
Reason, your viceroy in me, me should defend,
But is captiv’d, and proves weak or untrue.
Yet dearly’I love you, and would be lov’d fain,
But am betroth’d unto your enemy;
Divorce me,’untie or break that knot again,
Take me to you, imprison me, for I,
Except you’enthrall me, never shall be free,
Nor ever chaste, except you ravish me.

Coming back to Cardiff on the train this morning I read a glowing review of Doctor Atomic in the Observer by Fiona Maddocks which referred to this aria as the greatest written since Puccini. I’m not sure I’d go as far as that, but it is truly wonderful, especially when sung as it was last night by the flawless Gerald Finley. It also struck me that it has many parallels with, and is at least as good as, the aria When the Great Bear and Pleiades in Peter Grimes by Benjamin Britten. Whether Doctor Atomic eventually comes to be regarded as a great Opera in the way Peter Grimes has remains to be seen, but I would bet my bottom dollar that this aria will in any case be performed many, many times as a concert piece.

Fortunately, though, you don’t have to take my word for how good it is. Since the Met version of this production has been released on video, I can end with Batter my Heart just as we saw and heard it last night, with John Adams’ great music stuttering uneasily at the start but taking on a radiant quality as the aria develops. Superb.

Executive Roast

Posted in Science Politics on February 6, 2009 by telescoper

The Chief Executive of the Science and Technology Facilities Council (Keith Mason) was recently summoned to the House of Commons Select Committee on Innovation, Universities and Skills. The video of his inquisition is now available for your enjoyment (but not his) here.

(I tried embedding this using vodpod but it didn’t work, so you’ll just have to click the link…)

Notice how in traditional fashion the light was shining in his eyes throughout. I suppose I should really feel sorry for him, but somehow I don’t. He may not be entirely responsible for the budgetary crisis currently engulfing STFC, but he handled the aftermath so badly that the damage done to relations between STFC and the community of physics researchers that rely on it for funding will take a long time to fix.

Anyway, if you can’t be bothered to watch the whole show here are some of the salient points in a summary that was passed to me by an anonymous source; I was too busy laughing to make my own notes, but I’ve added a few comments in italics. For those of you not up with acronyms, DIUS is the Department for Innovation, Universities and Skills and CSR stands for the Comprehensive Spending Review.

KM insisted that STFC had been successful in giving the UK unprecedented opportunities for doing world class science, and by the end (though by that stage his most aggressive interlocutor, Ian Gibson, had left) appeared to have earned the committee’s grudging respect (though I suspect that was for the way he played a tricky wicket as much as because he had persuaded them out of their deep concerns about his management of the STFC).

Among the many issues raised were the following:

  • KM agreed to hand over the letter detailing the Science and Technology Facilities Council’s 2007 spending review allocation to MPs for scrutiny.
  • He denied that the external review of STFC had been a “total whitewash”, a charge made on the grounds that the review panel had not been given sufficient time to interview a cross-section of staff thoroughly, or to do other than take the STFC’s self-assessment document, upon which its work was based, at “face value” without being able to find out whether the majority of STFC staff actually agreed with its content. On the contrary, he said, staff had made their views known ‘vociferously’.
  • Challenged about the perceived overrepresentation of the executive on the STFC Council, KM said that, while it had affected the perception held in the community, it made “no difference” to the outcomes (a point which the committee repeatedly contested). He added that STFC takes full account of community input via the advisory panels and science board. It’s simply not true, he insisted, that the executive dominates the Council; rather it ensures the Council is properly informed so that decisions are well founded. However he acknowledged that communications had not been good – hence the new arrangements (a Director of Communications appointment); Great, another spin doctor – PC.
  • An extra £9M had been freed up by DIUS reducing STFC’s liability to exchange-rate variations from £6M to £3M per annum over the triennium. Of this, £6M would go to exploitation grants and £3M to HEIs to promote knowledge transfer. So £6M will be used properly and the rest wasted – PC.
  • He stated that Jodrell Bank had no long-term future in radio astronomy since its location exposed it to too much ‘noise’ – but that was for Manchester University (which STFC would continue to support via E-MERLIN and SKA) to determine. It will take a silver bullet to kill that particular zombie – PC.
  • KM also voiced the opinion that there was no tension between being simultaneously responsible for developing STFC labs/campuses and funding HEIs through grants; on the contrary, it enabled better utilisation of resources, bearing in mind that the role of STFC is BOTH to promote science AND its societal/economic benefits. In other words he wants the flexibility to continue robbing Peter to pay Paul – PC.
  • For this reason (as well as reasons of administrative complexity) STFC had rejected Wakeham’s recommendation to ring-fence the ex-PPARC budget line in the forthcoming CSR. Ditto.
  • KM argued that Daresbury was not being treated unfairly in relation to Harwell (there was a good deal of probing about this by North West MPs).

My own view having watched most of the video is that Professor Mason must have an incredibly thick skin to shrug off such a sustained level of antipathy. Some of it is crude and abusive, but it’s quite impressive how well informed some of the members are.

Physics Funding by Numbers

Posted in Science Politics on January 29, 2009 by telescoper

I just read today that HEFCE has decided on the way funds will be allocated for research following the 2008 Research Assessment Exercise. I have blogged about this previously (here, there and elsewhere), but to give you a quick reminder the exercise basically graded all research in UK universities on a scale from 4* (world-leading) to 1* (nationally recognized), producing for each department a profile giving the fraction of research in each category.

HEFCE has decided that English universities will be funded according to a formula that includes everything from 2* up to 4* but with a weighting 1:3:7.  Those graded 1* and unclassified get no funding at all. How they arrived at this formula is anyone’s guess. Personally I think it’s a bit harsh on 2* which is supposed to be internationally recognized research, but there you go.

Assuming there is also a multiplier for volume (i.e. the number of people submitted) we can now easily produce another version of the physics research league table which reveals the relative amount of money each will get. I don’t know the overall normalisation, of course.

The table shows the number of staff submitted (second column) and the overall fundability factor based on a 7:3:1 weighting of the published profile multiplied by the figure in column 2. This is like the “research power” table I showed here, only with a different and much steeper weighting (7,3,1,0) versus (4,3,2,1).
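
To make the arithmetic concrete, here is a minimal sketch in Python of how a fundability factor of this kind could be computed. It follows the 7:3:1 weighting described above; the department and quality profile in the example are invented purely for illustration, and the overall normalisation remains unknown.

```python
# Sketch of the HEFCE-style QR calculation described above: weight the quality
# profile 7:3:1:0:0 (for 4*, 3*, 2*, 1*, unclassified) and multiply by volume.
# The example department and its profile are invented for illustration only.

def fundability(staff_fte, profile, weights=(7, 3, 1, 0, 0)):
    """profile: fractions of research rated 4*, 3*, 2*, 1* and unclassified."""
    assert abs(sum(profile) - 1.0) < 1e-6, "profile fractions should sum to 1"
    return staff_fte * sum(w * p for w, p in zip(weights, profile))

# A hypothetical department: 25 FTE submitted; 20% 4*, 40% 3*, 30% 2*, 10% 1*.
print(fundability(25.0, (0.20, 0.40, 0.30, 0.10, 0.00)))  # 25 x 2.9 = 72.5
```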

1. University of Cambridge 141.25 459.1
2. University of Oxford 140.10 392.3
3. Imperial College London 126.80 380.4
4. University College London 101.03 298.0
5. University of Manchester 82.80 227.7
6. University of Durham 69.50 205.0
7. University of Edinburgh 60.50 184.5
8. University of Nottingham 44.45 144.5
9. University of Glasgow 45.75 135.0
10. University of Warwick 51.00 130.1
11. University of Bristol 46.00 128.8
12. University of Birmingham 43.60 126.4
13. University of Southampton 45.30 120.0
14. Queen’s University Belfast 50.00 115.0
15. University of Leicester 45.00 114.8
16. University of St Andrews 32.20 104.7
17. University of Liverpool 34.60 96.9
18. University of Sheffield 31.50 92.9
19. University of Leeds 35.50 88.8
20. Lancaster University 26.40 88.4
21. Queen Mary, University of London 34.98 85.7
22. University of Exeter 28.00 77.0
23. University of Hertfordshire 28.00 72.8
24. University of York 26.00 67.6
25. Royal Holloway, University of London 27.96 67.1
26. University of Surrey 27.20 65.3
27. Cardiff University 32.30 64.6
28. University of Bath 20.20 63.6
29. University of Strathclyde 31.67 60.2
30. University of Sussex 20.00 55.0
31. Heriot-Watt University 19.50 51.7
32. Swansea University 20.75 48.8
33. Loughborough University 17.10 41.9
34. University of Central Lancashire 22.20 41.1
35. King’s College London 16.40 38.5
36. Liverpool John Moores University 16.50 35.5
37. Aberystwyth University 18.33 23.8
38. Keele University 10.00 18.0
39. Armagh Observatory 7.50 13.1
40. University of Kent 3.00 4.5
41. University of the West of Scotland 3.70 4.1
42. University of Brighton 1.00 1.8

It looks to me as though the fraction of funds going to the big three at the top will probably be reduced quite significantly, although apparently there are funds set aside to smooth over any catastrophic changes. I’d hazard a guess that things won’t change much for those in the middle.

I’ve left the Welsh and Scottish universities in the list for comparison, but there is no guarantee that HEFCW and SFC will use the same formula for Wales and Scotland as HEFCE did for England. I have no idea what is going to happen to Cardiff University’s funding at the moment.

Another bit of news worth putting in here is that HEFCE has protected funding for STEM subjects (Science, Technology, Engineering and Mathematics) so that the apparently poor showing of some science subjects (especially physics) compared to, e.g., Economics will not necessarily mean that physics as a whole will suffer. How this works out in practice remains to be seen.

Apparently also the detailed breakdowns of how the final profiles were reached will go public soon. That will make for some interesting reading, although apparently everything relating to individual researchers will be shredded to prevent problems with the data protection act.

On the Cards

Posted in Uncategorized on January 27, 2009 by telescoper

After an interesting chat yesterday with a colleague about the difficulties involved in teaching probabilities, I thought it might be fun to write something about card games. Actually, much of science is intimately concerned with statistical reasoning and if any one activity was responsible for the development of the theory of probability, which underpins statistics, it was the rise of games of chance in the 16th and 17th centuries. Card, dice and lottery games still provide great examples of how to calculate probabilities, a skill which is very important for a physicist.

For those of you who did not misspend your youth playing with cards like I did, I should remind you that a standard pack of playing cards has 52 cards. There are 4 suits: clubs (♣), diamonds (♦), hearts (♥) and spades (♠). Clubs and spades are coloured black, while diamonds and hearts are red. Each suit contains thirteen cards, including an Ace (A), the plain numbered cards (2, 3, 4, 5, 6, 7, 8, 9 and 10), and the face cards: Jack (J), Queen (Q), and King (K). In most games the most valuable card is the Ace, followed by the King, Queen and Jack, and then from 10 down to 2.

I’ll start with Poker, because it seems to be one of the simplest ways of losing money these days. Imagine I start with a well-shuffled pack of 52 cards. In a game of five-card draw poker, the players essentially bet on who has the best hand made from five cards drawn from the pack. In more complicated versions of poker, such as Texas hold’em, one has two “private” cards in one’s hand and five on the table in plain view. These community cards are usually revealed in stages, allowing a round of betting at each stage. One has to make the best hand one can using five cards from one’s private cards and those on the table. The existence of community cards makes this very interesting because it gives some additional information about other players’ holdings. For the present discussion, however, I will just stick to individual hands and their probabilities.

How many different five-card poker hands are possible?

To answer this question we need to know about permutations and combinations. Imagine constructing a poker hand from a standard deck. The deck is full when you start, which gives you 52 choices for the first card of your hand. Once that is taken you have 51 choices for the second, and so on down to 48 choices for the last card. One might think the answer is therefore 52×51×50×49×48 = 311,875,200, but that’s not right because it doesn’t actually matter which order your five cards are dealt to you.

Suppose you have 4 aces and the 2 of clubs in your hand; the sequences (A♣, A♥, A♦, A♠, 2♣) and (A♥, 2♣, A♠, A♦, A♣) are counted as distinct hands among the number I obtained above. There are many other possibilities like this where the cards are the same but the order is different. In fact there are 5×4×3×2×1 = 120 such permutations. Mathematically this is denoted 5!, or five-factorial. Dividing the number above by this gives the actual number of possible five-card poker hands: 2,598,960. This number is important because it describes the size of the “possibility space”. Each of these hands is a possible poker deal, and each is assumed to be “equally likely”, unless the dealer is cheating.

This calculation is an example of a mathematical combination as opposed to a permutation. The number of combinations one can make of r things chosen from a set of n is usually denoted Cn,r. In the example above, r=5 and n=52. Note that 52×51×50×49×48 can be written 52!/47!, and dividing by the 5! orderings gives 52!/(47! 5!). The general result for the number of combinations can likewise be written Cn,r = n!/[(n-r)! r!].
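
If you want to check that arithmetic for yourself, here is a minimal sketch in Python using only the standard library (math.comb needs Python 3.8 or later):

```python
from math import comb, factorial

ordered_deals = 52 * 51 * 50 * 49 * 48          # equivalently factorial(52) // factorial(47)
print(ordered_deals)                             # 311875200

distinct_hands = ordered_deals // factorial(5)   # divide out the 5! orderings of a hand
print(distinct_hands)                            # 2598960

print(comb(52, 5))                               # the same answer via C(52, 5) directly
```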

Poker hands are characterized by the occurrence of particular events of varying degrees of probability. For example, a flush is five cards of the same suit but not in sequence (e.g. 2♠, 4♠, 7♠, 9♠, Q♠). A numerical sequence of cards regardless of suit (e.g. 7♣, 8♠, 9♥, 10♦, J♥) is called a straight. A sequence of cards of the same suit is called a straight flush. One can also have a pair of cards of the same value, or two pairs, or three of a kind, or four of a kind, or a full house which is three of one kind and two of another. One can also have nothing at all, i.e. not even a pair.

The relative value of the different hands is determined by how probable they are, and to work that out takes quite a bit of effort.

Consider the probability of getting, say, 5 spades (in other words, a spade flush). To do this we have to calculate the number of distinct hands that have this composition. There are 13 spades in the deck to start with, so there are 13×12×11×10×9 permutations of 5 spades drawn from the pack, but, because of the possible internal rearrangements, we have to divide again by 5! The result is that there are 1287 possible hands containing 5 spades. Not all of these are mere flushes, however. Some of them will include sequences too, e.g. 8♠, 9♠, 10♠, J♠, Q♠, which makes them straight flushes. There are only 10 possible straight flushes in spades (the lowest card can be anything from the Ace, counting low, up to the 10), so only 1277 of the possible hands counted above are just flushes. This logic applies to any of the suits, so in all there are 1277×4=5108 flush hands and 10×4=40 straight flush hands.
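
The flush counting works the same way; a short sketch:

```python
from math import comb

spade_hands = comb(13, 5)                  # 1287 five-card hands made entirely of spades
straight_flushes_per_suit = 10             # lowest card anything from Ace (counting low) to 10
plain_flushes_per_suit = spade_hands - straight_flushes_per_suit   # 1277

print(4 * plain_flushes_per_suit)          # 5108 flush hands over the four suits
print(4 * straight_flushes_per_suit)       # 40 straight flush hands
```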

I won’t go through the details of calculating the probability of the other types of hand, but I’ve included a table showing their probabilities obtained by dividing the relevant number of possibilities by the total number of hands (given at the bottom of the middle column).

TYPE OF HAND        Number of Possible Hands    Probability
Straight Flush                            40       0.000015
Four of a Kind                           624       0.000240
Full House                             3,744       0.001441
Flush                                  5,108       0.001965
Straight                              10,200       0.003925
Three of a Kind                       54,912       0.021129
Two Pair                             123,552       0.047539
One Pair                           1,098,240       0.422569
Nothing                            1,302,540       0.501177
TOTALS                             2,598,960       1.000000
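
The remaining entries in the table can be verified by similar direct counting. The sketch below is just one way of organising the calculation, but it reproduces every figure above:

```python
from math import comb

total          = comb(52, 5)                              # 2,598,960 possible hands
straight_flush = 10 * 4
four_of_a_kind = 13 * 48                                  # choose the rank, then any fifth card
full_house     = 13 * comb(4, 3) * 12 * comb(4, 2)        # triple rank and suits, then the pair
flush          = 4 * comb(13, 5) - straight_flush
straight       = 10 * 4**5 - straight_flush               # 10 sequences, any suit for each card
three_of_kind  = 13 * comb(4, 3) * comb(12, 2) * 4 * 4
two_pair       = comb(13, 2) * comb(4, 2)**2 * 44         # 44 = any card of a third rank
one_pair       = 13 * comb(4, 2) * comb(12, 3) * 4**3
nothing        = total - (straight_flush + four_of_a_kind + full_house + flush
                          + straight + three_of_kind + two_pair + one_pair)

for name, n in [("Straight Flush", straight_flush), ("Four of a Kind", four_of_a_kind),
                ("Full House", full_house), ("Flush", flush), ("Straight", straight),
                ("Three of a Kind", three_of_kind), ("Two Pair", two_pair),
                ("One Pair", one_pair), ("Nothing", nothing), ("TOTALS", total)]:
    print(f"{name:16s}{n:>12,}{n / total:>12.6f}")
```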


Poker involves rounds of betting in which the players, amongst other things, try to assess how likely their hand is to win compared with the others involved in the game. If your hand is weak, you can fold and allow the accumulated bets to be given to your opponents. Alternatively, you can  bluff and bet strongly on a poor hand (even if you have “nothing”) to convince your opponents that your hand is strong. This tactic can be extremely successful in the right circumstances. In the words of the late great Paul Newman in the film Cool Hand Luke,  “sometimes nothing can be a real cool hand”.

If you bet heavily on your hand, the opponent may well think it is strong even if it contains nothing, and fold even if his hand has a higher value. To bluff successfully requires a good sense of timing – it depends crucially on who gets to bet first – and extremely cool nerves. To spot when an opponent is bluffing requires real psychological insight. These aspects of the game are in many ways more interesting than the basic hand probabilities, and they are difficult to reduce to mathematics.

Another card game that serves as a source for interesting problems in probability is Contract Bridge. This is one of the most difficult card games to play well because it is a game of logic that also involves chance to some degree. Bridge is a game for four people, arranged in two teams of two. The four sit at a table with members of each team facing opposite each other. Traditionally the different positions are called North, South, East and West although you don’t actually need a compass to play. North and South are partners, as are East and West.

For each hand of Bridge an ordinary pack of cards is shuffled and dealt out by one of the players, the dealer. Let us suppose that the dealer in this case is South. The pack is dealt out one card at a time to each player in turn, starting with West (to dealer’s immediate left) then North and so on in a clockwise direction. Each player ends up with thirteen cards when all the cards are dealt.

Now comes the first phase of the game, the auction. Each player looks at their cards and makes a bid, which is essentially a coded message that gives information to their partner about how good their hand is. A bid is basically an undertaking to win a certain number of tricks with a certain suit as trumps (or with no trumps). The meaning of tricks and trumps will become clear later. For example, dealer might bid “one spade” which is a suggestion that perhaps he and his partner could win one more trick than the opposition with spades as the trump suit. This means winning seven tricks, as there are always thirteen to be won in a given deal. The next to bid – in this case West – can either pass (saying “no bid”) or bid higher, like an auction. The value of the suits increases in the sequence clubs, diamonds, hearts and spades. So to outbid one spade (1S), West has to bid at least two hearts (2H), say, if hearts is the best suit for him but if South had opened 1C then 1H would have been sufficient to overcall . Next to bid is South’s partner, North. If he likes spades as trumps he can raise the original bid. If he likes them a lot he can jump to a much higher contract, such as four spades (4S).

This is the most straightforward level of Bridge bidding, but in reality there are many bids that don’t mean what they might appear to mean at first sight. Examples include conventional bids  (such as Stayman or Blackwood),  splinter and transfer bids and the rest of the complex lexicon of Bridge jargon. There are some bids to which partner must respond (forcing bids), and those to which a response is discretionary. And instead of overcalling a bid, one’s opponents could “double” either for penalties in the hope that the contract will fail or as a “take-out” to indicate strength in a suit other than the one just bid.

Bidding carries on in a clockwise direction until nobody dares take it higher. Three successive passes will end the auction, and the contract is then established. Whichever player opened the bidding in the suit that was finally chosen for trumps becomes “declarer”. If we suppose our example ended in 4S, then it was South that becomes declarer because he opened the bidding with 1S. If West had overcalled 2 Hearts (2H) and this had passed round the table, West would be declarer.

The scoring system for Bridge encourages teams to go for high contracts rather than low ones, so if one team has the best cards it doesn’t necessarily get an easy ride; it should undertake an ambitious contract rather than stroll through a simple one. In particular there are extra points for making “game” (a contract of four spades, four hearts, five clubs, five diamonds, or three no trumps). There is a huge bonus available for bidding and making a grand slam (an undertaking to win all thirteen tricks, i.e. seven of something) and a smaller but still impressive bonus for a small slam (six of something). This encourages teams to push for a valuable contract: tricks bid and made count a lot more than overtricks even without the slam bonus.

The second phase of the game now starts. The person to the left of declarer plays a card of their choice, possibly following yet another convention, such as “fourth highest of the longest suit”. The player opposite declarer puts all his cards on the table and becomes “dummy”, playing no further part in this particular hand. Dummy’s cards are then entirely under the control of the declarer. All three players can see the cards in dummy, but only declarer can see his own hand. Apart from the role of dummy, the card play is then similar to whist.

Each trick consists of four cards played in clockwise sequence from whoever leads. Each player, including dummy, must follow the suit led if he has a card of that suit in his hand. If a player doesn’t have a card of that suit he may “ruff”, i.e. play a trump card, or simply discard some card (probably of low value) from another suit. Good Bridge players keep a careful track of all discards to improve their knowledge of the cards held by their  opponents. Discards can also be used by the defence (i.e. East and West in this case) to signal to each other. Declarer can see dummy’s cards but the defenders don’t get to see each other’s.

One can win a trick in one of two ways. Either one plays a higher card of the suit led, e.g. K♥ beats 10♥ or anything else in hearts below the King; aces are high, by the way. Alternatively, if one has no cards of the suit that has been led, one can play a trump (or “ruff”). A trump always beats a card of the original suit, but more than one player may ruff and in that case the highest trump played carries the trick. For instance, East may ruff only to be over-ruffed by South if both have none of the suit led. Of course one may not have any trumps at all, making a ruff impossible. If one has neither the original suit nor a trump one has to discard something from another suit. The possibility of winning a trick by a ruff also does not exist if the contract is of the no-trumps variety.

Whoever wins a given trick leads to start the next one. This carries on until thirteen tricks have been played. Then comes the reckoning of whether the contract has been made. If so, points are awarded to declarer’s team. If not, penalty points are awarded to the defenders which are higher if the contract has been doubled. Then it’s time for another hand, probably another drink, and very possibly an argument about how badly declarer played the hand.

I’ve gone through the game in some detail in an attempt to make it clear why this is such an interesting game for probabilistic reasoning. During the auction, partial information is given about every player’s holding. It is vital to interpret this information correctly if the contract is to be made. The auction can reveal which of the defending team holds important high cards, or whether the trump suit is distributed strangely. Because the cards are played in strict clockwise sequence this matters a lot. On the other hand, even with very firm knowledge about where the important cards lie, one still often has a difficult logical puzzle to solve if all the potential winners in one’s hand are actually to be made into tricks. It can be a very subtle game.

I only have space-time for one illustration of this kind of thing, but it’s another one that is fun to work out. As is true to a lesser extent in poker, one is not really interested in the initial probabilities of the different hands but rather how to update these probabilities using conditional information as it may be revealed through the auction and card play. In poker this updating is done largely by interpreting the bets one’s opponents are making.

Let us suppose that I am South, and I have been daring enough to bid a grand slam in spades (7S). West leads, and North lays down dummy. I look at my hand and dummy, and realise that we have 11 trumps between us, missing only the King (K) and the 2. I have all other suits covered, and enough winners to make the contract provided I can make sure I win all the trump tricks. The King, however, poses a problem. The Ace of Spades will beat the King, but if I just lead the Ace, it may be that one of East or West has both the K and the 2. In this case he would simply play the two to my Ace. The King would be an automatic winner then: as the highest remaining trump it must win a trick eventually. The contract is then doomed.

On the other hand if the spades split 1-1 between East and West then the King drops when I lead the Ace, so that strategy makes the contract. It all depends how the cards split.

But there is a different way to play this situation. Suppose, for example, that A♠ and Q♠ are on the table (in dummy’s hand) and I, as declarer, have managed to win the first trick in my hand. If I think the K♠ lies in West’s hand, I lead a spade. West has to follow suit if he can. If he has the King, and plays it, I can cover it with the Ace so it doesn’t win. If, however, West plays low I can play Q♠. This will win if I am right about the location of the King. Next time I can lead the A♠ from dummy and the King will fall. This play is called a finesse.

But is this better than the previous strategy, playing for the drop? It’s all a question of probabilities, and this in turn boils down to the number of possible deals that allow each strategy to work.

To start with, we need the total number of possible bridge hands. This is quite easy: it’s the number of combinations of 13 objects taken from 52, i.e. C52,13. This is a truly enormous number: over 600 billion. You have to play a lot of games to expect to be dealt the same hand twice!

What we now have to do is evaluate the probability of each possible arrangement of the missing King and two. Dummy and declarer’s hands are known to me. There are 26 remaining cards whose location I do not know. The relevant space of possibilities is now smaller than the original one. I have 26 cards to assign between East and West. There are C26,13 ways of assigning West’s 13 cards, but once I have done this the remaining 13 must be in East’s hand.

Suppose West has the 2 but not the K. Conditional on this assumption, I know one of his cards, but there are 12 others remaining to be assigned. There are therefore C24,12 hands with this possible arrangement of the trumps. Obviously the K has to be with East in this case. The finesse would not work as East would cover the Q with the K, but the K would drop if the A were played.

The opposite situation, with West having the K but not the 2 has the same number of possibilities associated with it. Here West must play the K when a spade is led so it will inevitably lose to the A. South abandons the idea of finessing when West rises and just covers it with the higher card.

Suppose instead West doesn’t have any trumps. There are C24,13 ways of constructing such a hand: 13 cards from the 24 remaining non-trumps. Here the finesse fails because the K is with East but the drop fails too. East plays the 2 on the A and the K becomes a winner.

The remaining possibility is that West has both trumps: this can happen in C24,11 ways. Here the finesse works but the drop fails. If West plays low on the South lead, declarer calls for the Q from dummy to hold the trick. Next lead he plays the A to drop the K.

To turn these counts into probabilities we just divide by the total number of different ways I can construct the hands of East and West, which is C26,13. The results are summarized in the table here.

Spades in West’s hand    Number of hands    Probability    Drop    Finesse
None                     C24,13             0.24           0       0
K                        C24,12             0.26           0.26    0.26
2                        C24,12             0.26           0.26    0
K2                       C24,11             0.24           0       0.24
Total                    C26,13             1.00           0.52    0.50

The last two columns show the contributions of each arrangement to the probability of success of either playing for the drop or the finesse. You can see that the drop is slightly more likely to work than the finesse in this case.
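
These numbers follow directly from the combination counts in the table; here is a minimal sketch of the calculation:

```python
from math import comb

total = comb(26, 13)               # ways of choosing West's 13 cards from the 26 unseen

hands = {
    "None": comb(24, 13),          # West holds neither missing spade
    "K":    comb(24, 12),          # West holds the King but not the 2
    "2":    comb(24, 12),          # West holds the 2 but not the King
    "K2":   comb(24, 11),          # West holds both missing spades
}

probs = {case: n / total for case, n in hands.items()}
drop    = probs["K"] + probs["2"]    # succeeds on a 1-1 split: about 0.52
finesse = probs["K"] + probs["K2"]   # succeeds when West holds the King: about 0.50
print(probs, drop, finesse)
```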

Note, however, that this ignores any information gleaned from the auction, which could be crucial. For example, if West had made a bid then it is more likely that he had cards of some value so this might suggest the K might be in his hand. Note also that the probability of the drop and the probability of the finesse do not add up to one. This is because there are situations where both could work or both could fail.

This calculation does not mean that the finesse is never the right tactic. It sometimes has much higher probability than the drop, and is often strongly motivated by information the auction has revealed. Calculating the odds precisely, however, gets more complicated the more cards are missing from declarer’s holding. For those of you too lazy to compute the probabilities, the book On Gambling, by Oswald Jacoby contains tables of the odds for just about any bridge situation you can think of.

Finally on the subject of Bridge, I wanted to mention a fact that many people think is paradoxical but which isn’t really. Looking at the table shows that the odds of a 1-1 split in spades here are 0.52:0.48 or 13: 12. This comes from how many cards are in East and West’s hand when the play is attempted. There is a much quicker way of getting this answer than the brute force method I used above. Consider the hand with the spade two in it. There are 12 remaining opportunities in that hand that the spade K might fill, but there are 13 available slots for it in the other. The odds on a 1-1 split must therefore be 13:12. Now suppose instead of going straight for the trumps, I play off a few winners in the side suits (risking that they might be ruffed, of course). Suppose I lead out three Aces in the three suits other than spades and they all win. Now East and West have only 20 cards between them and by exactly the same reasoning as before, the odds of a 1-1 split have become 10:9 instead of 13:12. Playing out seemingly irrelevant suits has increased the probability of the drop working. Although I haven’t touched the spades, my assessment of the probability of the spade distribution has changed significantly.
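
That shortcut generalises neatly: if each defender still holds m unseen cards, and neither of the missing spades has yet appeared, the odds on a 1-1 split are m to m-1. A small sketch, on that assumption:

```python
def one_one_split_odds(cards_per_defender):
    """Odds (for : against) that the two missing spades split 1-1, given that
    each defender still holds this many unseen cards."""
    m = cards_per_defender
    # The hand holding the spade 2 has m - 1 other slots in which the King could
    # sit; the opposite hand has m slots, so the odds on a 1-1 split are m : m - 1.
    return m, m - 1

print(one_one_split_odds(13))   # (13, 12) before any cards have been played
print(one_one_split_odds(10))   # (10, 9) after three side-suit tricks to which both follow
```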

This sort of thing is a major reason why I always think of probabilities in a Bayesian way. As information is gradually revealed one updates the assessment of the probability of the remaining unknowns.

But probability is only a part of Bridge; the best players don’t actually leave very much  to chance…

The Physics Overview

Posted in Science Politics on January 17, 2009 by telescoper

I found out by accident the other day that the Panels conducting the 2008 Research Assessment Exercise have now published their subject overviews, in which they comment on trends within each discipline.

Heading straight for the overview produced by the panel for Physics (which is available, together with those of two other panels, here), I found some interesting points, some of which relate to comments posted on my previous items about the RAE results (here and here) until I terminated the discussion.

One issue that concerns many physicists is how the research profiles produced by the RAE panel will translate into funding. I’ve taken the liberty of extracting a couple of paragraphs from the report to show what they think. (For those of you not up with the jargon, UoA19 is the Unit of Assessment 19, which is Physics).

The sub-panel is pleased with how much of the research fell into the 4* category and that this excellence is widely spread so that many smaller departments have their share of work assessed at the highest grade. Every submitted department to UoA19 had at least 70% of their overall quality profile at 2* or above, i.e. internationally recognised or above.

Sub-panel 19 takes the view that the research agenda of any group, or of any individual for that matter, is interspersed with fallow periods during which the next phase of the research is planned and during which outputs may be relatively incremental, even if of high scientific quality. In the normal course of events successful departments with a long term view will have a number of outputs at the 3* and 2* level indicating that the groundwork is being laid for the next set of 4* work. This is most obviously true for those teams involved with very major experiments in the big sciences, but also applies to some degree in small science. Thus the quality profile is a dynamic entity and even among groups of very high international standing there is likely to be cyclic variation in the relative amounts of 3* and 4* work according to the rhythm of their research programmes. Most departments have what we would consider a healthy balance between the perceived quality levels. The subpanel strongly believes that the entire overall profile should be considered when measuring the quality of a department, rather than focussing on the 4* component only.

I think this is very sensible, but for more reasons than are stated. For a start the judgement of what is 4* or 3* must be to some extent subjective and it would be crazy to allocate funding entirely according to the fraction of 4* work. I’ve heard informally that the error in any of the percentages for any assessment is plus or minus 10%, which also argues for a conservative formula. However one might argue about the outcome, the panels clearly spent a lot of time and effort determining the profiles so it would seem to make sense to use all the information they provide rather than just a part.

Curiously, though, the panel made no comment about why it is that physics came out so much worse than chemistry in the 2008 exercise (about one-third of the chemistry departments in the country had a profile-weighted quality mark higher than or equal to the highest-rated physics department). Perhaps they just think UK chemistry is a lot better than UK physics.

Anyway, as I said, the issue most of us are worrying about is how this will translate into cash. I suspect HEFCE hasn’t worked this out at all yet either. The panel clearly thinks that money shouldn’t just follow the 4* research, but the HEFCE managers might differ. If they do wish to follow a drastically selective policy they’ve got a very big problem: most physics departments are rated very close together in score. Any attempt to separate them using the entire profile would be hard to achieve and even harder to justify.

The panel also made a specific comment about Wales and Scotland, which is particularly interesting for me (being here in Cardiff):

Sub-panel 19 regards the Scottish Universities Physics Alliance collaboration between Scottish departments as a highly positive development enhancing the quality of research in Scotland. South of the border other collaborations have also been formed with similar objectives. On the other hand we note with concern the performance of three Welsh departments where strategic management did not seem to have been as effective as elsewhere.

I’m not sure whether the dig about Welsh physics departments is aimed at the Welsh funding agency HEFCW or the individual university groups; SUPA was set up with the strong involvement of SFC and various other physics groupings in England (such as the Midlands Physics Alliance) were actively encouraged by HEFCE. It is true, though, that the 3 active physics departments in Wales (Cardiff, Swansea and Aberystwyth) all did quite poorly in the RAE. In the last RAE, HEFCW did not apply as selective a funding formula as its English counterpart HEFCE with the result that Cardiff didn’t get as much research funding as it would if it had been in England. One might argue that this affected the performance this time around, but I’m not sure about this as it’s not clear how any extra funding coming into Cardiff would have been spent. I doubt if HEFCW will do any different this time either. Welsh politics has a strong North-South issue going on, so HEFCW will probably feel it has to maintain a department in the North. It therefore can’t penalise Aberystwyth too badly for its poor RAE showing. The other two departments are larger and had very similar profiles (Swansea better than Cardiff, in fact) so there’s very little justification for being too selective there either.

The panel remarked on the success of SUPA which received a substantial injection of cash from the Scottish Funding Council (SFC) and which has led to new appointments in strategic areas in several Scottish universities. I’m a little bit skeptical about the long-term benefits of this because the universities themselves will have to pick up the tab for these positions when the initial funding dries up. Although it will have bought them extra points on the RAE score the continuing financial viability of physics departments is far from guaranteed because nobody yet knows whether they will gain as much cash from the outcome as they spent to achieve it. The same goes for other universities, particularly Nottingham, who have massively increased their research activity with cash from various sources and consequently done very well in the RAE. But will they get back as much as they have put in? It remains to be seen.

What I would say about SUPA is that it has definitely given Scottish physics a higher profile, largely from the appointment of Ian Halliday to front it. He is an astute political strategist and respected scientist who performed impressively as Chief Executive of the now-defunct Particle Physics and Astronomy Research Council and is also President of the European Science Foundation. Having such a prominent figurehead gives the alliance more muscle than a group of departmental heads would ever hope to have.

So should there be a Welsh version of SUPA? Perhaps WUPA?

Well, Swansea and Cardiff certainly share some research interests in the area of condensed-matter physics but their largest activities (Astronomy in Cardiff, Particle Physics in Swansea) are pretty independent. It seems to me to be well worth thinking of some sort of initiative to pool resources and try to make Welsh physics a bit less parochial, but the question is how to do it. At coffee the other day, I suggested an initiative in the area of astroparticle physics could bring in genuinely high quality researchers as well as establishing synergy between Swansea and Cardiff, which are only an hour apart by train. The idea went down like a lead balloon, but I still think it’s a good one. Whether HEFCW has either the resources or the inclination to do something like it is another matter, even if the departments themselves were to come round.

Anyway, I’m sure there will be quite a lot more discussion about our post-RAE strategy if and when we learn more about the funding implications. I personally think we could do with a radical re-think of the way physics in Wales is organized and could do with a champion who has the clout of Scotland’s SUPA-man.

The mystery as far as I am concerned remains why Cardiff did so badly in the ratings. I think the first quote may offer part of the explanation because we have large groups in Astronomical Instrumentation and Gravitational Physics, both of which have very long lead periods. However, I am surprised and saddened by the fact that the fraction rated at 4* is so very low. We need to find out why. Urgently.

Maps, Territories and Landscapes

Posted in The Universe and Stuff on January 10, 2009 by telescoper

I was looking through recent posts on cosmic variance and came across an interesting item featuring a map from another blog (run by Samuel Arbesman) which portrays the Milky Way in the style of  a public transport map:

[Image: the Milky Way drawn in the style of a public transport map]

This is just a bit of fun, of course, but I think maps like this are quite fascinating, not just as practical guides to navigating a transport system but also because they often stand up very well as works of art. It’s also interesting how they evolve with time  because of changes to the network and also changing ideas about stylistic matters.

A familiar example is the London Underground or Tube map. There is a fascinating website depicting the evolutionary history of this famous piece of graphic design. Early versions simply portrayed the railway lines inset into a normal geographical map which made them rather complicated, as the real layout of the lines is far from regular. A geographically accurate depiction of the modern tube network is shown here which makes the point:

[Image: a geographically accurate map of the London Underground network]

A revolution occurred in 1933 when Harry Beck compiled the first “modern” version of the map. His great idea was to simplify the representation of the network around a single unifying feature. To this end he turned the Central Line (in red) into a straight line travelling left to right across the centre of the page, only changing direction at the extremities. All other lines were also distorted to run basically either North-South or East-West and produce a much more regular pattern, abandoning any attempt to represent the “real” geometry of the system but preserving its topology (i.e. its connectivity).  Here is an early version of his beautiful construction:

Note that although this is a “modern” map in terms of how it represents the layout, it does look rather dated in terms of other design elements such as the border and typefaces used. We tend not to notice how much we surround the essential things with embellishments that date very quickly.

More modern versions of this map that you can get at tube stations and the like rather spoil the idea by introducing a kink in the central line to accommodate the complexity of the interchange between Bank and Monument stations as well as generally buggering about with the predominantly  rectilinear arrangement of the previous design:

I quite often use this map when I’m giving popular talks about physics. I think it illustrates quite nicely some of the philosophical issues related to theoretical representations of nature. I think of theories as being like maps, i.e. as attempts to make a useful representation of some aspects of external reality. By useful, I mean the things we can use to make tests. However, there is a persistent tendency for some scientists to confuse the theory and the reality it is supposed to describe, especially a tendency to assert there is a one-to-one relationship between all elements of reality and the corresponding elements in the theoretical picture. This confusion was stated most succinctly by the Polish scientist Alfred Korzybski in his memorable aphorism:

The map is not the territory.

I see this problem written particularly large with those physicists who persistently identify the landscape of string-theoretical possibilities with a multiverse of physically existing domains in which all these are realised. Of course, the Universe might be like that but it’s by no means clear to me that it has to be. I think we just don’t know what we’re doing well enough to know as much as we like to think we do.

A theory is also surrounded by a penumbra of non-testable elements, including those concepts that we use to translate the mathematical language of physics into everyday words. We shouldn’t forget that many equations of physics have survived for a long time, but their interpretation has changed radically over the years.

The inevitable gap that lies between theory and reality does not mean that physics is a useless waste of time, it just means that its scope is limited. The Tube  map is not complete or accurate in all respects, but it’s excellent for what it was made for. Physics goes down the tubes when it loses sight of its key requirement: to be testable.

In any case, an attempt to make a grand unified theory of the London Underground system would no doubt produce a monstrous thing so unwieldy that it would be useless in practice. I think there’s a lesson there for string theorists too…

Now, anyone for a game of Mornington Crescent?

Who put the Bang in Big Bang?

Posted in The Universe and Stuff with tags , , , , , on December 29, 2008 by telescoper

Back from the frozen North, having had a very nice time over Christmas, I thought it was time to reactivate my blog and to redress the rather shameful lack of science on what is supposed to be a science blog. Rather than writing a brand new post, though, I’m going to cheat like a TV chef by sticking up something that I did earlier. I’ve had the following piece floating around on my laptop for a while, so I thought I’d rehash it and post it on here. It is based on an article that was published in a heavily revised and shortened form in New Scientist in 2007, where it attracted some splenetic responses despite there not being anything particularly controversial in it! It’s not particularly topical, but there you go. The television is full of repeats these days too.

Around twenty-five years ago a young physicist came up with what seemed at first to be an absurd idea: that, for a brief moment in the very distant past, just after the Big Bang, something weird happened to gravity that made it push rather than pull.  During this time the Universe went through an ultra-short episode of ultra-fast expansion. The physicist in question, Alan Guth, couldn’t prove that this “inflation” had happened nor could he suggest a compelling physical reason why it should, but the idea seemed nevertheless to solve several major problems in cosmology.

Twenty-five years on, Guth is a professor at MIT and inflation is now well established as an essential component of the standard model of cosmology. But should it be? After all, we still don’t know what caused it and there is little direct evidence that it actually took place. Data from probes of the cosmic microwave background seem to be consistent with the idea that inflation happened, but how confident can we be that it is really a part of the Universe’s history?

According to the Big Bang theory, the Universe was born in a dense fireball which has been expanding and cooling for about 14 billion years. The basic elements of this theory have been in place for over eighty years, but it is only in the last decade or so that a detailed model has been constructed which fits most of the available observations with reasonable precision. The problem is that the Big Bang model is seriously incomplete. The fact that we do not understand the nature of the dark matter and dark energy that appear to fill the Universe is a serious shortcoming. Even worse, we have no way at all of describing the very beginning of the Universe, which appears in the equations used by cosmologists as a “singularity” – a point of infinite density that defies any sensible theoretical calculation. We have no way to define a priori the initial conditions that determine the subsequent evolution of the Big Bang, so we have to try to infer from observations, rather than deduce by theory, the parameters that govern it.

The establishment of the new standard model (known in the trade as the “concordance” cosmology) is now allowing astrophysicists to turn back the clock in order to understand the very early stages of the Universe’s history and, hopefully, to answer the ultimate question of what happened at the Big Bang itself: how did the Universe begin?

Paradoxically, it is observations on the largest scales accessible to technology that provide the best clues about the earliest stages of cosmic evolution. In effect, the Universe acts like a microscope: primordial structures smaller than atoms are blown up to astronomical scales by the expansion of the Universe. This also allows particle physicists to use cosmological observations to probe structures too small to be resolved in laboratory experiments.

Our ability to reconstruct the history of our Universe, or at least to attempt this feat, depends on the fact that light travels with a finite speed. The further away we see a light source, the further back in time its light was emitted. We can now observe light from stars in distant galaxies emitted when the Universe was less than one-sixth of its current size. In fact we can see even further back than this using microwave radiation rather than optical light. Our Universe is bathed in a faint glow of microwaves produced when it was about one-thousandth of its current size and had a temperature of thousands of degrees, rather than the chilly three degrees above absolute zero that characterizes the present-day Universe. The existence of this cosmic background radiation is one of the key pieces of evidence in favour of the Big Bang model; it was first detected in 1964 by Arno Penzias and Robert Wilson who subsequently won the Nobel Prize for their discovery.
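As a rough guide to where those numbers come from – this is just the standard textbook scaling, nothing more sophisticated than that – the temperature of the background radiation simply falls in inverse proportion to the size of the Universe:

$$ T \propto \frac{1}{a}, \qquad \text{so} \qquad T_{\rm then} \approx \frac{T_{\rm now}}{a} \approx \frac{2.7\,\mathrm{K}}{10^{-3}} \approx 3000\,\mathrm{K}, $$

i.e. when the Universe was one-thousandth of its present size the radiation temperature was indeed a few thousand degrees, hot enough for the matter to be ionized and the Universe opaque.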

The process by which the standard cosmological model was assembled has been a gradual one, but it culminated with recent results from the Wilkinson Microwave Anisotropy Probe (WMAP). For several years this satellite has been mapping the properties of the cosmic microwave background and how it varies across the sky. Small variations in the temperature of the sky result from sound waves excited in the hot plasma of the primordial fireball. These have characteristic properties that allow us to probe the early Universe in much the same way that solar astronomers use observations of the surface of the Sun to understand its inner structure,  a technique known as helioseismology. The detection of the primaeval sound waves is one of the triumphs of modern cosmology, not least because their amplitude tells us precisely how loud the Big Bang really was.

The pattern of fluctuations in the cosmic radiation also allows us to probe one of the exciting predictions of Einstein’s general theory of relativity: that space should be curved by the presence of matter or energy. Measurements from WMAP reveal that our Universe is very special: it has very little curvature, and so has a very finely balanced energy budget: the positive energy of the expansion almost exactly cancels the negative energy of gravitational attraction. The Universe is (very nearly) flat.
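For anyone who likes to see the balance sheet written down explicitly, the statement can be summarised by the Friedmann equation (standard notation; a sketch rather than anything taken from WMAP itself):

$$ \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}}, \qquad \Omega - 1 \equiv \frac{\rho}{\rho_{c}} - 1 = \frac{kc^{2}}{a^{2}H^{2}}. $$

A perfectly flat Universe has zero spatial curvature (k = 0), which happens only if the total density equals the critical density ρ_c = 3H²/8πG, i.e. Ω = 1 exactly; WMAP tells us that Ω is very close to unity.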

The observed geometry of the Universe provides a strong piece of evidence that there is a mysterious and overwhelming preponderance of dark stuff in our Universe. We can’t see this dark matter and dark energy directly, but we know it must be there because we know the overall budget is balanced. If only economics were as simple as physics.

Computer Simulation of the Cosmic Web

The concordance cosmology has been constructed not only from observations of the cosmic microwave background, but also using hints supplied by observations of distant supernovae and by the so-called “cosmic web” – the pattern seen in the large-scale distribution of galaxies which appears to match the properties calculated from computer simulations like the one shown above, courtesy of Volker Springel. The picture that has emerged to account for these disparate clues is consistent with the idea that the Universe is dominated by a blend of dark energy and dark matter, and in which the early stages of cosmic evolution involved an episode of accelerated expansion called inflation.

A quarter of a century ago, our understanding of the state of the Universe was much less precise than today’s concordance cosmology. In those days it was a domain in which theoretical speculation dominated over measurement and observation. Available technology simply wasn’t up to the task of performing large-scale galaxy surveys or detecting slight ripples in the cosmic microwave background. The lack of stringent experimental constraints made cosmology a theorists’ paradise in which many imaginative and esoteric ideas blossomed. Not all of these survived to be included in the concordance model, but inflation proved to be one of the hardiest (and indeed most beautiful) flowers in the cosmological garden.

Although some of the concepts involved had been formulated in the 1970s by Alexei Starobinsky, it was Alan Guth who in 1981 produced the paper in which the inflationary Universe picture first crystallized. At this time cosmologists didn’t know that the Universe was as flat as we now think it to be, but it was still a puzzle to understand why it was even anywhere near flat. There was no particular reason why the Universe should not be extremely curved. After all, the great theoretical breakthrough of Einstein’s general theory of relativity was the realization that space could be curved. Wasn’t it a bit strange that after all the effort needed to establish the connection between energy and curvature, our Universe decided to be flat? Of all the possible initial conditions for the Universe, isn’t this very improbable? As well as being nearly flat, our Universe is also astonishingly smooth. Although it contains galaxies that cluster into immense chains over a hundred million light years long, on scales of billions of light years it is almost featureless. This also seems surprising. Why is the celestial tablecloth so immaculately ironed?
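To put a rough number on just how strange, note that in an ordinary decelerating Universe any departure from flatness grows with time (again, a standard textbook result rather than anything peculiar to Guth’s paper):

$$ |\Omega - 1| = \frac{|k| c^{2}}{a^{2}H^{2}} \propto \begin{cases} a^{2} & \text{(radiation era)} \\ a & \text{(matter era)} \end{cases} $$

so for Ω to be anywhere near unity today it must have been tuned to unity to within roughly one part in 10^16 at the epoch of nucleosynthesis, and far more finely still at earlier times.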

Guth grappled with these questions and realized that they could be resolved rather elegantly if only the force of gravity could be persuaded to change its sign for a very short time just after the Big Bang. If gravity could push rather than pull, then the expansion of the Universe could speed up rather than slow down. Then the Universe could inflate by an enormous factor (10^60 or more) in next to no time and, even if it were initially curved and wrinkled, all memory of this messy starting configuration would be lost. Our present-day Universe would be very flat and very smooth no matter how it had started out.
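The reason the trick works can be sketched in one line: during inflation the expansion rate H is roughly constant, so the scale factor grows exponentially and the curvature term is driven towards zero,

$$ a(t) \propto e^{Ht} \qquad \Rightarrow \qquad |\Omega - 1| = \frac{|k|c^{2}}{a^{2}H^{2}} \propto e^{-2Ht}. $$

After N e-foldings of inflation any initial curvature is suppressed by a factor of order e^{-2N}; with N of sixty or so, even a grossly curved starting point ends up indistinguishable from flat.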

But how could this bizarre period of anti-gravity be realized? Guth hit upon a simple physical mechanism by which inflation might just work in practice. It relied on the fact that in the extreme conditions pertaining just after the Big Bang, matter does not behave according to the classical laws describing gases and liquids but instead must be described by quantum field theory. The simplest type of quantum field is called a scalar field; such objects are associated with particles that have no spin. Modern particle theory involves many scalar fields which are not observed in low-energy interactions, but which may well dominate affairs at the extreme energies of the primordial fireball.

Classical fluids can undergo what is called a phase transition if they are heated or cooled. Water, for example, exists in the form of steam at high temperature but it condenses into a liquid as it cools. A similar thing happens with scalar fields: their configuration is expected to change as the Universe expands and cools. Phase transitions do not happen instantaneously, however, and sometimes the substance involved gets trapped in an uncomfortable state in between where it was and where it wants to be. Guth realized that if a scalar field got stuck in such a “false” state, energy – in a form known as vacuum energy – could become available to drive the Universe into accelerated expansion. We don’t know which scalar field of the many that may exist theoretically is responsible for generating inflation, but whatever it is, it is now dubbed the inflaton.
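The sense in which vacuum energy makes gravity push rather than pull can be seen from the acceleration equation of standard cosmology (a sketch, not a quotation from Guth):

$$ \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right). $$

Ordinary matter and radiation have positive pressure, so the right-hand side is negative and the expansion decelerates. Vacuum energy, on the other hand, behaves like a fluid with p = −ρc², which flips the sign: the expansion accelerates.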

This mechanism is an echo of a much earlier idea introduced to the world of cosmology by Albert Einstein in 1917. He didn’t use the term vacuum energy; he called it a cosmological constant. He also didn’t imagine that it arose from quantum fields but considered it to be a modification of the law of gravity. Nevertheless, Einstein’s cosmological constant idea was incorporated by Willem de Sitter into a theoretical model of an accelerating Universe. This is essentially the same mathematics that is used in modern inflationary cosmology. The connection between scalar fields and the cosmological constant may also eventually explain why our Universe seems to be accelerating now, but that would require a scalar field with a much lower effective energy scale than that required to drive inflation. Perhaps dark energy is some kind of shadow of the inflaton.

Guth wasn’t the sole creator of inflation. Andy Albrecht and Paul Steinhardt, Andrei Linde, Alexei Starobinsky, and many others produced different and, in some cases, more compelling variations on the basic theme. It was almost as if it were an idea whose time had come. Suddenly inflation was an indispensable part of cosmological theory. Literally hundreds of versions of it appeared in the leading scientific journals: old inflation, new inflation, chaotic inflation, extended inflation, and so on. Out of this activity came the realization that a phase transition as such wasn’t really necessary; all that mattered was that the field should find itself in a configuration where the vacuum energy dominated. It was also realized that other theories not involving scalar fields could behave as if they did. Modified gravity theories or theories with extra space-time dimensions provide ways of mimicking scalar fields with rather different physics. And if inflation could work with one scalar field, why not have inflation with two or more? The only problem was that there wasn’t a shred of evidence that inflation had actually happened.

This episode provides a fascinating glimpse into the historical and sociological development of cosmology in the eighties and nineties. Inflation is undoubtedly a beautiful idea. But the problems it solves were theoretical problems, not observational ones. For example, the apparent fine-tuning of the flatness of the Universe can be traced back to the absence of a theory of initial conditions for the Universe. Inflation turns an initially curved universe into a flat one, but the fact that the Universe appears to be flat doesn’t prove that inflation happened. There are initial conditions that lead to present-day flatness even without the intervention of an inflationary epoch. One might argue that these are special and therefore “improbable”, and consequently that it is more probable that inflation happened than that it didn’t. But on the other hand, without a proper theory of the initial conditions, how can we say which are more probable? Based on this kind of argument alone, we would probably never really know whether we live in an inflationary Universe or not.

But there is another thread in the story of inflation that makes it much more compelling as a scientific theory, because it makes direct contact with observations. Although it was not the original motivation for the idea, Guth and others realized very early on that if a scalar field were responsible for inflation then it should be governed by the usual rules governing quantum fields. One of the things that quantum physics tells us is that nothing evolves entirely smoothly. Heisenberg’s famous Uncertainty Principle imposes a degree of unpredictability on the behaviour of the inflaton. The most important ramification of this is that although inflation smooths away any primordial wrinkles in the fabric of space-time, in the process it lays down others of its own. The inflationary wrinkles are really ripples, and are caused by wave-like fluctuations in the density of matter travelling through the Universe like sound waves travelling through air. Without these fluctuations the cosmos would be smooth and featureless, containing no variations in density or pressure and therefore no sound waves. Even if it began in a fireball, such a Universe would be silent. Inflation puts the Bang in Big Bang.

The acoustic oscillations generated by inflation have a broad spectrum (they comprise oscillations with a wide range of wavelengths); they are of small amplitude (about one hundred-thousandth of the background); they are spatially random and have Gaussian statistics (like waves on the surface of the sea; this is the most disordered state); they are adiabatic (matter and radiation fluctuate together); and they are formed coherently. This last point is perhaps the most important. Because inflation happens so rapidly, all of the acoustic “modes” are excited at the same time. Hitting a metal pipe with a hammer generates a wide range of sound frequencies, but all the different modes of the pipe start their oscillations at the same time. The result is not just random noise but something moderately tuneful. The Big Bang wasn’t exactly melodic, but there is a discernible relic of the coherent nature of the sound waves in the pattern of cosmic microwave temperature fluctuations seen by WMAP. The acoustic peaks seen in the WMAP angular spectrum provide compelling evidence that whatever generated the pattern did so coherently.
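A rough way to see why coherence matters (schematically; the real calculation is rather more involved) is that each Fourier mode of the primordial plasma oscillates like a cosine, all the modes starting from the same phase:

$$ \delta_{k}(\eta) \propto \cos\left[k\, r_{s}(\eta)\right], \qquad r_{s}(\eta) = \int_{0}^{\eta} c_{s}\, d\eta'. $$

When the radiation is finally released, modes with k r_s = nπ are caught at the extremes of their oscillation, and it is these that show up as the series of peaks in the angular spectrum. Had the modes been excited with random phases instead, the peaks would be washed out into featureless noise.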
 

There are very few alternative theories on the table that are capable of reproducing the WMAP results. Some interesting possibilities have emerged recently from string theory. Since this theory requires more space-time dimensions than the four we are used to, something has to be done with the extra ones we don’t observe. They could be wrapped up so small we can’t perceive them. Or, as is assumed in braneworld cosmologies, our four-dimensional universe exists as a subspace (called a “brane”) embedded within a higher-dimensional space; we don’t see the extra dimensions because we are confined to the subspace. These ideas may one day lead to a viable alternative to inflationary orthodoxy. But it is early days and not all the calculations needed to establish this theory have yet been done. In any case, not every cosmologist feels the urge to make cosmology consistent with string theory, which has even less evidence in favour of it than inflation. Some of the wilder outpourings of string-inspired cosmology seem to me to be the physics equivalent of nausea-induced vomiting.

So did inflation really happen? Does WMAP prove it? Will we ever know?

It is difficult to talk sensibly about scientific proof of phenomena that are so far removed from everyday experience. At what level can we prove anything in astronomy, even on the relatively small scale of the Solar System? We all accept that the Earth goes around the Sun, but do we really even know for sure that the Universe is expanding? I would say that the latter hypothesis has survived so many tests and is consistent with so many other aspects of cosmology that it has become, for pragmatic reasons, an indispensable part of our world view. I would hesitate, though, to say that it was proven beyond all reasonable doubt. The same goes for inflation. It is a beautiful idea that fits snugly within the standard cosmological model and binds many parts of it together. But that doesn’t necessarily make it true. Many theories are beautiful, but that is not sufficient to prove them right. When generating theoretical ideas scientists should be fearlessly radical, but when it comes to interpreting evidence we should all be unflinchingly conservative.

WMAP has also provided a tantalizing glimpse into the future of cosmology, and yet more stringent tests of the standard framework that currently underpins it. Primordial fluctuations produce not only a pattern of temperature variations over the sky, but also a corresponding pattern of polarization. This is fiendishly difficult to measure, partly because it is such a weak signal (only a few percent of the temperature signal) and partly because the primordial microwaves are heavily polluted by polarized radiation from our own Galaxy. Although WMAP achieved a detection of this polarization, the published map is heavily corrupted by foregrounds.

Future generations of experiments, such as the Planck Surveyor (due for launch in 2009), will have to grapple with the thorny issue of foreground subtraction if substantial progress is to be made. But there is a crucial target that justifies these endeavours. Inflation does not just produce acoustic waves; it also generates different modes of fluctuation, called gravitational waves, that involve twisting deformations of space-time. Inflationary models connect the properties of acoustic and gravitational fluctuations, so if the latter can be detected the implications for the theory are profound. Gravitational waves produce a very particular form of polarization pattern (called the B-mode) which can’t be generated by acoustic waves, so this seems a promising way to test inflation. Unfortunately the B-mode signal is very weak and the experience of WMAP suggests it might be swamped by foregrounds. But it is definitely worth a go, because it would add considerably to the evidence in favour of inflation as an element of physical reality.

Besides providing strong evidence for the concordance cosmology, the WMAP satellite has also furnished some tantalizing evidence that there may be something missing. Not all the properties of the microwave sky seem consistent with the model. For example, the temperature pattern should be structureless, mirroring the random Gaussian fluctuations of the primordial density perturbations. In reality the data contain tentative evidence of strange alignments, such as the so-called “Axis of Evil” discovered by Kate Land and Joao Magueijo. These anomalies could be systematic errors in the data, or perhaps residual problems with the foregrounds that have to be subtracted, but they could also indicate the presence of things that can’t be described within the standard model.

Cosmology is now a mature and (perhaps) respectable science: the coming together of theory and observation in the standard concordance model is a great advance in our understanding of the Universe and how it works. But it should be remembered that there are still many gaps in our knowledge. We don’t know the form of the dark matter. We don’t have any real understanding of dark energy. We don’t know for sure if inflation happened and we are certainly a long way from being able to identify the inflaton. In a way we are as confused as we ever were about how the Universe began. But now, perhaps, we are confused on a higher level and for better reasons…

Pluralia Tantum

Posted in Literature, Pedantry with tags , , , on December 5, 2008 by telescoper

Meanwhile, over on the e-astronomer, Andy Lawrence recently posted an item about the lamentable tendency of astronomers to abuse the English language. The focus of his venom was “extincted”, a word used by many astro-types as an adjective to describe the state of affairs when light from a source (e.g. a quasar) has suffered “extinction” by intervening matter. “Extinction” is formed from the verb “extinguish” in the same way that “distinction” is formed from “distinguish”. Nobody would describe a professor as “distincted” (certainly not if it is Andy Lawrence) so, clearly, “extincted” is inappropriate. Actually, if you really want to nit-pick you could object to “extinction” being applied to an object such as a quasar, when it isn’t actually the object that is suffering from it but the light it has emitted.

But as a gripe, this is fair enough I’d say. Andy went on to encourage his legions of adoring readers to contribute their own pet hates, preferably with an astronomical orientation. My contribution was “decimate”, which means “to remove the tenth part” or “to reduce by ten percent”, from the Roman practice of punishing disobedient legions by killing every tenth man, but which is now regrettably often used to mean “annihilate” or “obliterate”. You might think this hasn’t got much to do with astronomy but, sadly, it does. Indeed, a press release from STFC discussing the recent ten percent cuts to its grants budget states that the consequent reduction in PDRAs

..will not cause the decimation of physics departments as has been speculated in media reports.

I would expect a civil servant to have done a bit better, so presumably this was written by an astronomer too. At any rate, it is precisely wrong.

You might argue that things like this don’t matter. Language evolves, and if modern usage deviates from its previous meanings then we should just let it change. I fully accept the dynamic nature of language and do not by any means object to all such changes. Society changes and so must the words we use. But if a change is (a) a result of sloppiness and (b) results in the loss of a very good usage, to be replaced by a bad one, then I think educated people should stand their ground and fight it. If we don’t do that, language doesn’t just change; it decays.

Most of us practising scientists have to spend a lot of our time writing scientific papers, departmental memos, grant applications and even books. I think many astronomers see this activity as a chore, take no pleasure from it, and invest the minimum care in it. I was fortunate to have a really excellent writer, John Barrow, as my thesis supervisor, and he convinced me that it was worth making the effort to write the best prose I could whatever the context. Not only does this attitude eliminate the ambiguity that is the bane of scientific writing; taking pains over style and grammar also allows one to feel the pleasure of craftsmanship for its own sake. With John’s guidance and encouragement, I learned to enjoy writing through the satisfaction of finding neat forms of words or nice turns of phrase. You never really feel good about what you do if you scrape through at the minimum acceptable level. Make the effort and you will be more fulfilled, and the long hours of slog you spend putting together a complicated paper will at least be enlivened by a genuine sense of delight when things fall neatly into place, and a warm glow of achievement when you read it back and it sounds not just acceptable but actually good.

But I digress.

One of the other contributors to Andy’s list of examples of bad grammar was a chap called Norman Gray who objected to astronomers’ use of the word “data” as a plural noun, as in “the data indicate” rather than “the data indicates”. I was taken aback by this because I was expecting the opposite objection.

He has a lengthy rant about this on his own blog so I won’t repeat his arguments in detail here, merely give a synopsis. The word “data” is formed from the Latin plural of the word “datum” (itself formed from the past participle of the Latin verb “dare”, meaning “to give”), hence meaning “things given” or words to that effect. The usage of “data” that we employ now (to refer to measurements or quantitative information) seems not to have been present in Roman or mediaeval times, so Norman argues that it is a deliberate archaism to treat it as a Latin plural now. He also argues that “data” in modern usage is a “mass noun” and so should on those grounds also be treated as singular.

For those of you who aren’t up with such things, English nouns can be of two forms: “count” and “non-count” (or “mass”). Count nouns are those that can be enumerated and therefore have both plural and singular forms: one eye, two eyes, etc. Non-count nouns (which is a better term than “mass nouns”) are those which describe something that is not enumerable, such as “furniture” or “cutlery”. Such things can’t be counted and they don’t have different singular and plural forms. You can have two chairs (count noun) but you can’t have two furnitures (non-count noun).

Count and non-count nouns require different grammatical treatment. You can ask “how much furniture do you have?” but not how many. The answer to a “how much” question usually requires a unit or measure word (e.g. “a vanload of furniture”) but the answer to a “how many” question would be just a number. Next time you are in a supermarket queue where it says “ten items or less” you will appreciate that the sign is grammatically incorrect. “Item” is most definitely a count noun, so the correct form should be “ten items or fewer”.

Anyway, Norman Gray asserts that (a) “data” is a non-count noun and that (b) it should therefore be singular. Forms such as “the data are..” are out (“a vile anacoluthon”) and “the data is…” is in.

So is he right?

Not really.  Unkind though it may be to dismantle a carefully constructed obsession, I think his arguments have quite a few problems with them.

For a start, it seems clear to me that there are (at least) two distinct uses of the word data. One is clearly of non-count type. This is the use of “data” to describe an undifferentiated, unspecified or unlimited quantity of information, such as that stored on a computer disk. Of such stuff you might well ask “how much data do you have?” and the answer would be in some units (e.g. Gbytes). This clearly identifies it as a mass noun.

But there is another meaning, which is that ascribed to specified pieces of information either given (as per the original latin) or obtained from a measurement. Such things are precisely defined, enumerable and clearly therefore of count-noun form. Indeed one such entity could reasonably be called a datum and the plural would be data. This usage applies when the context defines the relevant quantum of information so no unit is required. This is the usage that arises in most scientific papers, as opposed to software manuals. “In Figure 1, the data are plotted…” is correct. Although it sounds clumsy you could well ask in such a situation “how many data do you have?” (meaning how many measurements do you have) and the answer would just be a number. Archaism? No. It’s just right.

To labour the point still further,  here are another two sentences that show the different uses:

“If I had less data my disk would have more free space on it.” (Non-count)

“If I had fewer data I would not be able to obtain an astrometric solution.” (Count).

Contrary to Norman’s claims, it is not unusual for the same words (if they’re nouns) to have both count and non-count forms in different contexts. I give the example of “whisky” as in “my glass is full of whisky” (non-count) versus “two whiskies, please, barman”. His objection to this was that in the second case a whisky is an artefact of a metonymic shift which takes the word “whisky” to refer to the glass containing it.

Metonymy involves using a word related to a thing rather than the word for the thing itself, as in “I have hungry mouths to feed”; it’s not really the mouths that are fed, but the people the mouths belong to. In fact there’s a bit of this going on when people talk about sources being “extincted” rather than their light.

This, Norman alleges, invalidates the example because the resulting meaning is different. This objection is a bit silly, because the whole point is that the two forms should have different meanings; otherwise why have them? In any case, the example simply involves me asking for two well-defined quantities of whisky. I’m not convinced of the relevance of metonymy here. What I care about is the whisky, not what it comes in, and when I drink the whisky I don’t drink the glass anyway. Metonymy would apply if I talked about drinking a couple of glasses. Consider “I drank two whiskies, one after the other” versus “I drank two glasses, one after the other”. In both cases, what has actually been drunk?

There are countless other examples (pun intended). “Fire” can be a mass noun (“fire is dangerous”) but also a count noun (“the firemen were fighting three fires simultaneously”). Another nice one is “hair”, which is non-count when it is on someone’s head (“my hair is going grey”) but count when hairs, in the plural, are being split.

Interestingly, though, the  non-count forms of these nouns are all singular. Indeed, many non-count nouns exist only in the singular: such nouns are called singularia tantum. Examples include “dust” and “wealth”. So,  if we accept that “data” can be a non-count noun, does that mean that it should necessarily be treated as singular when it does take on that role?

An example that might be taken to support this view could be “statistics” (the field thereof) which is a non-count noun. Although it appears to be derived from a plural, you would certainly say “statistics is a hard subject”  rather than “statistics are a hard subject”.  On the other hand “statistics” can refer to a set, each element of which is a statistic (i.e. a number), thus giving another example of a noun that can be of either count or non-count form; you might reasonably say “the statistics are impressive” in the count case.  The non-count form “statistics” is a better  example of metonymy than the example above, as it refers to the study of the (count) statistics rather than to the things themselves.

In fact there are also mass nouns, described as pluralia tantum, which exist only in the plural. A (not entirely accurate) list is given here. Examples include scissors and pants, for which the normal measure  is a “pair”. Although these are technically non-count nouns (in the sense that you can’t have one scissor, etc) they don’t shed much light on the example in front of us. Perhaps more pertinent is the word “clothes” which is of non-count type but which is certainly plural. You can’t have one “clothe” (or any other number for that matter) but you would definitely say “your clothes are dirty”.

A more subtle example with relevance to the Latin root of “data” is “media”, which can refer to broadcast media (non-count) or the plural of medium (count). “The media are out to get me” seems a correct construction to me, so the non-count form of this noun is a plurale tantum (the singular of pluralia tantum).

So,  just because a word may be a non-count noun, it doesn’t necessarily have to be singular.

To summarise, my argument is that it is not correct to assert that “data” is a mass noun: it may or may not be one, depending on the context. If it is acting as a count noun (which I contend is the case in most science writing) then it is definitely plural. Furthermore, even in cases where it is clearly a mass noun, and especially if you reject the alternative meaning as a count noun, it is still by no means obvious that it must be treated as singular (because of the existence of pluralia tantum). In fact I would go a bit further and argue that you can only justify the singular non-count form at all if you accept that there is a count alternative. To be honest, though, I think I prefer the singular interpretation in the non-count case, as in “statistics”. It just sounds better.

If anyone has managed to read all the way through this exercise in pedantry I’d be interested to see any comments on my analysis of data.

Theories of Everything

Posted in The Universe and Stuff with tags , on October 18, 2008 by telescoper

A string theorist arrives home one evening. When he goes into his house, his wife tells him that she’s hired a private detective who has been following him for the past week and she now knows he’s having an affair with another woman.

“But darling…” says the string theorist. “I can explain everything.”