Archive for January, 2009

Physics Funding by Numbers

Posted in Science Politics on January 29, 2009 by telescoper

I just read today that HEFCE has decided on the way funds will be allocated for research following the 2008 Research Assessment Exercise. I have blogged about this previously (here, there and elsewhere), but to give you a quick reminder the exercise basically graded all research in UK universities on a scale from 4* (world-leading) to 1* (nationally recognized), producing for each department a profile giving the fraction of research in each category.

HEFCE has decided that English universities will be funded according to a formula that includes everything from 2* up to 4*, with a weighting of 1:3:7. Those graded 1* and unclassified get no funding at all. How they arrived at this formula is anyone’s guess. Personally I think it’s a bit harsh on 2*, which is supposed to be internationally recognized research, but there you go.

Assuming there is also a multiplier for volume (i.e. the number of people submitted) we can now easily produce another version of the physics research league table which reveals the relative amount of money each will get. I don’t know the overall normalisation, of course.

The table shows the number of staff submitted (second column) and the overall fundability factor based on a 7:3:1 weighting of the published profile multiplied by the figure in column 2. This is like the “research power” table I showed here, only with a different and much steeper weighting (7,3,1,0) versus (4,3,2,1).

1. University of Cambridge 141.25 459.1
2. University of Oxford 140.10 392.3
3. Imperial College London 126.80 380.4
4. University College London 101.03 298.0
5. University of Manchester 82.80 227.7
6. University of Durham 69.50 205.0
7. University of Edinburgh 60.50 184.5
8. University of Nottingham 44.45 144.5
9. University of Glasgow 45.75 135.0
10. University of Warwick 51.00 130.1
11. University of Bristol 46.00 128.8
12. University of Birmingham 43.60 126.4
13. University of Southampton 45.30 120.0
14. Queen’s University Belfast 50.00 115.0
15. University of Leicester 45.00 114.8
16. University of St Andrews 32.20 104.7
17. University of Liverpool 34.60 96.9
18. University of Sheffield 31.50 92.9
19. University of Leeds 35.50 88.8
20. Lancaster University 26.40 88.4
21. Queen Mary, University of London 34.98 85.7
22. University of Exeter 28.00 77.0
23. University of Hertfordshire 28.00 72.8
24. University of York 26.00 67.6
25. Royal Holloway, University of London 27.96 67.1
26. University of Surrey 27.20 65.3
27. Cardiff University 32.30 64.6
28. University of Bath 20.20 63.6
29. University of Strathclyde 31.67 60.2
30. University of Sussex 20.00 55.0
31. Heriot-Watt University 19.50 51.7
32. Swansea University 20.75 48.8
33. Loughborough University 17.10 41.9
34. University of Central Lancashire 22.20 41.1
35. King’s College London 16.40 38.5
36. Liverpool John Moores University 16.50 35.5
37. Aberystwyth University 18.33 23.8
38. Keele University 10.00 18.0
39. Armagh Observatory 7.50 13.1
40. University of Kent 3.00 4.5
41. University of the West of Scotland 3.70 4.1
42. University of Brighton 1.00 1.8
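
Just to make the arithmetic explicit, here is a rough sketch in Python of how such a fundability factor would be computed. The function and the quality profile below are my own inventions for illustration; they are not any department’s actual RAE figures.

```python
# Sketch of the HEFCE-style formula: weight the RAE quality profile by
# 7:3:1:0:0 (for 4*, 3*, 2*, 1*, unclassified) and multiply by the number
# of staff submitted. The profile used below is made up for illustration.

def fundability(staff, profile, weights=(7, 3, 1, 0, 0)):
    """Weighted quality score multiplied by submitted staff volume."""
    return staff * sum(w * f for w, f in zip(weights, profile))

# Hypothetical department: 50 staff, profile 25% 4*, 45% 3*, 25% 2*, 5% 1*
example = fundability(50.0, (0.25, 0.45, 0.25, 0.05, 0.00))
print(round(example, 1))   # 50 × (7×0.25 + 3×0.45 + 1×0.25) = 167.5
```

The overall normalisation (pounds per point) is unknown, so only the ratios between departments are meaningful.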

It looks to me as though the fraction of funds going to the big three at the top will be reduced quite significantly, although apparently there are funds set aside to smooth over any catastrophic changes. I’d hazard a guess that things won’t change much for those in the middle.

I’ve left the Welsh and Scottish universities in the list for comparison, but there is no guarantee that HEFCW and SFC will use the same formula for Wales and Scotland as HEFCE did for England. I have no idea what is going to happen to Cardiff University’s funding at the moment.

Another bit of news worth putting in here is that HEFCE has protected funding for STEM subjects (Science, Technology, Engineering and Mathematics), so that the apparently poor showing of some science subjects (especially physics) compared with, e.g., Economics will not necessarily mean that physics as a whole will suffer. How this works out in practice remains to be seen.

Apparently the detailed breakdowns of how the final profiles were reached will also be made public soon. That will make for some interesting reading, although everything relating to individual researchers will apparently be shredded to prevent problems with the Data Protection Act.

Little Bits of History

Posted in Biographical, Jazz on January 28, 2009 by telescoper

I noticed this morning that I’ve passed a bit of a milestone on here. I’ve actually reached my 100th post. That probably means I’ve been spending way too much time blogging but, undaunted, here I go again.

Ages ago (or it seems like ages ago) I posted an item about Humphrey Lyttelton and during the course of it I mentioned that my Dad had played the drums with Humph some years ago. I did mention in that post that I would put up a picture as soon as I found it, which I have now done. Here it is, taken probably somewhere around 1990.

humph_dad_2

I’m not entirely sure of the venue. I always thought this session took place in the Corner House in Newcastle but on closer inspection it doesn’t really look like it in this photograph, so I wouldn’t bet on my memory being right. It’s not a great photograph, but that’s definitely my Dad (Alan Coles) on the drums. I don’t know the other personnel, but you do get a proper impression of how tall Humph was (he’s on trumpet, of course).

Humph of course had his own band, but many jazz venues (including the Corner House) preferred to invite soloists to come and play with the house band, mainly, I think, because it was cheaper that way. And of course the local musicians loved it because they got to play with their heroes. My Dad idolized Humphrey Lyttelton, but when he finally got to play with him he was extremely nervous and didn’t particularly enjoy the evening.

Semi-professional bands like the Savoy Band shown here couldn’t afford fancy band uniforms or outfits, so for some reason they all seemed to settle on cheap red nylon shirts, as shown in the picture. I don’t know why, because they’re not at all pleasant to wear if you’re going to get sweaty. But these shirts reminded me of a story that I’ve bored people with over many years. When I was little (in the 70s) there was a similar band in Newcastle called the Phoenix Jazz Band. They also wore horrible red nylon shirts for gigs, except for their young bass player (a guy called Gordon), who refused to do so. This uppity young student teacher turned up for gigs in a black-and-yellow hooped jersey so he looked rather like a bumble-bee or a wasp. The rest of the band called him, rather sarcastically, Sting. He soon went on to other things but the name stuck.

My dad always claimed that Sting had played the double bass in our garage – when I lived in Benwell village. I don’t remember having seen him though, and I might well have been having my leg pulled. Actually it wasn’t a garage anyway, more of a big wooden shed where he kept his drums and lots of other junk.

Anyway, I don’t know if I’ve mentioned this but I did for a while have dreams of becoming a Jazz musician myself. I wanted to be a saxophonist but my Dad persuaded me that I should learn to play the clarinet first and it would be easy then to switch to sax. I don’t think it was very good advice because they’re quite different instruments to play, but I rather think he had pushed the clarinet because he wanted me to play traditional Jazz rather than modern stuff.

I found that I had quite a good ear for music and a pretty good sense of rhythm so I mastered the rudiments fairly quickly but never got much further than that. I even got as far as sitting in with some bands, but never became a full-time member of one.

Sitting in with one of these traditional Jazz bands is a very informal business. Usually the repertoire consists of standard tunes that everyone knows and there are no real arrangements as such. The trumpet usually plays the lead for a chorus or two, with impromptu clarinet and trombone alongside, then there’s a sequence of solos (usually a couple of choruses for each player, unless you really get into it and the leader shouts “take another!”), and then you play out to the end. Other than that you make it up as you go along.

But there is one notable exception to this, a number called High Society. This probably began as a Mardi Gras parade tune but later on came to be played as an up-tempo flag-waver. Almost every Jazz band, however, plays it the same way. It starts with a sort-of call to arms with drum rolls and a few phrases on the horns a bit like a fanfare before moving into tempo and it has quite a few scored passages that are played straight (i.e. without improvisation). When it breaks eventually into the solos there is an unwritten rule that the clarinet soloist plays a standard set-piece solo obbligato, at least for one chorus, after which it’s back to the more normal improvised solo.

I don’t know how this became such a strong tradition but you can check it out yourself. There are dozens of versions of High Society played by different Jazz bands and the clarinettist will always play the same basic notes. There’s a classic recording by Jelly Roll Morton on which there are two clarinettists (Albert Nicholas and Sidney Bechet) who both play the original licks, one after the other.

The story I heard was that this solo (as well as possibly the tune itself) was written by a man called Alphonse Picou who was born in 1878 and played with the first real Jazz band in New Orleans, which was led by the legendary figure of Buddy Bolden, the first great jazz trumpeter. Bolden died in 1931 but no recordings by him have ever come to light because he stopped playing before 1910 and spent most of the rest of his life in mental institutions. It is said that Buddy Bolden’s band did make a cylinder recording, but this grail-like object has never been found.

High Society is such a well known tune and is such fun to play that it is very often part of after-hours jam sessions at clubs like the Corner House, where I did once actually play the Alphonse Picou solo from memory (or at least some sort of approximation to it), having heard it so many times on different records.

Last weekend, when I was playing around on Youtube, I chanced upon a bit of film of a New Orleans jam session from 1958. It was like looking back down a very long tunnel into ancient history, but you could have knocked me down with a feather when I saw, sitting next to the piano at the left, the great man himself, Alphonse Picou. I never thought there would be film of him, thinking that he was, like Buddy Bolden, an almost mythical figure. I later found, elsewhere, a clip from the same session of him playing his own famous solo! However, he was 80 years old and very frail at the time and he doesn’t actually play it that well, so I’ll spare his posthumous blushes (he died in 1961) by picking a rather better number from the same session.

The tune I’ve picked to put on here is called Mamie’s Blues. They play it with that lovely, lazily lilting beat that’s so typical of authentic New Orleans Jazz but is actually so difficult to get right. And if seeing Alphonse Picou weren’t enough, there are several other legendary names too: Paul Barbarin (drums), George Lewis (clarinet) and Jim Robinson (trombone) amongst others. The session happened 50 years ago, when these were all already very old men, and they’re all long gone now. This clip, to me, is every bit as important a piece of history as, say, an original score by Mozart.

They may all look like they’ve seen better days, but they certainly still knew how to play!

On the Cards

Posted in Uncategorized on January 27, 2009 by telescoper

After an interesting chat yesterday with a colleague about the difficulties involved in teaching probabilities, I thought it might be fun to write something about card games. Actually, much of science is intimately concerned with statistical reasoning and if any one activity was responsible for the development of the theory of probability, which underpins statistics, it was the rise of games of chance in the 16th and 17th centuries. Card, dice and lottery games still provide great examples of how to calculate probabilities, a skill which is very important for a physicist.

For those of you who did not misspend your youth playing with cards like I did, I should remind you that a standard pack of playing cards has 52 cards. There are 4 suits: clubs (♣), diamonds (♦), hearts (♥) and spades (♠). Clubs and spades are coloured black, while diamonds and hearts are red. Each suit contains thirteen cards, including an Ace (A), the plain numbered cards (2, 3, 4, 5, 6, 7, 8, 9 and 10), and the face cards: Jack (J), Queen (Q), and King (K). In most games the most valuable card is the Ace, followed by the King, Queen and Jack, and then from 10 down to 2.

I’ll start with Poker, because it seems to be one of the simplest ways of losing money these days. Imagine I start with a well-shuffled pack of 52 cards. In a game of five-card draw poker, the players essentially bet on who has the best hand made from five cards drawn from the pack. In more complicated versions of poker, such as Texas hold’em, one has, say, two “private” cards in one’s hand and, say, five on the table in plain view. These community cards are usually revealed in stages, allowing a round of betting at each stage. One has to make the best hand one can using five cards from one’s private cards and those on the table. The existence of community cards makes this very interesting because it gives some additional information about other players’ holdings. For the present discussion, however, I will just stick to individual hands and their probabilities.

How many different five-card poker hands are possible?

To answer this question we need to know about permutations and combinations. Imagine constructing a poker hand from a standard deck. The deck is full when you start, which gives you 52 choices for the first card of your hand. Once that is taken you have 51 choices for the second, and so on down to 48 choices for the last card. One might think the answer is therefore 52×51×50×49×48 = 311,875,200, but that’s not right because it doesn’t actually matter which order your five cards are dealt to you.

Suppose you have 4 aces and the 2 of clubs in your hand; the sequences (A♣, A♥, A♦, A♠, 2♣) and (A♥, 2♣, A♠, A♦, A♣) are counted as distinct hands among the number I obtained above. There are many other possibilities like this where the cards are the same but the order is different. In fact there are 5×4×3×2×1 = 120 such permutations. Mathematically this is denoted 5!, or five-factorial. Dividing the number above by this gives the actual number of possible five-card poker hands: 2,598,960. This number is important because it describes the size of the “possibility space”. Each of these hands is a possible poker deal, and each is assumed to be “equally likely”, unless the dealer is cheating.

This calculation is an example of a mathematical combination as opposed to a permutation. The number of combinations of r things chosen from a set of n is usually denoted C(n,r). In the example above, r = 5 and n = 52. Note that 52×51×50×49×48 can be written 52!/47!. The general result for the number of combinations can likewise be written C(n,r) = n!/[(n−r)! r!].
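
If you want to check this arithmetic, a few lines of Python will do it (my sketch, not part of the original argument):

```python
from math import comb, factorial

# Ordered ways of dealing five cards one after another from 52
ordered = 52 * 51 * 50 * 49 * 48
print(ordered)                  # 311875200

# Dividing by 5! = 120 removes the orderings of the same five cards
print(ordered // factorial(5))  # 2598960

# The same answer straight from the combination formula C(n, r)
print(comb(52, 5))              # 2598960
```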

Poker hands are characterized by the occurrence of particular events of varying degrees of probability. For example, a flush is five cards of the same suit but not in sequence (e.g. 2♠, 4♠, 7♠, 9♠, Q♠). A numerical sequence of cards regardless of suit (e.g. 7♣, 8♠, 9♥, 10♦, J♥) is called a straight. A sequence of cards of the same suit is called a straight flush. One can also have a pair of cards of the same value, or two pairs, or three of a kind, or four of a kind, or a full house which is three of one kind and two of another. One can also have nothing at all, i.e. not even a pair.

The relative value of the different hands is determined by how probable they are, and to work that out takes quite a bit of effort.

Consider the probability of getting, say, 5 spades (in other words, a spade flush). To do this we have to calculate the number of distinct hands that have this composition. There are 13 spades in the deck to start with, so there are 13×12×11×10×9 permutations of 5 spades drawn from the pack, but, because of the possible internal rearrangements, we have to divide again by 5!. The result is that there are 1287 possible hands containing 5 spades. Not all of these are mere flushes, however. Some of them will include sequences too, e.g. 8♠, 9♠, 10♠, J♠, Q♠, which makes them straight flushes. There are only 10 possible straight flushes in spades (the lowest card can be anything from the Ace, counted low, up to the 10), so only 1277 of the possible hands counted above are just flushes. This logic applies to any of the suits, so in all there are 1277×4 = 5108 flush hands and 10×4 = 40 straight flush hands.
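
The same count in code form (again my sketch, just to verify the numbers):

```python
from math import comb

spade_flush_hands = comb(13, 5)       # any 5 of the 13 spades: 13×12×11×10×9 / 5!
print(spade_flush_hands)              # 1287

straight_flushes_per_suit = 10        # lowest card anything from A (counted low) to 10
plain_flushes_per_suit = spade_flush_hands - straight_flushes_per_suit
print(plain_flushes_per_suit)         # 1277

print(4 * plain_flushes_per_suit)     # 5108 plain flushes across the four suits
print(4 * straight_flushes_per_suit)  # 40 straight flushes
```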

I won’t go through the details of calculating the probability of the other types of hand, but I’ve included a table showing their probabilities obtained by dividing the relevant number of possibilities by the total number of hands (given at the bottom of the middle column).

TYPE OF HAND      Number of Possible Hands   Probability

Straight Flush               40               0.000015
Four of a Kind              624               0.000240
Full House                3,744               0.001441
Flush                     5,108               0.001965
Straight                 10,200               0.003925
Three of a Kind          54,912               0.021129
Two Pair                123,552               0.047539
One Pair              1,098,240               0.422569
Nothing               1,302,540               0.501177

TOTALS                2,598,960               1.000000
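
The counts in the table can be verified directly with a little combinatorics. Here is a quick sketch in Python (my check, not from the original post) for a few of the rows:

```python
from math import comb

total = comb(52, 5)                                   # 2,598,960 possible hands

four_of_a_kind  = 13 * 48                             # rank of the quads, then any 5th card
full_house      = 13 * comb(4, 3) * 12 * comb(4, 2)   # trips rank & suits, pair rank & suits
three_of_a_kind = 13 * comb(4, 3) * comb(12, 2) * 4**2
two_pair        = comb(13, 2) * comb(4, 2)**2 * 44    # two pair ranks, their suits, odd card
one_pair        = 13 * comb(4, 2) * comb(12, 3) * 4**3

print(four_of_a_kind, full_house, three_of_a_kind, two_pair, one_pair)
# 624 3744 54912 123552 1098240
print(round(one_pair / total, 6))   # 0.422569
```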

Poker involves rounds of betting in which the players, amongst other things, try to assess how likely their hand is to win compared with the others involved in the game. If your hand is weak, you can fold and allow the accumulated bets to be given to your opponents. Alternatively, you can bluff and bet strongly on a poor hand (even if you have “nothing”) to convince your opponents that your hand is strong. This tactic can be extremely successful in the right circumstances. In the words of the late great Paul Newman in the film Cool Hand Luke, “sometimes nothing can be a real cool hand”.

If you bet heavily on your hand, the opponent may well think it is strong even if it contains nothing, and fold even if his hand has a higher value. To bluff successfully requires a good sense of timing – it depends crucially on who gets to bet first – and extremely cool nerves. To spot when an opponent is bluffing requires real psychological insight. These aspects of the game are in many ways more interesting than the basic hand probabilities, and they are difficult to reduce to mathematics.

Another card game that serves as a source for interesting problems in probability is Contract Bridge. This is one of the most difficult card games to play well because it is a game of logic that also involves chance to some degree. Bridge is a game for four people, arranged in two teams of two. The four sit at a table with members of each team facing opposite each other. Traditionally the different positions are called North, South, East and West although you don’t actually need a compass to play. North and South are partners, as are East and West.

For each hand of Bridge an ordinary pack of cards is shuffled and dealt out by one of the players, the dealer. Let us suppose that the dealer in this case is South. The pack is dealt out one card at a time to each player in turn, starting with West (to dealer’s immediate left) then North and so on in a clockwise direction. Each player ends up with thirteen cards when all the cards are dealt.

Now comes the first phase of the game, the auction. Each player looks at their cards and makes a bid, which is essentially a coded message giving information to their partner about how good their hand is. A bid is basically an undertaking to win a certain number of tricks with a certain suit as trumps (or with no trumps). The meaning of tricks and trumps will become clear later. For example, the dealer might bid “one spade”, which is a suggestion that perhaps he and his partner could win one more trick than the opposition with spades as the trump suit. This means winning seven tricks, as there are always thirteen to be won in a given deal. The next to bid – in this case West – can either pass (saying “no bid”) or bid higher, as in an auction. The value of the suits increases in the sequence clubs, diamonds, hearts and spades. So to outbid one spade (1S), West has to bid at least two hearts (2H), say, if hearts is the best suit for him; had South opened one club (1C) instead, one heart (1H) would have been sufficient to overcall. Next to bid is South’s partner, North. If he likes spades as trumps he can raise the original bid. If he likes them a lot he can jump to a much higher contract, such as four spades (4S).

This is the most straightforward level of Bridge bidding, but in reality there are many bids that don’t mean what they might appear to mean at first sight. Examples include conventional bids  (such as Stayman or Blackwood),  splinter and transfer bids and the rest of the complex lexicon of Bridge jargon. There are some bids to which partner must respond (forcing bids), and those to which a response is discretionary. And instead of overcalling a bid, one’s opponents could “double” either for penalties in the hope that the contract will fail or as a “take-out” to indicate strength in a suit other than the one just bid.

Bidding carries on in a clockwise direction until nobody dares take it higher. Three successive passes end the auction, and the contract is then established. Whichever player first bid the suit that was finally chosen for trumps becomes “declarer”. If we suppose our example ended in 4S, then it is South who becomes declarer, because he opened the bidding with 1S. If West had overcalled two hearts (2H) and this had been passed round the table, West would be declarer.

The scoring system for Bridge encourages teams to go for high contracts rather than low ones, so if one team has the best cards it doesn’t necessarily get an easy ride; it should undertake an ambitious contract rather than stroll through a simple one. In particular there are extra points for making “game” (a contract of four spades, four hearts, five clubs, five diamonds, or three no trumps). There is a huge bonus available for bidding and making a grand slam (an undertaking to win all thirteen tricks, i.e. seven of something) and a smaller but still impressive bonus for a small slam (six of something). This encourages teams to push for a valuable contract: tricks bid and made count a lot more than overtricks even without the slam bonus.

The second phase of the game now starts. The person to the left of declarer plays a card of their choice, possibly following yet another convention, such as “fourth highest of the longest suit”. The player opposite declarer puts all his cards on the table and becomes “dummy”, playing no further part in this particular hand. Dummy’s cards are then entirely under the control of the declarer. All three players can see the cards in dummy, but only declarer can see his own hand. Apart from the role of dummy, the card play is then similar to whist.

Each trick consists of four cards played in clockwise sequence from whoever leads. Each player, including dummy, must follow the suit led if he has a card of that suit in his hand. If a player doesn’t have a card of that suit he may “ruff”, i.e. play a trump card, or simply discard some card (probably of low value) from another suit. Good Bridge players keep careful track of all discards to improve their knowledge of the cards held by their opponents. Discards can also be used by the defence (i.e. East and West in this case) to signal to each other. Declarer can see dummy’s cards but the defenders don’t get to see each other’s.

One can win a trick in one of two ways. Either one plays the highest card of the suit led: K♥ beats 10♥, for example, or anything else lower. Aces are high, by the way. Alternatively, if one has no cards of the suit that has been led, one can play a trump (or “ruff”). A trump always beats a card of the original suit, but more than one player may ruff and in that case the highest trump played carries the trick. For instance, East may ruff only to be over-ruffed by South if both have none of the suit led. Of course one may not have any trumps at all, making a ruff impossible. If one has neither the original suit nor a trump one has to discard something from another suit. The possibility of winning a trick by a ruff also does not exist if the contract is of the no-trumps variety.

Whoever wins a given trick leads to start the next one. This carries on until thirteen tricks have been played. Then comes the reckoning of whether the contract has been made. If so, points are awarded to declarer’s team. If not, penalty points are awarded to the defenders which are higher if the contract has been doubled. Then it’s time for another hand, probably another drink, and very possibly an argument about how badly declarer played the hand.

I’ve gone through the game in some detail in an attempt to make it clear why this is such an interesting game for probabilistic reasoning. During the auction, partial information is given about every player’s holding. It is vital to interpret this information correctly if the contract is to be made. The auction can reveal which of the defending team holds important high cards, or whether the trump suit is distributed strangely. Because the cards are played in strict clockwise sequence this matters a lot. On the other hand, even with very firm knowledge about where the important cards lie, one still often has a difficult logical puzzle to solve if all the potential winners in one’s hand are actually to be made into tricks. It can be a very subtle game.

I only have space-time for one illustration of this kind of thing, but it’s another one that is fun to work out. As is true to a lesser extent in poker, one is not really interested in the initial probabilities of the different hands but rather how to update these probabilities using conditional information as it may be revealed through the auction and card play. In poker this updating is done largely by interpreting the bets one’s opponents are making.

Let us suppose that I am South, and I have been daring enough to bid a grand slam in spades (7S). West leads, and North lays down dummy. I look at my hand and dummy, and realise that we have 11 trumps between us, missing only the King (K) and the 2. I have all other suits covered, and enough winners to make the contract provided I can make sure I win all the trump tricks. The King, however, poses a problem. The Ace of Spades will beat the King, but if I just lead the Ace, it may be that one of East or West has both the K and the 2. In this case he would simply play the two to my Ace. The King would be an automatic winner then: as the highest remaining trump it must win a trick eventually. The contract is then doomed.

On the other hand if the spades split 1-1 between East and West then the King drops when I lead the Ace, so that strategy makes the contract. It all depends how the cards split.

But there is a different way to play this situation. Suppose, for example, that A♠ and Q♠ are on the table (in dummy’s hand) and I, as declarer, have managed to win the first trick in my hand. If I think the K♠ lies in West’s hand, I lead a spade. West has to follow suit if he can. If he has the King, and plays it, I can cover it with the Ace so it doesn’t win. If, however, West plays low I can play Q♠. This will win if I am right about the location of the King. Next time I can lead the A♠ from dummy and the King will fall. This play is called a finesse.

But is this better than the previous strategy, playing for the drop? It’s all a question of probabilities, and this in turn boils down to the number of possible deals that allow each strategy to work.

To start with, we need the total number of possible bridge hands. This is quite easy: it’s the number of combinations of 13 objects taken from 52, i.e. C(52,13). This is a truly enormous number: over 600 billion. You have to play a lot of games to expect to be dealt the same hand twice!

What we now have to do is evaluate the probability of each possible arrangement of the missing King and two. Dummy’s and declarer’s hands are known to me. There are 26 remaining cards whose location I do not know. The relevant space of possibilities is now smaller than the original one. I have 26 cards to assign between East and West. There are C(26,13) ways of assigning West’s 13 cards, but once I have done this the remaining 13 must be in East’s hand.

Suppose West has the 2 but not the K. Conditional on this assumption, I know one of his cards, but there are 12 others remaining to be assigned. There are therefore C(24,12) hands with this possible arrangement of the trumps. Obviously the K has to be with East in this case. The finesse would not work, as East would cover the Q with the K, but the K would drop if the A were played.

The opposite situation, with West having the K but not the 2, has the same number of possibilities associated with it. Here West must play the K when a spade is led, so it will inevitably lose to the A. South abandons the idea of finessing when West rises and just covers it with the higher card.

Suppose instead West doesn’t have any trumps. There are C(24,13) ways of constructing such a hand: 13 cards from the 24 remaining non-trumps. Here the finesse fails, because the K is with East, but the drop fails too: East plays the 2 on the A and the K becomes a winner.

The remaining possibility is that West has both trumps: this can happen in C(24,11) ways. Here the finesse works but the drop fails. If West plays low on the South lead, declarer calls for the Q from dummy to hold the trick. Next lead he plays the A to drop the K.

To turn these counts into probabilities we just divide by the total number of different ways I can construct the hands of East and West, which is C(26,13). The results are summarized in the table here.

Spades in West’s hand   Number of hands   Probability   Drop   Finesse

None                    C(24,13)          0.24          0      0
K                       C(24,12)          0.26          0.26   0.26
2                       C(24,12)          0.26          0.26   0
K2                      C(24,11)          0.24          0      0.24

Total                   C(26,13)          1.00          0.52   0.50

The last two columns show the contributions of each arrangement to the probability of success of either playing for the drop or the finesse. You can see that the drop is slightly more likely to work than the finesse in this case.
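
These numbers are easy to verify; here is a quick sketch in Python (my check, using the same counting argument as above):

```python
from math import comb

total = comb(26, 13)             # ways to deal West 13 of the 26 unseen cards

p_void   = comb(24, 13) / total  # West holds neither the K nor the 2
p_K_only = comb(24, 12) / total  # West holds the K but not the 2
p_2_only = comb(24, 12) / total  # West holds the 2 but not the K
p_both   = comb(24, 11) / total  # West holds both missing trumps

p_drop    = p_K_only + p_2_only  # the drop needs the 1-1 split
p_finesse = p_K_only + p_both    # the finesse needs the K with West

print(round(p_drop, 2), round(p_finesse, 2))   # 0.52 0.5
```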

Note, however, that this ignores any information gleaned from the auction, which could be crucial. For example, if West had made a bid then it is more likely that he holds cards of some value, which might suggest the K is in his hand. Note also that the probability of the drop and the probability of the finesse do not add up to one. This is because there are situations where both could work or both could fail.

This calculation does not mean that the finesse is never the right tactic. It sometimes has much higher probability than the drop, and is often strongly motivated by information the auction has revealed. Calculating the odds precisely, however, gets more complicated the more cards are missing from declarer’s holding. For those of you too lazy to compute the probabilities, the book On Gambling, by Oswald Jacoby contains tables of the odds for just about any bridge situation you can think of.

Finally on the subject of Bridge, I wanted to mention a fact that many people think is paradoxical but which isn’t really. Looking at the table shows that the odds of a 1-1 split in spades here are 0.52:0.48, or 13:12. This comes from how many cards are in East’s and West’s hands when the play is attempted. There is a much quicker way of getting this answer than the brute force method I used above. Consider the hand with the spade two in it. There are 12 remaining opportunities in that hand that the spade K might fill, but there are 13 available slots for it in the other. The odds on a 1-1 split must therefore be 13:12. Now suppose instead of going straight for the trumps, I play off a few winners in the side suits (risking that they might be ruffed, of course). Suppose I lead out three Aces in the three suits other than spades and they all win. Now East and West have only 20 cards between them and by exactly the same reasoning as before, the odds of a 1-1 split have become 10:9 instead of 13:12. Playing out seemingly irrelevant suits has increased the probability of the drop working. Although I haven’t touched the spades, my assessment of the probability of the spade distribution has changed significantly.
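
This “vacant places” shortcut is easy to put into code. A small sketch (the function and its name are mine, for illustration):

```python
from fractions import Fraction

def one_one_split_prob(vacant_each):
    """Probability that two missing cards split 1-1 when each defender has
    `vacant_each` unknown cards (a sketch of the vacant-places argument)."""
    n = vacant_each
    # Put one missing card in either hand; the other card then has n slots
    # in the opposite hand versus n - 1 remaining slots in the same hand,
    # so the odds of a 1-1 split are n : (n - 1).
    return Fraction(n, n + (n - 1))

print(one_one_split_prob(13))   # 13/25, i.e. odds of 13:12
print(one_one_split_prob(10))   # 10/19, i.e. odds of 10:9 after three side-suit tricks
```

Note that 13/25 = 0.52, matching the drop probability in the table above.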

This sort of thing is a major reason why I always think of probabilities in a Bayesian way. As information is gradually revealed one updates the assessment of the probability of the remaining unknowns.

But probability is only a part of Bridge; the best players don’t actually leave very much to chance…

A New Theory of the Universe

Posted in The Universe and Stuff on January 24, 2009 by telescoper

Yesterday I went on the train to London to visit my old friends in Mile End. I worked at the place that is now called Queen Mary, University of London for nearly a decade and missed it quite a lot when I moved to Nottingham. More recently I’ve had a bit more time and plausible excuses to visit London, including yesterday’s invitation to give a seminar at the Astronomy Unit. Although we were a bit late starting, owing to extremely slow service in the restaurant where we had lunch before the talk, it all seemed to go quite well. Afterwards we had a few beers and a nice chat before I took the train back to Cardiff again.

In the pub (which was the Half Moon, formerly the Half Moon Theatre,  a place of great historical interest) I remembered a joke I sometimes make during cosmology talks but had forgotten to do in the one I had just given.  I’m not sure it will work in written form, but here goes anyway.

I’ve blogged before about the current state of cosmology, but it’s probably a good idea to give a quick reminder before going any further. We have a standard cosmological model, known as the concordance cosmology, which accounts for most relevant observations in a pretty convincing way and is based on the idea that the Universe began with a Big Bang.  However, there are a few things about this model that are curious, to say the least.

First, there is the spatial geometry of the Universe. According to Einstein’s general theory of relativity, universes come in three basic shapes: closed, open and flat. These are illustrated to the right. The flat space has “normal” geometry in which the interior angles of a triangle add up to 180 degrees. In a closed space the sum of the angles is greater than 180 degrees, and  in an open space it is less. Of course the space we live in is three-dimensional but the pictures show two-dimensional surfaces.

But you get the idea.

The point is that the flat space is very special. The two curved spaces are much more general because they can be described by a parameter called their curvature which could in principle take any value (either positive for a closed space, or negative for an open space). In other words the sphere at the top could have any radius from very small (large curvature) to very large (small curvature). Likewise with the “saddle” representing an open space. The flat space must have exactly zero curvature. There are many ways to be curved, but only one way to be flat.

Yet, as near as dammit, our Universe appears to be flat. So why, with all the other options theoretically available to it, did the Universe decide to choose the most special one, which also happens, in my opinion, to be the most boring?

Then there is the way the Universe is put together. In order to be flat there must be an exact balance between the energy contained in the expansion of the Universe (positive kinetic energy) and the energy involved in the gravitational interactions between everything in it (negative potential energy). In general relativity, you see, the curvature relates to the total amount of energy.
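The balance can be made precise. Here is the standard Friedmann equation from textbook cosmology, quoted as a sketch of the argument rather than anything from the original post:

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2},
\qquad
k = 0 \;\Longleftrightarrow\; \rho = \rho_{\mathrm{crit}} \equiv \frac{3H^2}{8\pi G}.
```

Flatness is therefore exactly the statement that the total density equals the critical density, i.e. Ω ≡ ρ/ρ_crit = 1: the kinetic term on the left and the gravitational term on the right balance with nothing left over for curvature.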

On the left you can see the breakdown of the various components involved in the standard model, with the whole pie representing a flat Universe. You can see there’s a very strange mixture dominated by dark energy (which we don’t understand) and dark matter (which we don’t understand). The bit we understand a little better (because we can sometimes see it directly) is only 4% of the whole thing. The proportions look very peculiar.

And then finally, there is the issue that I talked about in my seminar in London and have actually blogged about (here and there) previously, which is why the Universe appears to be a bit lop-sided and asymmetrical when we’d like it to be a bit more aesthetically pleasing.

All these curiosities are naturally accounted for in my New Theory of the Universe, which asserts that the Divine Creator actually bought  the entire Cosmos  in IKEA.

This hypothesis immediately explains why the Universe is flat. Absolutely everything in IKEA comes in flat packs. Curvature is not allowed.

But this is not the only success of my theory. When God got home he obviously opened the flat pack, found the instructions and read the dreaded words “EASY SELF-ASSEMBLY”. Even the omnipotent would struggle to follow the bizarre set of cartoons and diagrams that accompany even the simplest IKEA furniture. The result is therefore predictable: strange pieces that don’t seem to fit together, bits left over whose purpose is not at all clear, and an overall appearance that is not at all like one would have expected.

It’s clear  where the lop-sidedness comes in too. Probably some of the parts were left out so the whole thing isn’t  held together properly and is probably completely unstable. This sort of thing happens all the time with IKEA stuff. And why is it you can never find the right size Allen Key to sort it out?

So there you have it. My new Theory of the Universe. Some details need to be worked out, but it is as good an explanation of these issues as I have heard. I claim my Nobel Prize.

If anything will ever get me a trip to Sweden, this will.

Taking down their Particulars and Examining their Testimonials

Posted in Uncategorized on January 21, 2009 by telescoper

If you’ve looked at Cosmic Variance recently you will know that it has almost gone up in flames (metaphorically speaking). The incendiary item was what I thought was a gently humorous post on the subject of recommendation letters for entry into graduate schools, which evolved to include postdoctoral positions too. This item has generated nearly a hundred comments so far, some of which are quite sensible and interesting but others worryingly vitriolic. One correspondent in particular got hold of the wrong end of the stick and proceeded to beat wildly about the bush with it, accusing academics of everything from intellectual snobbery to the Whitechapel Murders.

I’m actually quite pleased that the more extremist comments are there, as they make mine look quite sensible which they perhaps wouldn’t if they were on their own. I’ve therefore collected my thoughts here to see if they generate any reaction.

A follow-up post attempted to defuse the issue with an injection of common sense, but it remains to be seen whether this will indeed steady the ship. (I’m proud of that multiply mixed metaphor.)

The principal bone of contention is the matter of “recommendation letters” and whether a Professor should ever write negative comments when asked to recommend a student for a place on a graduate course.

In the UK we generally don’t have “recommendations” but “references” or “testimonials”, which are supposed to describe the candidate’s character and abilities in a manner that is useful to those doing the recruitment. They are not meant to be written in absurdly hyperbolic terms, and they are not meant to ignore any demonstrable shortcomings of the applicant. They are supposed to advise the recruiters of the suitability of the candidate in a sober, balanced and objective way. Fortunately, most students applying to graduate schools are actually rather good, so there are many more positives than negatives, but if there are weaknesses then in my view these must be mentioned, even if this turns out to be the kiss of death to their application.

Another objection is to recommendation letters that include statements of the form “Aaron is better than Brenda but not as good as Charlie”. I don’t object at all to the idea of a reference that includes some form of ordering like this. Since there are inevitably more applicants than places, the panel will have to make a ranking, so why not help them by giving your input? After all, you know the candidates better than the panel does.

The point is that the referee is not only providing a service for the student but also for the recruiting school. On this basis, it is, I think, perfectly valid to include negative points as long as they can be justified objectively.

British professors are often criticized by our colleagues over the pond for writing very reserved recommendation letters, but having one year received references from a US institution on behalf of 4 different students who were all apparently the best student that institution had ever had in physics, I think I prefer the old-fashioned British understatement.

However, references, transcripts and other paperwork can only establish whether a student has reached the threshold level of technical competence needed to commence a research degree. That’s a necessary but not sufficient condition for their success as a scientist. The other factors – drive, imagination, commitment and diligence (which is apparently a term of abuse in the USA) – are much harder to assess. I think this part has to be done at interview. You can’t just rely on examination results, because it’s by no means true that the best students at passing examinations turn into the best graduate students. Research is a whole different ball game.

I also think there’s a difference between how references are used for making a job appointment versus a place at graduate school. Where I come from, in the UK, graduate study is funded by a studentship which pays a stipend rather than a salary and the successful applicants are not formally employed by the university. It is rather different in the case of a postdoc where the successful candidate is an employee of the institution.

Owing to recent changes in employment legislation in the UK, best practice when appointing staff is now not to ask for references at all until after short-listing, or even until after interview. The purpose of references is simply to verify that the information given in the application is complete and correct; they are not to be used in judging quality. Shortlisting is done on the basis of whether the applicant can show in the application that they have skills matching the requirements of the post. The final decision is made after interviewing the shortlisted candidates.

Many academics hate this new-fangled way of doing things, partly just because it is a new-fangled way of doing things, but also because they tend to rely heavily on references in judging the relative quality of candidates for PDRA appointments. Personally, however, I don’t find references particularly helpful in this context – especially those from America, where the language is so inflated as to be laughable – so, unlike most of my colleagues, I’m quite happy to embrace the “new” approach. I think relying too much on references is tantamount to wanting other people to make difficult decisions for you rather than making them yourself.

On top of this, modern protocol requires the use of a standard application form rather than just a CV and list of research interests. That way the relative merits of all candidates can be judged on the basis of the same pieces of information and answers to the  same questions. Whatever you think about this process, it certainly does make things more transparent.

I’m currently advertising a postdoc job and will be shortlisting for this position on the basis I have described, i.e. without asking for references upfront. I’ll only look at references later on, after shortlisting. This is the first time I will have done it this way, so I am interested to see how it works.

But I can’t see at all how one could possibly make decisions concerning entry of an undergraduate student onto a graduate programme without using references earlier on in the process.

Playing to The Gallery 

Posted in Music, Television on January 19, 2009 by telescoper

I was very sad yesterday to hear of the death, at the age of 83, of the pioneering children’s TV presenter Tony Hart. The newspapers and television have been filled with suitably glowing tributes to him, because he was not only a superb presenter but also a warm and generous person. That’s quite a rare combination in the world of television, so I’m told.

I knew of him primarily through Vision On, a programme which I watched avidly as a child, and only found out much later on that it was intended to be for deaf children. The show involved comedy sketches and cartoons, as well as Tony Hart’s contributions which involved creating works of art live in front of the camera. He hardly ever spoke and used only the simplest of materials to create very beautiful things with the idea that this would inspire his audience to get in touch with their artistic side without making it look too much like a lesson. He did it brilliantly.

Here’s the middle chunk of a broadcast from 1975 which will bring it all back to those of you of a certain age like me, but which is notable also in that it includes Sylvester McCoy, who later became the seventh Doctor Who:

Best of all, this segment ends with my favourite bit, The Gallery, accompanied by a piece of music which is almost as redolent with nostalgia for me as the theme from Doctor Who. The track concerned is called Left Bank Two and was performed by the Noveltones; it can be heard here in full. Just a trio of vibraphone, guitar and drums played with brushes, I think it’s a masterpiece of relaxed simplicity. Nobody got his collar wet playing it, that’s for sure. It’s the sort of music you might have expected to hear in a smart cocktail bar in the early 60s but is now inextricably linked to The Gallery.

I was struck, watching the above clip, by just how good the children’s drawings and paintings were too. I tried several times to get something shown in The Gallery, but never succeeded.

What’s all the Noise?

Posted in Science Politics, The Universe and Stuff on January 18, 2009 by telescoper

Now there’s a funny thing…

I’ve just come across a news item from last week which I followed up by looking at the official NASA press release. I’m very slow to pick up on things these days, but I thought I’d mention it anyway.

The experiment concerned is called ARCADE 2, a somewhat contrived acronym derived from Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission. It is essentially a balloon-borne detector designed to analyse radio waves with frequencies in the range 3 to 90 GHz. The experiment actually flew in 2006, so it has clearly taken considerable time to analyse the resulting data.

Being on a balloon that flies for a relatively short time (2.5 hours in this case) means that only a part of the sky was mapped, amounting to about 7% of the whole celestial sphere, but that is enough to map a sizeable piece of the Galaxy as well as a fairly representative chunk of deep space.

There are four science papers on the arXiv about this mission: one describes the instrument itself; another discusses radio emission from our own galaxy, the Milky Way; the third discusses the overall contribution of extragalactic origin in the frequency range covered by the instrument; the last discusses the implications about extragalactic sources of radio emission.

The thing that jumps out from this collection of very interesting science papers is that there is an unexplained, roughly isotropic, background of radio noise, consistent with a power-law spectrum. Of course to isolate this component requires removing known radio emission from our Galaxy and from identified extragalactic sources, as well as understanding the systematics of the radiometer during its flight. But after a careful analysis of these the authors present strong evidence of excess emission over and above known sources. The spectrum of this radio buzz falls quite steeply with frequency so appears in the two long-wavelength channels at 3 and 8 GHz.

So where does this come from? Well, we just don’t know.

The problem is that no sensible extrapolation of known radio sources to high redshift appears to be able to generate an integrated flux equivalent to that observed. Here is a bit of the discussion from the paper:

It is possible to imagine that an unknown population of discrete sources exist below the flux limit of existing surveys. We argue earlier that these cannot be a simple extension of the source counts of star-forming galaxies. As a toy model, we consider a population of sources distributed with a delta function in flux a factor of 10 fainter than the 8.4 GHz survey limit of Fomalont et al. (2002). At a flux of 0.75 μJy, it would take over 1100 such sources per square arcmin to produce the unexplained emission we see at 3.20 GHz, assuming a frequency index of −2.56. This source density is more than two orders of magnitude higher than expected from extrapolation to the same flux limit of the known source population. It is, however, only modestly greater than the surface density of objects revealed in the faintest optical surveys, e.g., the Hubble Ultra Deep Field (Beckwith et al. 2006).  The unexplained emission might result from an early population of non thermal emission from low-luminosity AGN; such a source would evade the constraint implied by the far-IR measurements.

The point is that ordinary galaxies produce a broad spectrum of radiation, and it is difficult to boost the flux at one frequency without violating limits imposed at others. It might be possible to invoke Active Galactic Nuclei (AGN) to do the trick, but I’m not sure. I am sure there’ll be a lot of work going on trying to see how this might fit in with all the other things we know about galaxy formation and evolution, but for the time being it’s a mystery.
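Out of curiosity, the numbers in the quoted toy model can be roughly checked on the back of an envelope. The sketch below is mine, not anything from the ARCADE papers, and it makes one interpretive assumption: that the quoted index of −2.56 is a brightness-temperature index, so flux densities scale as ν^(−2.56+2). It then converts a surface density of faint sources into a Rayleigh–Jeans sky temperature.

```python
import math

# Physical constants (SI)
C = 2.998e8        # speed of light, m/s
K_B = 1.381e-23    # Boltzmann constant, J/K

# Numbers from the quoted passage
S_84 = 0.75e-6         # flux per source at 8.4 GHz, Jy (10x below survey limit)
N_PER_ARCMIN2 = 1100   # source surface density, per square arcminute
BETA = -2.56           # temperature spectral index (my reading of the quote)
ALPHA = BETA + 2       # corresponding flux-density index, S ~ nu^alpha
NU = 3.2e9             # frequency of interest, Hz

# Extrapolate each source's flux from 8.4 GHz down to 3.2 GHz
S_32 = S_84 * (3.2 / 8.4) ** ALPHA  # Jy

# Turn source density into a specific intensity (Jy/sr, then SI units)
arcmin2_in_sr = math.radians(1 / 60.0) ** 2
intensity = N_PER_ARCMIN2 * S_32 / arcmin2_in_sr * 1e-26  # W m^-2 Hz^-1 sr^-1

# Rayleigh-Jeans brightness temperature: T = I c^2 / (2 k nu^2)
T_b = intensity * C**2 / (2 * K_B * NU**2)
print(f"T_b at 3.2 GHz ~ {T_b * 1000:.0f} mK")
```

The answer comes out at a few tens of millikelvin, which is at least the right ballpark for a CMB-experiment excess; treat it purely as an order-of-magnitude sanity check, since the paper’s own calculation will be more careful about spectral indices and beam effects.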

I’m equally sure that these results will spawn a plethora of more esoteric theoretical explanations, inevitably including the ridiculous as well as perhaps the sublime. Charged dark matter springs to mind.

Or maybe it’s not even extragalactic. Could it be from an unknown source inside the Milky Way? If so, it might shed some light on the curiosities we find in the cosmic microwave background that I’ve mentioned here and there, but it seems to peak at too low a frequency to account for much of the overall microwave sky temperature.

But it does have a lesson for astronomy funders. ARCADE 2 is a very cheap experiment (by NASA standards). Moreover, the science goals of the experiment did not include “discovering a new cosmic background”. It just goes to show that even in these times of big, expensive and narrowly targeted missions there is still space for serendipity.

The Physics Overview

Posted in Science Politics on January 17, 2009 by telescoper

I found out by accident the other day that the Panels conducting the 2008 Research Assessment Exercise have now published their subject overviews, in which they comment on trends within each discipline.

Heading straight for the overview produced by the panel for Physics (which is available, together with those of two other panels, here), I found some interesting points, some of which relate to comments posted on my previous items about the RAE results (here and here) until I terminated the discussion.

One issue that concerns many physicists is how the research profiles produced by the RAE panel will translate into funding. I’ve taken the liberty of extracting a couple of paragraphs from the report to show what they think. (For those of you not up with the jargon, UoA19 is the Unit of Assessment 19, which is Physics).

The sub-panel is pleased with how much of the research fell into the 4* category and that this excellence is widely spread so that many smaller departments have their share of work assessed at the highest grade. Every submitted department to UoA19 had at least 70% of their overall quality profile at 2* or above, i.e. internationally recognised or above.

Sub-panel 19 takes the view that the research agenda of any group, or of any individual for that matter, is interspersed with fallow periods during which the next phase of the research is planned and during which outputs may be relatively incremental, even if of high scientific quality. In the normal course of events successful departments with a long term view will have a number of outputs at the 3* and 2* level indicating that the groundwork is being laid for the next set of 4* work. This is most obviously true for those teams involved with very major experiments in the big sciences, but also applies to some degree in small science. Thus the quality profile is a dynamic entity and even among groups of very high international standing there is likely to be cyclic variation in the relative amounts of 3* and 4* work according to the rhythm of their research programmes. Most departments have what we would consider a healthy balance between the perceived quality levels. The subpanel strongly believes that the entire overall profile should be considered when measuring the quality of a department, rather than focussing on the 4* component only.

I think this is very sensible, but for more reasons than are stated. For a start the judgement of what is 4* or 3* must be to some extent subjective and it would be crazy to allocate funding entirely according to the fraction of 4* work. I’ve heard informally that the error in any of the percentages for any assessment is plus or minus 10%, which also argues for a conservative formula. However one might argue about the outcome, the panels clearly spent a lot of time and effort determining the profiles so it would seem to make sense to use all the information they provide rather than just a part.

Curiously, though, the panel made no comment about why it is that physics came out so much worse than chemistry in the 2008 exercise (about one-third of the chemistry departments in the country had a profile-weighted quality mark higher than or equal to the highest-rated physics department). Perhaps they just think UK chemistry is a lot better than UK physics.

Anyway, as I said, the issue most of us are worrying about is how this will translate into cash. I suspect HEFCE hasn’t worked this out at all yet either. The panel clearly thinks that money shouldn’t just follow the 4* research, but the HEFCE managers might differ. If they do wish to follow a drastically selective policy they’ve got a very big problem: most physics departments are rated very close together in score. Any attempt to separate them using the entire profile would be hard to achieve and even harder to justify.
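For concreteness, here is how a weighted funding score of the kind being discussed works. The 7:3:1 weighting is the one HEFCE announced; the two department profiles below are invented for illustration, not real RAE results, and the function name is mine.

```python
def fundability(profile, staff, weights=(7, 3, 1, 0, 0)):
    """Weighted quality score multiplied by volume.
    profile: fractions of research rated 4*, 3*, 2*, 1*, unclassified.
    weights: funding weight for each grade (HEFCE's announced 7:3:1,
    with nothing for 1* or unclassified)."""
    assert abs(sum(profile) - 1.0) < 1e-9, "profile fractions must sum to 1"
    quality = sum(w * p for w, p in zip(weights, profile))
    return quality * staff

# Two invented departments of equal size: A has a larger 4* fraction,
# B compensates with more 3* work. A steep weighting rewards A.
dept_a = fundability((0.25, 0.45, 0.25, 0.05, 0.0), staff=40)
dept_b = fundability((0.15, 0.55, 0.25, 0.05, 0.0), staff=40)
print(dept_a, dept_b)
```

With a flatter weighting (say 4:3:2:1) the gap between the two would narrow considerably, which is exactly why the steepness of the formula, and not just the profiles themselves, determines who wins and loses.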

The panel also made a specific comment about Wales and Scotland, which is particularly interesting for me (being here in Cardiff):

Sub-panel 19 regards the Scottish Universities Physics Alliance collaboration between Scottish departments as a highly positive development enhancing the quality of research in Scotland. South of the border other collaborations have also been formed with similar objectives. On the other hand we note with concern the performance of three Welsh departments where strategic management did not seem to have been as effective as elsewhere.

I’m not sure whether the dig about Welsh physics departments is aimed at the Welsh funding agency HEFCW or the individual university groups; SUPA was set up with the strong involvement of SFC and various other physics groupings in England (such as the Midlands Physics Alliance) were actively encouraged by HEFCE. It is true, though, that the 3 active physics departments in Wales (Cardiff, Swansea and Aberystwyth) all did quite poorly in the RAE. In the last RAE, HEFCW did not apply as selective a funding formula as its English counterpart HEFCE with the result that Cardiff didn’t get as much research funding as it would if it had been in England. One might argue that this affected the performance this time around, but I’m not sure about this as it’s not clear how any extra funding coming into Cardiff would have been spent. I doubt if HEFCW will do any different this time either. Welsh politics has a strong North-South issue going on, so HEFCW will probably feel it has to maintain a department in the North. It therefore can’t penalise Aberystwyth too badly for its poor RAE showing. The other two departments are larger and had very similar profiles (Swansea better than Cardiff, in fact) so there’s very little justification for being too selective there either.

The panel remarked on the success of SUPA which received a substantial injection of cash from the Scottish Funding Council (SFC) and which has led to new appointments in strategic areas in several Scottish universities. I’m a little bit skeptical about the long-term benefits of this because the universities themselves will have to pick up the tab for these positions when the initial funding dries up. Although it will have bought them extra points on the RAE score the continuing financial viability of physics departments is far from guaranteed because nobody yet knows whether they will gain as much cash from the outcome as they spent to achieve it. The same goes for other universities, particularly Nottingham, who have massively increased their research activity with cash from various sources and consequently done very well in the RAE. But will they get back as much as they have put in? It remains to be seen.

What I would say about SUPA is that it has definitely given Scottish physics a higher profile, largely from the appointment of Ian Halliday to front it. He is an astute political strategist and respected scientist who performed impressively as Chief Executive of the now-defunct Particle Physics and Astronomy Research Council and is also President of the European Science Foundation. Having such a prominent figurehead gives the alliance more muscle than a group of departmental heads would ever hope to have.

So should there be a Welsh version of SUPA? Perhaps WUPA?

Well, Swansea and Cardiff certainly share some research interests in the area of condensed-matter physics, but their largest activities (Astronomy in Cardiff, Particle Physics in Swansea) are pretty independent. It seems to me to be well worth thinking about some sort of initiative to pool resources and try to make Welsh physics a bit less parochial, but the question is how to do it. At coffee the other day, I suggested that an initiative in the area of astroparticle physics could bring in genuinely high-quality researchers as well as establishing synergy between Swansea and Cardiff, which are only an hour apart by train. The idea went down like a lead balloon, but I still think it’s a good one. Whether HEFCW has either the resources or the inclination to do something like it is another matter, even if the departments themselves were to come round.

Anyway, I’m sure there will be quite a lot more discussion about our post-RAE strategy if and when we learn more about the funding implications. I personally think we could do with a radical re-think of the way physics in Wales is organized and could do with a champion who has the clout of Scotland’s SUPA-man.

The mystery as far as I am concerned remains why Cardiff did so badly in the ratings. I think the first quote may offer part of the explanation because we have large groups in Astronomical Instrumentation and Gravitational Physics, both of which have very long lead periods. However, I am surprised and saddened by the fact that the fraction rated at 4* is so very low. We need to find out why. Urgently.

Crimes and Misdemeanours

Posted in Uncategorized on January 16, 2009 by telescoper

I’m indebted to Frazer Pearce for sending me a very interesting item about Susan Crawford, a former judge who served in a legal capacity for the US Army. She recently went on record to state without equivocation that the treatment meted out to detainee Mohammed al-Qahtani at Guantanamo Bay was “torture”. Not “coercive interrogation”. Not “enhanced interrogation”. Not any other “nontorturous form of interrogation”. Just plain torture.

The implications of this conclusion could be very profound. I would like to think that every single member of the Bush administration who sanctioned this should now be prosecuted under international law. I would also prosecute anyone who knew about it but failed to stop it, as their behaviour means that they were still party to a conspiracy to commit torture. It would, however, take someone with extraordinary courage (and financial backing) to force such an action through the legal system.

I’m not holding my breath.

Coincidentally, another George was in the news today although this one was O’Dowd rather than Bush. `Boy George’ was today sentenced to 15 months in jail for “falsely imprisoning” a male escort at his London flat. I’ll spare my delicate readers the more salacious details of the offence, but it seems the 47-year old former Culture Club singer was buzzing with cocaine at the time and suffering from paranoid delusions that his paid guest had tampered with his computer. He therefore tied him up and assaulted him. Having been found guilty by the jury he was sentenced today.

Initially I thought 15 months sounded very harsh, but then I didn’t know the extent of what had happened until I read the account in today’s newspaper. The violent and degrading treatment he inflicted on the 29-year old escort clearly merited a stern response so, on reflection, I’m glad in many respects that the judge was severe. Whatever you may think of the morality of the escort business, workers in that trade (whether straight or gay) are still human beings and deserve to be treated with respect.

I say “in many respects” because I’m very pessimistic about the criminal justice system generally. We lock up a staggering number of people, many more than any other Western European country. The police spend their time trying to catch offenders, some of them get sent to jail, and the job is seen as done. But I’ve yet to see any evidence of anything good coming out of the other end of this depressing pipeline. I’m not convinced a jail sentence is going to cure Boy George of the drug problems that clearly led to this situation.

On the other hand, I’m by no means arguing that celebrities should be treated differently from others. I’m just saying that there has to be a better way, if only someone could think of it. Until they do, I think Boy George should do his time. At least he won’t have to pay for male company.

But isn’t it ironic that the other George – the one guilty of mass murder as well as torture – will probably get away scot free?

The Fall Before

Posted in Poetry on January 15, 2009 by telescoper

Browsing the BBC website for any evidence at all of good news amid the continuing fiasco that is the British banking system, the murderous onslaught in Gaza, and the defeat of Newcastle United in last night’s FA Cup replay, I happened upon a quite interesting little item from which I picked out the following:

The use of the word ‘fall’ or ‘the fall’ to mean autumn is commonly assumed to be an Americanism, but in fact it is found in the works of Michael Drayton (1563-1631), Thomas Middleton (1580-1627) and Sir Walter Ralegh (1554-1618).

There is also a quotation from John Dryden (1631-1700) to back this up:

What crowds of patients the town doctor kills, Or how, last fall, he raised the weekly bills.

This is more appropriate to the USA than the UK nowadays, since over here we have the wonderful National Health Service.

While contributing to a discussion on the e-astronomer, which subsequently evolved into an extended exercise in pedantry here, it struck me that many words we British think of as being Americanisms were in fact in common use over here in the 16th and 17th Centuries. This period marks the birth of American English as the language used by the colonials evolved fairly independently thereafter until films and television re-established contact in the 20th Century and set up a feedback loop. “Fall” seems to be another example of a word which carried on being used in mainstream American usage but was replaced over here by “autumn”.

Another example that strikes me is “gotten” which is commonplace in the USA but rarely used in England except in phrases like “ill-gotten gains”. It is used in Scotland and in other dialects, but in mainstream English is considered to be archaic, and the form “got” is generally used instead. As the past participle of the verb “to get”, however, it is by no means grammatically incorrect and it was a standard form in English during the 16th century and abounds in Shakespeare, such as in the phrase “He was gotten in drink” from The Merry Wives of Windsor.

Conversely and curiously we still use the form “forgotten” for the past participle of “forget” and the form “forgot” (as a participle) is considered archaic or poetic. The phrase “I have forgot much, Cynara! Gone with the wind” occurs in Ernest Dowson’s famous poem Non Sum Qualis Eram Bonae Sub Regno Cynarae, but it wouldn’t be considered correct in modern English prose.

There’s no real logic to all this, which is what makes it interesting…

Although hearing or reading the word “gotten” in contributions from the other side of the pond no longer jars, and I’ve always found the word “fall” to be rather poetic anyway, there are still some divergences that I can’t cope with. Once on a trip to the States I was alarmed when informed that the plane would be landing momentarily.