Archive for probability

The Inductive Detective

Posted in Bad Statistics, Literature, The Universe and Stuff on September 4, 2009 by telescoper

I was watching an old episode of Sherlock Holmes last night – from the classic Granada TV series featuring Jeremy Brett’s brilliant (and splendidly camp) portrayal of the eponymous detective. One of the things that fascinates me about these and other detective stories is how often they use the word “deduction” to describe the logical methods involved in solving a crime.

As a matter of fact, what Holmes generally uses is not really deduction at all, but inference (a process which is predominantly inductive).

In deductive reasoning, one tries to tease out the logical consequences of a premise; the resulting conclusions are, generally speaking, more specific than the premise. “If these are the general rules, what are the consequences for this particular situation?” is the kind of question one can answer using deduction.

The kind of reasoning Holmes employs, however, is essentially the opposite of this. The question being answered is of the form: “From a particular set of observations, what can we infer about the more general circumstances that relate to them?”. The following example from A Study in Scarlet is exactly of this type:

From a drop of water a logician could infer the possibility of an Atlantic or a Niagara without having seen or heard of one or the other.

The word “possibility” makes it clear that no certainty is attached to the actual existence of either the Atlantic or Niagara, but the implication is that observations of (and perhaps experiments on) a single water drop could allow one to infer enough of the general properties of water to deduce the possible existence of other phenomena. The fundamental process is inductive rather than deductive, although deductions do play a role once general rules have been established.

In the example quoted there is an inductive step between the water drop and the general physical and chemical properties of water, and then a deductive step that shows that these laws could describe the Atlantic Ocean. Deduction involves going from theoretical axioms to observations whereas induction is the reverse process.

I’m probably labouring this distinction, but the main point of doing so is that a great deal of science is fundamentally inferential and, as a consequence, it entails dealing with inferences (or guesses or conjectures) that are inherently uncertain as to their application to real facts. Dealing with these uncertain aspects requires a more general kind of logic than the simple Boolean form employed in deductive reasoning. This side of the scientific method is sadly neglected in most approaches to science education.

In physics, the attitude is usually to establish the rules (“the laws of physics”) as axioms (though perhaps giving some experimental justification). Students are then taught to solve problems which generally involve working out particular consequences of these laws. This is all deductive. I’ve got nothing against this: it is what a great deal of theoretical research in physics is actually like, and it forms an essential part of the training of a physicist.

However, one of the aims of physics – especially fundamental physics – is to try to establish what the laws of nature actually are from observations of particular outcomes. It would be simplistic to say that this was entirely inductive in character. Sometimes deduction plays an important role in scientific discoveries. For example, Albert Einstein deduced his Special Theory of Relativity from a postulate that the speed of light was constant for all observers in uniform relative motion. However, the motivation for this entire chain of reasoning arose from previous studies of electromagnetism which involved a complicated interplay between experiment and theory that eventually led to Maxwell’s equations. Deduction and induction are both involved at some level in a kind of dialectical relationship.

The synthesis of the two approaches requires an evaluation of the evidence the data provides concerning the different theories. This evidence is rarely conclusive, so a wider range of logical possibilities than “true” or “false” needs to be accommodated. Fortunately, there is a quantitative and logically rigorous way of doing this. It is called Bayesian probability. In this way of reasoning, the probability (a number between 0 and 1 attached to a hypothesis, model, or anything else that can be described as a logical proposition) represents the extent to which a given set of data supports the given hypothesis. The calculus of probabilities reduces to Boolean algebra only when the probabilities of all hypotheses involved are either unity (certainly true) or zero (certainly false). In between “true” and “false” there are varying degrees of “uncertain”, represented by a number between 0 and 1, i.e. the probability.
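To make the updating rule concrete, here is the standard statement of Bayes’ theorem (textbook material, not anything specific to this post):

```latex
P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)}
```

where P(H) is the prior probability of the hypothesis H, P(D|H) is the likelihood of the data D given H, and P(H|D) is the posterior probability of H once the data have been taken into account.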

Overlooking the importance of inductive reasoning has led to numerous pathological developments that have hindered the growth of science. One example is the widespread and remarkably naive devotion that many scientists have towards the philosophy of the anti-inductivist Karl Popper; his doctrine of falsifiability has led to an unhealthy neglect of an essential fact of probabilistic reasoning, namely that data can make theories more probable. More generally, the rise of the empiricist philosophical tradition that stems from David Hume (another anti-inductivist) spawned the frequentist conception of probability, with its regrettable legacy of confusion and irrationality.

My own field of cosmology provides the largest-scale illustration of this process in action. Theorists make postulates about the contents of the Universe and the laws that describe it and try to calculate what measurable consequences their ideas might have. Observers make measurements as best they can, but these are inevitably restricted in number and accuracy by technical considerations. Over the years, theoretical cosmologists deductively explored the possible ways Einstein’s General Theory of Relativity could be applied to the cosmos at large. Eventually a family of theoretical models was constructed, each of which could, in principle, describe a universe with the same basic properties as ours. But determining which, if any, of these models applied to the real thing required more detailed data. For example, observations of the properties of individual galaxies led to the inferred presence of cosmologically important quantities of dark matter. Inference also played a key role in establishing the existence of dark energy as a major part of the overall energy budget of the Universe. The result is that we have now arrived at a standard model of cosmology which accounts pretty well for most relevant data.

Nothing is certain, of course, and this model may well turn out to be flawed in important ways. All the best detective stories have twists in which the favoured theory turns out to be wrong. But although the puzzle isn’t exactly solved, we’ve got good reasons for thinking we’re nearer to at least some of the answers than we were 20 years ago.

I think Sherlock Holmes would have approved.

Test Odds

Posted in Cricket on August 24, 2009 by telescoper

I’m very grateful to Daniel Mortlock for sending me this fascinating plot. It comes from the cricket pages of The Times Online and it shows how the probability of the various possible outcomes of the Final Ashes Test at the Oval evolved with time according to their “Hawk-Eye Analysis”.

[Figure: Hawk-Eye plot from The Times Online showing how the probabilities of an England win, an Australia win, and a draw evolved during the final Ashes Test at the Oval.]

I think I should mention that Daniel is an Australian supporter, so this graph must make painful viewing for him! Anyway, it’s a fascinating plot, which I read as an application of Bayesian probability.

At the beginning of the match, a prior probability is assigned to each of the three possible outcomes: England win (blue); Australia win (yellow); and Draw (grey). It looks like these are roughly in the ratio 1:2:2. No details are given as to how these were arrived at, but they must have taken into account the fact that Australia thrashed England in the previous match at Headingley. Information from previous Tests at the Oval was presumably also included. I don’t know if the fact that England won the toss and decided to bat first altered the prior odds significantly, but it should have.

Anyway, what happens next depends on how sophisticated a model is used to determine the subsequent evolution of the probabilities. In good Bayesian fashion, information is incorporated into a likelihood function determined by the model, and this is used to update the prior to produce a posterior probability. This is passed on as a prior for the next time step. And so it goes on until the end of the match where, regardless of what prior is chosen, the data force the model to the correct conclusion.
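The actual Hawk-Eye model is proprietary, but a minimal sketch of this kind of sequential updating might look like the following. Everything here apart from the 1:2:2 prior read off the plot – the event types and all the likelihood numbers – is invented purely for illustration:

```python
import numpy as np

# Hypothetical prior odds England : Australia : Draw of 1:2:2,
# roughly as read off the start of the plot.
posterior = np.array([1.0, 2.0, 2.0])
posterior /= posterior.sum()

# Toy likelihoods: how probable each observed event would be under
# each of the three outcomes. These numbers are made up.
LIKELIHOOD = {
    "england_wicket_falls":   np.array([0.3, 0.6, 0.4]),
    "australia_wicket_falls": np.array([0.6, 0.3, 0.4]),
    "quiet_over":             np.array([0.5, 0.5, 0.6]),
}

# At each time step yesterday's posterior becomes today's prior.
for event in ("quiet_over", "australia_wicket_falls", "australia_wicket_falls"):
    posterior = posterior * LIKELIHOOD[event]  # Bayes: prior times likelihood
    posterior /= posterior.sum()               # renormalise over the outcomes

print(dict(zip(("England win", "Australia win", "Draw"), posterior.round(3))))
```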

The red dots show the fall of wickets, but the odds fluctuate continually in accord with variables such as scoring rate, number of wickets, and, presumably, the weather. Some form of difference equation is clearly being used, but we don’t know the details.

England got off to a pretty good start, so their probability of winning started to creep up, but not by all that much, presumably because the model didn’t think their first-innings total of 332 was enough against a good batting side like Australia. However, the probability of a draw fell more significantly, as a result of fairly quick scoring and the lack of any rain delays.

When the Australians batted they were going well at the start, so England’s probability of winning started to fall and Australia’s to rise. But when Australia started to lose quick wickets (largely to Stuart Broad), the blue and yellow trajectories swapped over and England became favourites by a large margin. Despite a wobble when they lost 3 early wickets, and some jitters when Australia’s batsmen put healthy partnerships together, England remained the more likely winners from that point to the end.

Although it all basically makes some sense, there are some curiosities. Daniel Mortlock asked, for example, whether Australia were really as likely to win at about 200 for 2 on the fourth day as England were when Australia were 70 without loss in the first innings. That’s what the graph seems to say. His reading of this is that too much stock is placed in the difficulty of breaking a big (100+ runs) partnership, as the curves seem to “accelerate” when the batsmen are doing well.

I wonder how new information is included in general terms. Australia’s poor first innings batting (160 all out) in any case only reduced their win probability to about the level that England started at. How was their batting in the first innings balanced against their performance in the last match?

I’d love to know more about the algorithm used in this analysis, but I suspect it is copyright. There may be a good reason for not disclosing it. I have noticed in recent years that bookmakers have been setting extremely parsimonious odds for cricket outcomes. Gone are the days (Headingley 1981) when bookmakers offered 500-1 against England to beat Australia, which they then proceeded to do. In those days the bookmakers relied on expert advisors to fix their odds. I believe it was the late Godfrey Evans who persuaded them to offer 500-1. I’m not sure if they ever asked him again!

The system on which Hawk-Eye is based is much more conservative. Even on the last day of the Test, the odds against an Australian victory remained around 4-1 until they were down to their last few wickets. Notice also that the odds against a draw never got as long as they should have done, even when that outcome was virtually impossible. On the morning of the final day I could only find 10-1 against the draw, which I think is remarkably ungenerous. However, even with an England victory a near certainty you could still find odds like 1-4. It seems the system doesn’t like to produce extremely long or extremely short odds.

Perhaps the bookies are now using analyses like this to set their odds, which explains why betting on cricket isn’t as much fun as it used to be. On the other hand, if the system is predisposed against very short odds then maybe that’s the kind of bet to make in order to win. Things like this may be why the algorithm behind Hawkeye isn’t published…

On the Cards

Posted in Uncategorized on January 27, 2009 by telescoper

After an interesting chat yesterday with a colleague about the difficulties involved in teaching probabilities, I thought it might be fun to write something about card games. Actually, much of science is intimately concerned with statistical reasoning and if any one activity was responsible for the development of the theory of probability, which underpins statistics, it was the rise of games of chance in the 16th and 17th centuries. Card, dice and lottery games still provide great examples of how to calculate probabilities, a skill which is very important for a physicist.

For those of you who did not misspend your youth playing with cards like I did, I should remind you that a standard pack of playing cards has 52 cards. There are 4 suits: clubs (♣), diamonds (♦), hearts (♥) and spades (♠). Clubs and spades are coloured black, while diamonds and hearts are red. Each suit contains thirteen cards, including an Ace (A), the plain numbered cards (2, 3, 4, 5, 6, 7, 8, 9 and 10), and the face cards: Jack (J), Queen (Q), and King (K). In most games the most valuable is the Ace, followed by King, Queen and Jack and then from 10 down to 2.

I’ll start with Poker, because it seems to be one of the simplest ways of losing money these days. Imagine I start with a well-shuffled pack of 52 cards. In a game of five-card draw poker, the players essentially bet on who has the best hand made from five cards drawn from the pack. In more complicated versions of poker, such as Texas hold’em, one has, say, two “private” cards in one’s hand and, say, five on the table in plain view. These community cards are usually revealed in stages, allowing a round of betting at each stage. One has to make the best hand one can using five cards from one’s private cards and those on the table. The existence of community cards makes this very interesting because it gives some additional information about other players’ holdings. For the present discussion, however, I will just stick to individual hands and their probabilities.

How many different five-card poker hands are possible?

To answer this question we need to know about permutations and combinations. Imagine constructing a poker hand from a standard deck. The deck is full when you start, which gives you 52 choices for the first card of your hand. Once that is taken you have 51 choices for the second, and so on down to 48 choices for the last card. One might think the answer is therefore 52×51×50×49×48 = 311,875,200, but that’s not right because it doesn’t actually matter which order your five cards are dealt to you.

Suppose you have 4 aces and the 2 of clubs in your hand; the sequences (A♣, A♥, A♦, A♠, 2♣) and (A♥, 2♣, A♠, A♦, A♣) are counted as distinct hands in the number I obtained above. There are many other possibilities like this where the cards are the same but the order is different. In fact there are 5×4×3×2×1 = 120 such permutations. Mathematically this is denoted 5!, or five-factorial. Dividing the number above by this gives the actual number of possible five-card poker hands: 2,598,960. This number is important because it describes the size of the “possibility space”. Each of these hands is a possible poker deal, and each is assumed to be “equally likely”, unless the dealer is cheating.

This calculation is an example of a mathematical combination as opposed to a permutation. The number of combinations one can make of r things chosen from a set of n is usually denoted C(n,r). In the example above, r = 5 and n = 52. Note that 52×51×50×49×48 can be written 52!/47!. The general result for the number of combinations can likewise be written C(n,r) = n!/[(n−r)! r!].
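As a quick sanity check of these numbers, here is a minimal sketch using only Python’s standard library:

```python
from math import factorial, comb

# Ordered ways to deal 5 cards from 52: 52!/47! = 52 x 51 x 50 x 49 x 48
permutations = factorial(52) // factorial(47)   # 311,875,200

# Dividing out the 5! orderings of a hand gives the combinations C(52,5)
hands = permutations // factorial(5)            # 2,598,960
assert hands == comb(52, 5)                     # the library agrees
```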

Poker hands are characterized by the occurrence of particular events of varying degrees of probability. For example, a flush is five cards of the same suit but not in sequence (e.g. 2♠, 4♠, 7♠, 9♠, Q♠). A numerical sequence of cards regardless of suit (e.g. 7♣, 8♠, 9♥, 10♦, J♥) is called a straight. A sequence of cards of the same suit is called a straight flush. One can also have a pair of cards of the same value, or two pairs, or three of a kind, or four of a kind, or a full house which is three of one kind and two of another. One can also have nothing at all, i.e. not even a pair.

The relative value of the different hands is determined by how probable they are, and to work that out takes quite a bit of effort.

Consider the probability of getting, say, 5 spades (in other words, a spade flush). To do this we have to calculate the number of distinct hands that have this composition. There are 13 spades in the deck to start with, so there are 13×12×11×10×9 permutations of 5 spades drawn from the pack but, because of the possible internal rearrangements, we have to divide again by 5!. The result is that there are 1287 possible hands containing 5 spades. Not all of these are mere flushes, however. Some of them will include sequences too, e.g. 8♠, 9♠, 10♠, J♠, Q♠, which makes them straight flushes. There are only 10 possible straight flushes in spades (running from A-2-3-4-5 up to 10-J-Q-K-A, the Ace counting either low or high), so only 1277 of the possible hands counted above are just flushes. This logic applies to any of the suits, so in all there are 1277×4 = 5108 flush hands and 10×4 = 40 straight flush hands.

I won’t go through the details of calculating the probability of the other types of hand, but I’ve included a table showing their probabilities obtained by dividing the relevant number of possibilities by the total number of hands (given at the bottom of the middle column).

TYPE OF HAND        Number of Possible Hands    Probability
Straight Flush                            40       0.000015
Four of a Kind                           624       0.000240
Full House                             3,744       0.001441
Flush                                  5,108       0.001965
Straight                              10,200       0.003925
Three of a Kind                       54,912       0.021129
Two Pair                             123,552       0.047539
One Pair                           1,098,240       0.422569
Nothing                            1,302,540       0.501177
TOTALS                             2,598,960       1.000000
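For completeness, here is a minimal sketch of how each line of the table can be computed directly from the kind of counting arguments used above, with Python’s math.comb doing the combinatorics:

```python
from math import comb

total = comb(52, 5)                                  # 2,598,960 possible hands

counts = {
    "Straight Flush":  10 * 4,                       # 10 rank runs in each of 4 suits
    "Four of a Kind":  13 * 48,                      # quad rank, then any 5th card
    "Full House":      13 * comb(4, 3) * 12 * comb(4, 2),
    "Flush":           4 * (comb(13, 5) - 10),       # same suit, minus straight flushes
    "Straight":        10 * 4**5 - 40,               # any suits, minus straight flushes
    "Three of a Kind": 13 * comb(4, 3) * comb(12, 2) * 4 * 4,
    "Two Pair":        comb(13, 2) * comb(4, 2)**2 * 44,
    "One Pair":        13 * comb(4, 2) * comb(12, 3) * 4**3,
}
counts["Nothing"] = total - sum(counts.values())     # 1,302,540

for hand, n in counts.items():
    print(f"{hand:16s} {n:>9,d}  {n / total:.6f}")
```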


Poker involves rounds of betting in which the players, amongst other things, try to assess how likely their hand is to win compared with the others involved in the game. If your hand is weak, you can fold and allow the accumulated bets to be given to your opponents. Alternatively, you can bluff and bet strongly on a poor hand (even if you have “nothing”) to convince your opponents that your hand is strong. This tactic can be extremely successful in the right circumstances. In the words of the late great Paul Newman in the film Cool Hand Luke, “sometimes nothing can be a real cool hand”.

If you bet heavily on your hand, the opponent may well think it is strong even if it contains nothing, and fold even if his hand has a higher value. To bluff successfully requires a good sense of timing – it depends crucially on who gets to bet first – and extremely cool nerves. To spot when an opponent is bluffing requires real psychological insight. These aspects of the game are in many ways more interesting than the basic hand probabilities, and they are difficult to reduce to mathematics.

Another card game that serves as a source for interesting problems in probability is Contract Bridge. This is one of the most difficult card games to play well because it is a game of logic that also involves chance to some degree. Bridge is a game for four people, arranged in two teams of two. The four sit at a table with members of each team facing opposite each other. Traditionally the different positions are called North, South, East and West although you don’t actually need a compass to play. North and South are partners, as are East and West.

For each hand of Bridge an ordinary pack of cards is shuffled and dealt out by one of the players, the dealer. Let us suppose that the dealer in this case is South. The pack is dealt out one card at a time to each player in turn, starting with West (to dealer’s immediate left) then North and so on in a clockwise direction. Each player ends up with thirteen cards when all the cards are dealt.

Now comes the first phase of the game, the auction. Each player looks at their cards and makes a bid, which is essentially a coded message that gives information to their partner about how good their hand is. A bid is basically an undertaking to win a certain number of tricks with a certain suit as trumps (or with no trumps). The meaning of tricks and trumps will become clear later. For example, dealer might bid “one spade”, which is a suggestion that perhaps he and his partner could win one more trick than the opposition with spades as the trump suit. This means winning seven tricks, as there are always thirteen to be won in a given deal. The next to bid – in this case West – can either pass (saying “no bid”) or bid higher, like an auction. The value of the suits increases in the sequence clubs, diamonds, hearts and spades. So to outbid one spade (1S), West has to bid at least two hearts (2H), say, if hearts is the best suit for him; but if South had opened 1C then 1H would have been sufficient to overcall. Next to bid is South’s partner, North. If he likes spades as trumps he can raise the original bid. If he likes them a lot he can jump to a much higher contract, such as four spades (4S).

This is the most straightforward level of Bridge bidding, but in reality there are many bids that don’t mean what they might appear to mean at first sight. Examples include conventional bids (such as Stayman or Blackwood), splinter and transfer bids, and the rest of the complex lexicon of Bridge jargon. There are some bids to which partner must respond (forcing bids), and those to which a response is discretionary. And instead of overcalling a bid, one’s opponents could “double”, either for penalties in the hope that the contract will fail, or as a “take-out” to indicate strength in a suit other than the one just bid.

Bidding carries on in a clockwise direction until nobody dares take it higher. Three successive passes will end the auction, and the contract is then established. Whichever member of the winning partnership first bid the suit that was finally chosen as trumps becomes “declarer”. If we suppose our example ended in 4S, then it is South who becomes declarer, because he opened the bidding with 1S. If West had overcalled two hearts (2H) and this had been passed round the table, West would be declarer.

The scoring system for Bridge encourages teams to go for high contracts rather than low ones, so if one team has the best cards it doesn’t necessarily get an easy ride; it should undertake an ambitious contract rather than stroll through a simple one. In particular there are extra points for making “game” (a contract of four spades, four hearts, five clubs, five diamonds, or three no trumps). There is a huge bonus available for bidding and making a grand slam (an undertaking to win all thirteen tricks, i.e. seven of something) and a smaller but still impressive bonus for a small slam (six of something). This encourages teams to push for a valuable contract: tricks bid and made count a lot more than overtricks even without the slam bonus.

The second phase of the game now starts. The person to the left of declarer plays a card of their choice, possibly following yet another convention, such as “fourth highest of the longest suit”. The player opposite declarer puts all his cards on the table and becomes “dummy”, playing no further part in this particular hand. Dummy’s cards are then entirely under the control of the declarer. All three players can see the cards in dummy, but only declarer can see his own hand. Apart from the role of dummy, the card play is then similar to whist.

Each trick consists of four cards played in clockwise sequence from whoever leads. Each player, including dummy, must follow the suit led if he has a card of that suit in his hand. If a player doesn’t have a card of that suit he may “ruff”, i.e. play a trump card, or simply discard some card (probably of low value) from another suit. Good Bridge players keep a careful track of all discards to improve their knowledge of the cards held by their opponents. Discards can also be used by the defence (i.e. East and West in this case) to signal to each other. Declarer can see dummy’s cards but the defenders don’t get to see each other’s.

One can win a trick in one of two ways. Either one plays a higher card of the suit led than anyone else, e.g. K♥ beats 10♥ or anything else lower than the King. Aces are high, by the way. Alternatively, if one has no cards of the suit that has been led, one can play a trump (or “ruff”). A trump always beats a card of the original suit, but more than one player may ruff and in that case the highest trump played carries the trick. For instance, East may ruff only to be over-ruffed by South if both have none of the suit led. Of course one may not have any trumps at all, making a ruff impossible. If one has neither the original suit nor a trump one has to discard something from another suit. The possibility of winning a trick by a ruff also does not exist if the contract is of the no-trumps variety.

Whoever wins a given trick leads to start the next one. This carries on until thirteen tricks have been played. Then comes the reckoning of whether the contract has been made. If so, points are awarded to declarer’s team. If not, penalty points are awarded to the defenders which are higher if the contract has been doubled. Then it’s time for another hand, probably another drink, and very possibly an argument about how badly declarer played the hand.

I’ve gone through the game in some detail in an attempt to make it clear why this is such an interesting game for probabilistic reasoning. During the auction, partial information is given about every player’s holding. It is vital to interpret this information correctly if the contract is to be made. The auction can reveal which of the defending team holds important high cards, or whether the trump suit is distributed strangely. Because the cards are played in strict clockwise sequence this matters a lot. On the other hand, even with very firm knowledge about where the important cards lie, one still often has a difficult logical puzzle to solve if all the potential winners in one’s hand are actually to be made into tricks. It can be a very subtle game.

I only have space-time for one illustration of this kind of thing, but it’s another one that is fun to work out. As is true to a lesser extent in poker, one is not really interested in the initial probabilities of the different hands but rather how to update these probabilities using conditional information as it may be revealed through the auction and card play. In poker this updating is done largely by interpreting the bets one’s opponents are making.

Let us suppose that I am South, and I have been daring enough to bid a grand slam in spades (7S). West leads, and North lays down dummy. I look at my hand and dummy, and realise that we have 11 trumps between us, missing only the King (K) and the 2. I have all other suits covered, and enough winners to make the contract provided I can make sure I win all the trump tricks. The King, however, poses a problem. The Ace of Spades will beat the King, but if I just lead the Ace, it may be that one of East or West has both the K and the 2. In this case he would simply play the two to my Ace. The King would be an automatic winner then: as the highest remaining trump it must win a trick eventually. The contract is then doomed.

On the other hand if the spades split 1-1 between East and West then the King drops when I lead the Ace, so that strategy makes the contract. It all depends how the cards split.

But there is a different way to play this situation. Suppose, for example, that A♠ and Q♠ are on the table (in dummy’s hand) and I, as declarer, have managed to win the first trick in my hand. If I think the K♠ lies in West’s hand, I lead a spade. West has to follow suit if he can. If he has the King, and plays it, I can cover it with the Ace so it doesn’t win. If, however, West plays low I can play Q♠. This will win if I am right about the location of the King. Next time I can lead the A♠ from dummy and the King will fall. This play is called a finesse.

But is this better than the previous strategy, playing for the drop? It’s all a question of probabilities, and this in turn boils down to the number of possible deals that allow each strategy to work.

To start with, we need the total number of possible bridge hands. This is quite easy: it’s the number of combinations of 13 objects taken from 52, i.e. C(52,13). This is a truly enormous number: over 600 billion. You have to play a lot of games to expect to be dealt the same hand twice!

What we now have to do is evaluate the probability of each possible arrangement of the missing King and two. Dummy’s and declarer’s hands are known to me. There are 26 remaining cards whose location I do not know. The relevant space of possibilities is now smaller than the original one. I have 26 cards to assign between East and West. There are C(26,13) ways of assigning West’s 13 cards, but once I have done this the remaining 13 must be in East’s hand.

Suppose West has the 2 but not the K. Conditional on this assumption, I know one of his cards, but there are 12 others remaining to be assigned. There are therefore C(24,12) hands with this possible arrangement of the trumps. Obviously the K has to be with East in this case. The finesse would not work, as East would cover the Q with the K, but the K would drop if the A were played.

The opposite situation, with West having the K but not the 2, has the same number of possibilities associated with it. Here West must play the K when a spade is led, so it will inevitably lose to the A. South abandons the idea of finessing when West rises with the K, and simply covers it with the A.

Suppose instead West doesn’t have any trumps. There are C(24,13) ways of constructing such a hand: 13 cards from the 24 remaining non-trumps. Here the finesse fails because the K is with East, but the drop fails too: East plays the 2 on the A and the K becomes a winner.

The remaining possibility is that West has both trumps: this can happen in C(24,11) ways. Here the finesse works but the drop fails. If West plays low on the South lead, declarer calls for the Q from dummy to hold the trick. On the next lead he plays the A to drop the K.

To turn these counts into probabilities we just divide by the total number of different ways I can construct the hands of East and West, which is C(26,13). The results are summarized in the table below.

Spades in West’s hand   Number of hands   Probability   Drop   Finesse
None                    C(24,13)          0.24          0      0
K                       C(24,12)          0.26          0.26   0.26
2                       C(24,12)          0.26          0.26   0
K2                      C(24,11)          0.24          0      0.24
Total                   C(26,13)          1.00          0.52   0.50

The last two columns show the contributions of each arrangement to the probability of success of either playing for the drop or the finesse. You can see that the drop is slightly more likely to work than the finesse in this case.
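These numbers are easy to verify; here is a minimal check using Python’s standard library:

```python
from math import comb

total = comb(26, 13)            # ways to deal the 26 unseen cards to West

# Probability of each arrangement of the missing K and 2 in West's hand
p_none = comb(24, 13) / total   # both trumps with East
p_K    = comb(24, 12) / total   # West has the K only
p_2    = comb(24, 12) / total   # West has the 2 only
p_K2   = comb(24, 11) / total   # West has both

p_drop    = p_K + p_2           # the drop needs a 1-1 split
p_finesse = p_K + p_K2          # the finesse needs the K with West

print(round(p_drop, 2), round(p_finesse, 2))   # 0.52 0.5
```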

Note, however, that this ignores any information gleaned from the auction, which could be crucial. For example, if West had made a bid then it is more likely that he holds cards of some value, which might suggest that the K is in his hand. Note also that the probability of the drop and the probability of the finesse do not add up to one. This is because there are situations where both could work or both could fail.

This calculation does not mean that the finesse is never the right tactic. It sometimes has much higher probability than the drop, and is often strongly motivated by information the auction has revealed. Calculating the odds precisely, however, gets more complicated the more cards are missing from declarer’s holding. For those of you too lazy to compute the probabilities, the book On Gambling, by Oswald Jacoby contains tables of the odds for just about any bridge situation you can think of.

Finally on the subject of Bridge, I wanted to mention a fact that many people think is paradoxical but which isn’t really. Looking at the table shows that the odds of a 1-1 split in spades here are 0.52:0.48, or 13:12. This comes from how many cards are in East’s and West’s hands when the play is attempted. There is a much quicker way of getting this answer than the brute-force method I used above. Consider the hand with the spade 2 in it. There are 12 remaining slots in that hand that the spade K might fill, but there are 13 available slots for it in the other. The odds on a 1-1 split must therefore be 13:12. Now suppose that instead of going straight for the trumps, I play off a few winners in the side suits (risking that they might be ruffed, of course). Suppose I lead out three Aces in the three suits other than spades and they all win. Now East and West have only 20 cards between them and, by exactly the same reasoning as before, the odds of a 1-1 split have become 10:9 instead of 13:12. Playing out seemingly irrelevant suits has increased the probability of the drop working. Although I haven’t touched the spades, my assessment of the probability of the spade distribution has changed significantly.
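This “vacant places” argument fits in a couple of lines of code; here is a minimal sketch (the function name is mine, purely for illustration):

```python
from fractions import Fraction

def one_one_split_odds(cards_per_defender):
    """Odds in favour of the missing K and 2 splitting 1-1.
    One defender holds the 2, leaving (n - 1) vacant places there
    for the K, against n vacant places in the other hand."""
    n = cards_per_defender
    return Fraction(n, n - 1)

print(one_one_split_odds(13))  # 13/12 at the start of play
print(one_one_split_odds(10))  # 10/9 after three side-suit tricks
```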

This sort of thing is a major reason why I always think of probabilities in a Bayesian way. As information is gradually revealed one updates the assessment of the probability of the remaining unknowns.

But probability is only a part of Bridge; the best players don’t actually leave very much to chance…