Archive for statistics

One More for the Bad Statistics in Astronomy File…

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , on May 20, 2015 by telescoper

It’s been a while since I last posted anything in the file marked Bad Statistics, but I can remedy that this morning with a comment or two on the following paper by Robertson et al., which I found on the arXiv via the Astrostatistics Facebook page. It’s called Stellar activity mimics a habitable-zone planet around Kapteyn’s star and its abstract is as follows:

Kapteyn’s star is an old M subdwarf believed to be a member of the Galactic halo population of stars. A recent study has claimed the existence of two super-Earth planets around the star based on radial velocity (RV) observations. The innermost of these candidate planets–Kapteyn b (P = 48 days)–resides within the circumstellar habitable zone. Given recent progress in understanding the impact of stellar activity in detecting planetary signals, we have analyzed the observed HARPS data for signatures of stellar activity. We find that while Kapteyn’s star is photometrically very stable, a suite of spectral activity indices reveals a large-amplitude rotation signal, and we determine the stellar rotation period to be 143 days. The spectral activity tracers are strongly correlated with the purported RV signal of “planet b,” and the 48-day period is an integer fraction (1/3) of the stellar rotation period. We conclude that Kapteyn b is not a planet in the Habitable Zone, but an artifact of stellar activity.

It’s not really my area of specialism but it seemed an interesting conclusion, so I had a skim through the rest of the paper. Here’s the pertinent figure, Figure 3:

[Figure 3 of Robertson et al.: correlations between the spectral activity indicators and the radial-velocity signal]

It looks like difficult data to do a correlation analysis on, and there are lots of questions to be asked about the form of the errors and how the bunching of the data is handled, to give just two examples. I’d like to have seen a much more comprehensive discussion of this in the paper. In particular, the statistic chosen to measure the correlation between variates is the Pearson product-moment correlation coefficient, which is intended to measure linear association between variables. There may indeed be correlations in the plots shown above, but it doesn’t look to me as though a straight-line fit characterizes them very well. It looks to me in some of the cases that there are simply two groups of data points…
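To illustrate the worry about bunching, here is a minimal sketch (in Python, using numpy and scipy, with entirely made-up data rather than anything from the paper): two well-separated clumps of points with no association at all within either clump still produce a large Pearson r and a tiny p-value, because the coefficient responds to the offset between the groups rather than to any genuine linear trend.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two clumps with no correlation *within* either clump; the apparent
# association comes entirely from the offset between the groups.
group1 = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(30, 2))
group2 = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(30, 2))
data = np.vstack([group1, group2])

r, p = stats.pearsonr(data[:, 0], data[:, 1])
print(f"pooled: r = {r:.2f}, p = {p:.2g}")        # large r, tiny p-value

for g in (group1, group2):                        # within each clump: r consistent with zero
    print("within clump:", stats.pearsonr(g[:, 0], g[:, 1]))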

However, that’s not the real reason for flagging this one up. The real reason is the following statement in the text:

bad_stat_text

Aargh!

No matter how the p-value is arrived at (see comments above), it says nothing about the “probability of no correlation”. This is an error which is sadly commonplace throughout the scientific literature, not just astronomy. The point is that the p-value relates to the probability that the given value of the test statistic (in this case the Pearson product-moment correlation coefficient, r) would arise by chance in the sample if the null hypothesis H (in this case that the two variates are uncorrelated) were true. In other words it relates to P(r|H). It does not tell us anything directly about the probability of H. That would require the use of Bayes’ Theorem. If you want to say anything at all about the probability of a hypothesis being true or not you should use a Bayesian approach. And if you don’t want to say anything about the probability of a hypothesis being true or not then what are you trying to do anyway?
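To spell the point out, Bayes’ theorem gives

P(H|r) = \frac{P(r|H)\,P(H)}{P(r)}

so the p-value bears only on the likelihood factor P(r|H), and even then only as a tail probability rather than P(r|H) itself; the quantity of interest, P(H|r), cannot be obtained without also specifying a prior P(H).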

If I had my way I would ban p-values altogether, but if people are going to use them I do wish they would be more careful about the statements they make about them.

Social Physics & Astronomy

Posted in The Universe and Stuff with tags , , , , , , on January 25, 2015 by telescoper

When I give popular talks about Cosmology,  I sometimes look for appropriate analogies or metaphors in television programmes about forensic science, such as CSI: Crime Scene Investigation which I watch quite regularly (to the disdain of many of my colleagues and friends). Cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe;  forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens.

Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish the truth about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works. I have a feeling that I’ve stretched this analogy to breaking point but at least it provides some kind of excuse for writing about an interesting historical connection between astronomy and forensic science by way of the social sciences.

The gentleman shown in the picture on the left is Lambert Adolphe Jacques Quételet, a Belgian astronomer who lived from 1796 to 1874. His principal research interest was in the field of celestial mechanics. He was also an expert in statistics. In Quételet’s time it was by no means unusual for astronomers to be well-versed in statistics, but he was exceptionally distinguished in that field. Indeed, Quételet has been called “the father of modern statistics” and, amongst other things, he was responsible for organizing the first ever international conference on statistics in Paris in 1853.

His fame as a statistician owed less to its applications to astronomy, however, than to the fact that in 1835 he had written a very influential book which, in English, was titled A Treatise on Man but whose somewhat more verbose original French title included the phrase physique sociale (“social physics”). I don’t think modern social scientists would see much of a connection between what they do and what we do in the physical sciences. Indeed the philosopher Auguste Comte was annoyed that Quételet appropriated the phrase “social physics” because he did not approve of the quantitative, statistics-based approach that it had come to represent. For that reason Comte ditched the term from his own work and invented the modern subject of sociology…

Quételet had been struck not only by the regular motions performed by the planets across the sky, but also by the existence of strong patterns in social phenomena, such as suicides and crime. If statistics was essential for understanding the former, should it not be deployed in the study of the latter? Quételet’s first book was an attempt to apply statistical methods to the development of man’s physical and intellectual faculties. His follow-up book Anthropometry, or the Measurement of Different Faculties in Man (1871) carried these ideas further, at the expense of a much clumsier title.

This foray into “social physics” was controversial at the time, for good reason. It also made Quételet extremely famous in his lifetime and his influence became widespread. For example, Francis Galton wrote about the deep impact Quételet had on a person who went on to become extremely famous:

Her statistics were more than a study, they were indeed her religion. For her Quételet was the hero as scientist, and the presentation copy of his “Social Physics” is annotated on every page. Florence Nightingale believed – and in all the actions of her life acted on that belief – that the administrator could only be successful if he were guided by statistical knowledge. The legislator – to say nothing of the politician – too often failed for want of this knowledge. Nay, she went further; she held that the universe – including human communities – was evolving in accordance with a divine plan; that it was man’s business to endeavour to understand this plan and guide his actions in sympathy with it. But to understand God’s thoughts, she held we must study statistics, for these are the measure of His purpose. Thus the study of statistics was for her a religious duty.

The person  in question was of course  Florence Nightingale. Not many people know that she was an adept statistician who was an early advocate of the use of pie charts to represent data graphically; she apparently found them useful when dealing with dim-witted army officers and dimmer-witted politicians.

The type of thinking described in the quote  also spawned a number of highly unsavoury developments in pseudoscience, such as the eugenics movement (in which Galton himself was involved), and some of the vile activities related to it that were carried out in Nazi Germany. But an idea is not responsible for the people who believe in it, and Quételet’s work did lead to many good things, such as the beginnings of forensic science.

A young medical student by the name of Louis-Adolphe Bertillon was excited by the whole idea of “social physics”, to the extent that he found himself imprisoned for his dangerous ideas during the revolution of 1848, along with one of his Professors, Achille Guillard, who later invented the subject of demography, the study of racial groups and regional populations. When they were both released, Bertillon became a close confidant of Guillard and eventually married his daughter Zoé. Their second son, Alphonse Bertillon, turned out to be a prodigy.

Young Alphonse was so inspired by Quételet’s work, which had no doubt been introduced to him by his father, that he hit upon a novel way to solve crimes. He would create a database of measured physical characteristics of convicted criminals. He chose 11 basic measurements, including length and width of head, right ear, forearm, middle and ring fingers, left foot, height, length of trunk, and so on. On their own none of these individual characteristics could be probative, but it ought to be possible to use a large number of different measurements to establish identity with a very high probability. Indeed, after two years’ study, Bertillon reckoned that the chances of two individuals having all 11 measurements in common were about four million to one. He further improved the system by adding photographs, in portrait and from the side, and a note of any special marks, like scars or moles.

Bertillonage, as this system became known, was rather cumbersome but proved highly successful in a number of high-profile criminal cases in Paris. By 1892, Bertillon was exceedingly famous but nowadays the word bertillonage only appears in places like the Observer’s Azed crossword.

The main reason why Bertillon’s fame subsided and his system fell into disuse was the development of an alternative and much simpler method of criminal identification: fingerprints. The first systematic use of fingerprints on a large scale was implemented in India in 1858 in an attempt to stamp out electoral fraud.

The name of the British civil servant who had the idea of using fingerprinting in this way was Sir William James Herschel (1833-1917), the eldest child of Sir John Herschel, the astronomer, and thus the grandson of Sir William Herschel, the discoverer of Uranus. Another interesting connection between astronomy and forensic science.


Bayes, Laplace and Bayes’ Theorem

Posted in Bad Statistics with tags , , , , , , , , on October 1, 2014 by telescoper

A  couple of interesting pieces have appeared which discuss Bayesian reasoning in the popular media. One is by Jon Butterworth in his Grauniad science blog and the other is a feature article in the New York Times. I’m in early today because I have an all-day Teaching and Learning Strategy Meeting so before I disappear for that I thought I’d post a quick bit of background.

One way to get to Bayes’ Theorem is by starting with

P(A|C)P(B|AC)=P(B|C)P(A|BC)=P(AB|C)

where I refer to three logical propositions A, B and C and the vertical bar “|” denotes conditioning, i.e. P(A|B) means the probability of A being true given the assumed truth of B; “AB” means “A and B”, etc. This basically follows from the fact that “A and B” must always be equivalent to “B and A”.  Bayes’ theorem  then follows straightforwardly as

P(B|AC) = K^{-1}P(B|C)P(A|BC) = K^{-1} P(AB|C)

where

K=P(A|C).

Many versions of this, including the one in Jon Butterworth’s blog, exclude the third proposition and refer to A and B only. I prefer to keep an extra one in there to remind us that every statement about probability depends on information either known or assumed to be known; any proper statement of probability requires this information to be stated clearly and used appropriately but sadly this requirement is frequently ignored.

Although this is called Bayes’ theorem, the general form of it as stated here was actually first written down not by Bayes, but by Laplace. What Bayes did was derive the special case of this formula for “inverting” the binomial distribution. This distribution gives the probability of x successes in n independent “trials” each having the same probability of success, p; each “trial” has only two possible outcomes (“success” or “failure”). Trials like this are usually called Bernoulli trials, after Jacob Bernoulli. If we ask the question “what is the probability of exactly x successes from the possible n?”, the answer is given by the binomial distribution:

P_n(x|n,p)= C(n,x) p^x (1-p)^{n-x}

where

C(n,x)= \frac{n!}{x!(n-x)!}

is the number of distinct combinations of x objects that can be drawn from a pool of n.

You can probably see immediately how this arises. The probability of x consecutive successes is p multiplied by itself x times, or p^x. The probability of (n-x) successive failures is similarly (1-p)^{n-x}. The last two terms basically therefore tell us the probability that we have exactly x successes (since there must be n-x failures). The combinatorial factor in front takes account of the fact that the ordering of successes and failures doesn’t matter.

The binomial distribution applies, for example, to repeated tosses of a coin, in which case p is taken to be 0.5 for a fair coin. A biased coin might have a different value of p, but as long as the tosses are independent the formula still applies. The binomial distribution also applies to problems involving drawing balls from urns: it works exactly if the balls are replaced in the urn after each draw, but it also applies approximately without replacement, as long as the number of draws is much smaller than the number of balls in the urn. I leave it as an exercise to calculate the expectation value of the binomial distribution, but the result is not surprising: E(X)=np. If you toss a fair coin ten times the expectation value for the number of heads is 10 times 0.5, which is five. No surprise there. After another bit of maths, the variance of the distribution can also be found. It is np(1-p).
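If you would rather check these results numerically than do the algebra, here is a quick sketch (in Python, assuming numpy and scipy are available) for the fair-coin case:

import numpy as np
from scipy import stats

n, p = 10, 0.5                       # ten tosses of a fair coin
x = np.arange(n + 1)
pmf = stats.binom.pmf(x, n, p)       # C(n,x) p^x (1-p)^(n-x)

mean = np.sum(x * pmf)               # expectation value: should be n*p = 5
var = np.sum((x - mean)**2 * pmf)    # variance: should be n*p*(1-p) = 2.5
print(mean, var)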

So this gives us the probability of x given a fixed value of p. Bayes was interested in the inverse of this result, the probability of p given x. In other words, Bayes was interested in the answer to the question “If I perform n independent trials and get x successes, what is the probability distribution of p?”. This is a classic example of inverse reasoning, in that it involved turning something like P(A|BC) into something like P(B|AC), which is what is achieved by the theorem stated at the start of this post.
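In modern notation, and with a uniform prior for p (which is in effect what Bayes assumed), the answer follows immediately from the theorem stated at the start:

P(p|x,n) \propto P(x|n,p)\,P(p) \propto p^x (1-p)^{n-x}

which, once normalized over 0 \leq p \leq 1, is a Beta distribution with parameters x+1 and n-x+1.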

Bayes got the correct answer for his problem, eventually, but by very convoluted reasoning. In my opinion it is quite difficult to justify the name Bayes’ theorem based on what he actually did, although Laplace did specifically acknowledge this contribution when he derived the general result later, which is no doubt why the theorem is always named in Bayes’ honour.


This is not the only example in science where the wrong person’s name is attached to a result or discovery. Stigler’s Law of Eponymy strikes again!

So who was the mysterious mathematician behind this result? Thomas Bayes was born in 1702, son of Joshua Bayes, who was a Fellow of the Royal Society (FRS) and one of the very first nonconformist ministers to be ordained in England. Thomas was himself ordained and for a while worked with his father in the Presbyterian Meeting House in Leather Lane, near Holborn in London. In 1720 he was a minister in Tunbridge Wells, in Kent. He retired from the church in 1752 and died in 1761. Thomas Bayes didn’t publish a single paper on mathematics in his own name during his lifetime but was elected a Fellow of the Royal Society (FRS) in 1742.

The paper containing the theorem that now bears his name was published posthumously in the Philosophical Transactions of the Royal Society of London in 1763. In his great Philosophical Essay on Probabilities Laplace wrote:

Bayes, in the Transactions Philosophiques of the Year 1763, sought directly the probability that the possibilities indicated by past experiences are comprised within given limits; and he has arrived at this in a refined and very ingenious manner, although a little perplexing.

The reasoning in the 1763 paper is indeed perplexing, and I remain convinced that the general form we now refer to as Bayes’ Theorem should really be called Laplace’s Theorem. Nevertheless, Bayes did establish an extremely important principle that is reflected in the title of the New York Times piece I referred to at the start of this piece. In a nutshell this is that probabilities of future events can be updated on the basis of past measurements or, as I prefer to put it, “one person’s posterior is another’s prior”.


Politics, Polls and Insignificance

Posted in Bad Statistics, Politics with tags , , , , , on July 29, 2014 by telescoper

In between various tasks I had a look at the news and saw a story about opinion polls that encouraged me to make another quick contribution to my bad statistics folder.

The piece concerned (in the Independent) includes the following statement:

A ComRes survey for The Independent shows that the Conservatives have dropped to 27 per cent, their lowest in a poll for this newspaper since the 2010 election. The party is down three points on last month, while Labour, now on 33 per cent, is up one point. Ukip is down one point to 17 per cent, with the Liberal Democrats up one point to eight per cent and the Green Party up two points to seven per cent.

The link added to ComRes is mine; the full survey can be found here. Unfortunately, the report, as is sadly almost always the case in surveys of this kind, neglects any mention of the statistical uncertainty in the poll. In fact the last point is based on a telephone poll of a sample of just 1001 respondents. Suppose the fraction of the population having the intention to vote for a particular party is p. For a sample of size n with x respondents indicating that they intend to vote for that party, one can straightforwardly estimate p \simeq x/n. So far so good, as long as there is no bias induced by the form of the question asked nor in the selection of the sample, which for a telephone poll is doubtful.

A  little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of p in terms of the sampling error:

\sigma = \sqrt{\frac{p(1-p)}{n}}

For the sample size given, and a value p \simeq 0.33 this amounts to a standard error of about 1.5%. About 95% of samples drawn from a population in which the true fraction is p will yield an estimate within p \pm 2\sigma, i.e. within about 3% of the true figure. In other words the typical variation between two samples drawn from the same underlying population is about 3%.

If you don’t believe my calculation then you could use ComRes’ own “margin of error calculator”. The UK electorate as of 2012 numbered 46,353,900 and a sample size of 1001 returns a margin of error of 3.1%. This figure is not quoted in the report however.
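For anyone who wants to check the arithmetic, here is a minimal sketch in Python; the first number reproduces the ±3% figure above, the second the 3.1% “margin of error”, which presumably comes from the conventional worst case p = 0.5 with a factor of 1.96:

import math

def sampling_error(p, n):
    # Standard error of an estimated proportion p from a sample of size n
    return math.sqrt(p * (1 - p) / n)

n = 1001
print(2 * sampling_error(0.33, n))     # ~0.030, i.e. about 3%
print(1.96 * sampling_error(0.5, n))   # ~0.031, the quoted 3.1% margin of error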

Looking at the figures quoted in the report will tell you that all of the changes reported since last month’s poll are within the sampling uncertainty and are therefore consistent with no change at all in underlying voting intentions over this period.

A summary of the report posted elsewhere states:

A ComRes survey for the Independent shows that Labour have jumped one point to 33 per cent in opinion ratings, with the Conservatives dropping to 27 per cent – their lowest support since the 2010 election.

No! There’s no evidence of support for Labour having “jumped one point”, even if you could describe such a marginal change as a “jump” in the first place.

Statistical illiteracy is as widespread amongst politicians as it is amongst journalists, but the fact that silly reports like this are commonplace doesn’t make them any less annoying. After all, the idea of sampling uncertainty isn’t all that difficult to understand. Is it?

And with so many more important things going on in the world that deserve better press coverage than they are getting, why does a “quality” newspaper waste its valuable column inches on this sort of twaddle?

A Keno Game Problem

Posted in Cute Problems with tags , , , , on July 25, 2014 by telescoper

It’s been a while since I posted anything in the Cute Problems category so, given that I’ve got an unexpected gap of half an hour today, I thought I’d return to one of my side interests, the mathematics of games and gambling.

There is a variety of gambling games called Keno games in which a player selects (or is given) a set of numbers, some or all of which the player hopes to match with numbers drawn without replacement from a larger set of numbers. A common example of this type of game is Bingo. These games mostly originate in the 19th Century when travelling carnivals and funfairs often involved booths in which customers could gamble in various ways; similar things happen today, though perhaps with more sophisticated games.

In modern Casino Keno (sometimes called Race Horse Keno) a player receives a card with the numbers from 1 to 80 marked on it. He or she then marks a selection between 1 and 15 numbers and indicates the amount of a proposed bet; if n numbers are marked then the game is called `n-spot Keno’. Obviously, in 1-spot Keno, only one number is marked. Twenty numbers are then drawn without replacement from a set comprising the integers 1 to 80, using some form of randomizing device. If an appropriate proportion of the marked numbers are in fact drawn the player gets a payoff calculated by the House. Below you can see the usual payoffs for 10-spot Keno:

[Table: the usual payoffs for 10-spot Keno]
If fewer than five of your numbers are drawn, you lose your £1 stake. The expected gain on a £1 bet can be calculated by working out the probability of each of the outcomes listed above multiplied by the corresponding payoff, adding these together and then subtracting the probability of losing your stake (which corresponds to a gain of -£1). If this overall expected gain is negative (which it will be for any competently run casino) then the expected loss is called the house edge. In other words, if you can expect to lose £X on a £1 bet then X is the house edge.
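If you want to check your answer numerically, here is a rough sketch in Python of the probability part. The probabilities are hypergeometric (the house draws 20 of the 80 numbers; you have marked 10); the payoff values below are placeholders only and must be replaced by the figures from the table above:

from math import comb

def p_match(k, spots=10, drawn=20, total=80):
    # Probability that exactly k of the player's marked numbers are among those drawn
    return comb(drawn, k) * comb(total - drawn, spots - k) / comb(total, spots)

# Placeholder payoffs (gain in £ on a £1 bet for k matches): replace with the table values
payoff = {5: 2, 6: 20, 7: 100, 8: 500, 9: 5000, 10: 10000}

expected_gain = sum(p_match(k) * payoff[k] for k in payoff)
expected_gain -= sum(p_match(k) for k in range(5))   # losing the £1 stake on 0-4 matches
print(-expected_gain)                                # the house edge (if the gain is negative)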

What is the house edge for 10-spot Keno?

Answers through the comments box please!

Time for a Factorial Moment…

Posted in Bad Statistics with tags , , on July 22, 2014 by telescoper

Another very busy and very hot day so no time for a proper blog post. I suggest we all take a short break and enjoy a Factorial Moment:

[Image: Factorial Moment]

I remember many moons ago spending ages calculating the factorial moments of the Poisson-Lognormal distribution, only to find that they were well known. If only I’d had Google then…
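For anyone who hasn’t come across them, the r-th factorial moment of a non-negative integer-valued random variable X is

E\left[X(X-1)(X-2)\cdots(X-r+1)\right]

For a Poisson distribution with mean \lambda this takes the particularly simple value \lambda^r, which is one reason factorial moments are so convenient for counts-in-cells work.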

Uncertain Attitudes

Posted in Bad Statistics, Politics with tags , , , , on May 28, 2014 by telescoper

It’s been a while since I posted anything in the bad statistics file, but an article in today’s Grauniad has now given me an opportunity to rectify that omission.

The piece concerned, entitled Racism on the rise in Britain, is based on some new data from the British Social Attitudes survey; the full report can be found here (PDF). The main result is shown in this graph:

[Graph: fraction of British Social Attitudes respondents describing themselves as prejudiced, year by year, with a five-year moving average]

The version of this plot shown in the Guardian piece has the smoothed long-term trend (the blue curve, based on a five-year moving average of the data and clearly generally downward since 1986) removed.

In any case the report, as is sadly almost always the case in surveys of this kind, neglects any mention of the statistical uncertainty in the survey. In fact the last point is based on a sample of 2149 respondents. Suppose the fraction of the population describing themselves as having some prejudice is p. For a sample of size n with x respondents indicating that they describe themselves as “very prejudiced or a little prejudiced” then one can straightforwardly estimate p \simeq x/n. So far so good, as long as there is no bias induced by the form of the question asked nor in the selection of the sample…

However, a little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of p in terms of the sampling error:

\sigma = \sqrt{\frac{p(1-p)}{n}}

For the sample size given, and a value p \simeq 0.35 this amounts to a standard error of about 1%. About 95% of samples drawn from a population in which the true fraction is p will yield an estimate within p \pm 2\sigma, i.e. within about 2% of the true figure. This is consistent with the “noise” on the unsmoothed curve and it shows that the year-on-year variation shown in the unsmoothed graph is largely attributable to sampling uncertainty; note that the sample sizes vary from year to year too. The results for 2012 and 2013 are 26% and 30% exactly, which differ by 4% and are therefore explicable solely in terms of sampling fluctuations.

I don’t know whether racial prejudice is on the rise in the UK or not, nor even how accurately such attitudes are measured by such surveys in the first place, but there’s no evidence in these data of any significant change over the past year. Given the behaviour of the smoothed data however, there is evidence that in the very long term the fraction of population identifying themselves as prejudiced is actually falling.

Newspapers however rarely let proper statistics get in the way of a good story, even to the extent of removing evidence that contradicts their own prejudice.

Galaxies, Glow-worms and Chicken Eyes

Posted in Bad Statistics, The Universe and Stuff with tags , , , , , , , , on February 26, 2014 by telescoper

I just came across a news item based on a research article in Physical Review E by Jiao et al. with the abstract:

Optimal spatial sampling of light rigorously requires that identical photoreceptors be arranged in perfectly regular arrays in two dimensions. Examples of such perfect arrays in nature include the compound eyes of insects and the nearly crystalline photoreceptor patterns of some fish and reptiles. Birds are highly visual animals with five different cone photoreceptor subtypes, yet their photoreceptor patterns are not perfectly regular. By analyzing the chicken cone photoreceptor system consisting of five different cell types using a variety of sensitive microstructural descriptors, we find that the disordered photoreceptor patterns are “hyperuniform” (exhibiting vanishing infinite-wavelength density fluctuations), a property that had heretofore been identified in a unique subset of physical systems, but had never been observed in any living organism. Remarkably, the patterns of both the total population and the individual cell types are simultaneously hyperuniform. We term such patterns “multihyperuniform” because multiple distinct subsets of the overall point pattern are themselves hyperuniform. We have devised a unique multiscale cell packing model in two dimensions that suggests that photoreceptor types interact with both short- and long-ranged repulsive forces and that the resultant competition between the types gives rise to the aforementioned singular spatial features characterizing the system, including multihyperuniformity. These findings suggest that a disordered hyperuniform pattern may represent the most uniform sampling arrangement attainable in the avian system, given intrinsic packing constraints within the photoreceptor epithelium. In addition, they show how fundamental physical constraints can change the course of a biological optimization process. Our results suggest that multihyperuniform disordered structures have implications for the design of materials with novel physical properties and therefore may represent a fruitful area for future research.

The point made in the paper is that the photoreceptors found in the eyes of chickens possess a property called disordered hyperuniformity, which means that they appear disordered on small scales but exhibit order over large distances. Here’s an illustration:

[Figure: chicken cone photoreceptor distribution (left) and the computer-simulation model (right)]

It’s an interesting paper, but I’d like to quibble about something it says in the accompanying news story. The caption with the above diagram states

Left: visual cell distribution in chickens; right: a computer-simulation model showing pretty much the exact same thing. The colored dots represent the centers of the chicken’s eye cells.

Well, as someone who has spent much of his research career trying to discern and quantify patterns in collections of points – in my case they tend to be galaxies rather than photoreceptors – I find it difficult to defend the use of the phrase “pretty much the exact same thing”. It’s notoriously difficult to look at realizations of stochastic point processes and decide whether they are statistically similar or not. For that you generally need quite sophisticated mathematical analysis. In fact, to my eye, the two images above don’t look at all like “pretty much the exact same thing”. I’m not at all sure that the model works as well as it is claimed, as the statistical analysis presented in the paper is relatively simple: I’d need to see some more quantitative measures of pattern morphology and clustering, especially higher-order correlation functions, before I’m convinced.
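To give a flavour of what I mean by a quantitative comparison, here is a minimal sketch (in Python, with made-up uniform-random points standing in for the real data) that compares two point patterns through their nearest-neighbour distance distributions; this is just one simple summary statistic, and a serious analysis would add two-point and higher-order correlation functions as well:

import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def nearest_neighbour_distances(points):
    # Distance from each point to its nearest neighbour
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)   # k=2 because the nearest point is the point itself
    return d[:, 1]

pattern_a = rng.uniform(size=(500, 2))   # stand-in for the observed pattern
pattern_b = rng.uniform(size=(500, 2))   # stand-in for the simulated pattern

d_a = nearest_neighbour_distances(pattern_a)
d_b = nearest_neighbour_distances(pattern_b)

print(ks_2samp(d_a, d_b))                # a crude test of whether the two distributions differ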

Anyway, all this reminded me of a very old post of mine about the difficulty of discerning patterns in distributions of points. Take the two (not very well scanned)  images here as examples:

[Two scanned point patterns, side by side]

You will have to take my word for it that one of these is a realization of a two-dimensional Poisson point process (which is, in a well-defined sense completely “random”) and the other contains spatial correlations between the points. One therefore has a real pattern to it, and one is a realization of a completely unstructured random process.

I sometimes show this example in popular talks and get the audience to vote on which one is the random one. The vast majority usually think that the one on the right is the one that is random and the left one is the one with structure to it. It is not hard to see why. The right-hand pattern is very smooth (what one would naively expect for a constant probability of finding a point at any position in the two-dimensional space), whereas the left one seems to offer a profusion of linear, filamentary features and densely concentrated clusters.

In fact, it’s the left picture that was generated by a Poisson process using a Monte Carlo random number generator. All the structure that is visually apparent is imposed by our own sensory apparatus, which has evolved to be so good at discerning patterns that it finds them when they’re not even there!

The right process is also generated by a Monte Carlo technique, but the algorithm is more complicated. In this case the presence of a point at some location suppresses the probability of having other points in the vicinity. Each event has a zone of avoidance around it; the points are therefore anticorrelated. The result of this is that the pattern is much smoother than a truly random process should be. In fact, this simulation has nothing to do with galaxy clustering really. The algorithm used to generate it was meant to mimic the behaviour of glow-worms (a kind of beetle) which tend to eat each other if they get too close. That’s why they spread themselves out in space more uniformly than in the random pattern. In fact, the tendency displayed in this image of the points to spread themselves out more smoothly than a random distribution is in some ways reminiscent of the chicken eye problem.
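For anyone who wants to generate similar pictures themselves, here is a rough sketch in Python. The minimum-separation rule is just one simple way of producing a “zone of avoidance”; it is not the actual glow-worm algorithm used to make the figure above:

import numpy as np

rng = np.random.default_rng(0)
n_points = 300

# A "completely random" (Poisson) pattern in the unit square
poisson_pattern = rng.uniform(size=(n_points, 2))

# An anticorrelated pattern: a candidate point is rejected if it falls
# within a minimum distance of any point already accepted
def hard_core_pattern(n, r_min, rng):
    pts = []
    while len(pts) < n:
        candidate = rng.uniform(size=2)
        if all(np.hypot(*(candidate - p)) > r_min for p in pts):
            pts.append(candidate)
    return np.array(pts)

anticorrelated_pattern = hard_core_pattern(n_points, r_min=0.03, rng=rng)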

The moral of all this is that people are actually pretty hopeless at understanding what “really” random processes look like, probably because the word random is used so often in very imprecise ways and they don’t know what it means in a specific context like this. The point about random processes, even simpler ones like repeated tossing of a coin, is that coincidences happen much more frequently than one might suppose. By the same token, people are also pretty hopeless at figuring out whether two distributions of points resemble each other in some kind of statistical sense, because that can only be made precise if one defines some specific quantitative measure of clustering pattern, which is not easy to do.

Double Indemnity – Statistics Noir

Posted in Film with tags , , , , on February 20, 2014 by telescoper

The other day I decided to treat myself by watching a DVD of the  film  Double Indemnity. It’s a great movie for many reasons, not least because when it was released in 1944 it immediately established much of the language and iconography of the genre that has come to be known as film noir, which I’ve written about on a number of occasions on this blog; see here for example. Like many noir movies the plot revolves around the destructive relationship between a femme fatale and male anti-hero and, as usual for the genre, the narrative strategy involves use of flashbacks and a first-person voice-over. The photography is done in such a way as to surround the protagonists with dark, threatening shadows. In fact almost every interior in the film (including the one shown in the clip below) has Venetian blinds for this purpose. These chiaroscuro lighting effects charge even the most mundane encounters with psychological tension or erotic suspense.

[Still from Double Indemnity]

To the left is an example still from Double Indemnity which shows a number of trademark features. The shadows cast by venetian blinds on the wall, the cigarette being smoked by Barbara Stanwyck and the curious construction of the mise en scene are all very characteristic of the style. What is even more wonderful about this particular shot however is the way the shadow of Fred McMurray’s character enters the scene before he does. The Barbara Stanwyck character is just about to shoot him with a pearl-handled revolver; this image suggests that he is already on his way to the underworld as he enters the room.

I won’t repeat any more of the things I’ve already said about this great movie, but I will say a couple of things that struck me watching it again at the weekend. The first is that even after having seen it dozens of times over the years I still found it intense and gripping. The other is that I think one of the contributing factors to its greatness which is not often discussed is a wonderful cameo by Edward G. Robinson, who steals every scene he appears in as the insurance investigator Barton Keyes. Here’s an example, which I’ve chosen because it provides an interesting illustration of the scientific use of statistical information, another theme I’ve visited frequently on this blog:

Statistical Challenges in 21st Century Cosmology

Posted in The Universe and Stuff with tags , , on December 2, 2013 by telescoper

I received the following email about a forthcoming conference which is probably of interest to a (statistically) significant number of readers of this blog so I thought I’d share it here with an encouragement to attend:

–o–

IAUS306 – Statistical Challenges in 21st Century Cosmology

We are pleased to announce the IAU Symposium 306 on Statistical Challenges in 21st Century Cosmology, which will take place in Lisbon, Portugal from 26-29 May 2014, with a tutorial day on 25 May.  Apologies if you receive this more than once.

Full exploitation of the very large surveys of the Cosmic Microwave Background, Large-Scale Structure, weak gravitational lensing and future 21cm surveys will require use of the best statistical techniques to answer the major cosmological questions of the 21st century, such as the nature of Dark Energy and gravity.

Thus it is timely to emphasise the importance of inference in cosmology, and to promote dialogue between astronomers and statisticians. This has been recognized by the creation of the IAU Working Group in Astrostatistics and Astroinformatics in 2012.

IAU Symposium 306 will be devoted to problems of inference in cosmology, from data processing to methods and model selection, and will have an important element of cross-disciplinary involvement from the statistics communities.

Keynote speakers

• Cosmic Microwave Background :: Graca Rocha (USA / Portugal)

• Weak Gravitational Lensing :: Masahiro Takada (Japan)

• Combining probes :: Anais Rassat (Switzerland)

• Statistics of Fields :: Sabino Matarrese (Italy)

• Large-scale structure :: Licia Verde (Spain)

• Bayesian methods :: David van Dyk (UK)

• 21cm cosmology :: Mario Santos (South Africa / Portugal)

• Massive parameter estimation :: Ben Wandelt (France)

• Overwhelmingly large datasets :: Alex Szalay (USA)

• Errors and nonparametric estimation :: Aurore Delaigle (Australia)

You are invited to submit an abstract for a contributed talk or poster for the meeting, via the meeting website. The deadline for abstract submission is 21st March 2014. Full information on the scientific rationale, programme, proceedings, critical dates, and local arrangements will be on the symposium web site here.

Deadlines

13 January 2014 – Grant requests

21 March 2014 – Abstract submission

4 April 2014 – Notification of abstract acceptance

11 April 2014 – Close of registration

30 June 2014 – Manuscript submission