Archive for the Bad Statistics Category

The 3.5 keV “Line” that (probably) wasn’t…

Posted in Bad Statistics, The Universe and Stuff on July 26, 2016 by telescoper

About a year ago I wrote a blog post about a mysterious “line” in the X-ray spectra of galaxy clusters corresponding to an energy of around 3.5 keV. The primary reference for the claim is a paper by Bulbul et al which is, of course, freely available on the arXiv.

The key graph from that paper is this:

[Figure: stacked XMM-Newton spectrum from Bulbul et al.]

The claimed feature – it stretches the imagination considerably to call it a “line” – is shown in red. No, I’m not particularly impressed either, but this is what passes for high-quality data in X-ray astronomy!

Anyway, there has just appeared on the arXiv a paper by the Hitomi Collaboration describing what is basically the only set of science results that the Hitomi satellite managed to obtain before it fell to bits earlier this year. These were observations of the Perseus Cluster.

Here is the abstract:

High-resolution X-ray spectroscopy with Hitomi was expected to resolve the origin of the faint unidentified E=3.5 keV emission line reported in several low-resolution studies of various massive systems, such as galaxies and clusters, including the Perseus cluster. We have analyzed the Hitomi first-light observation of the Perseus cluster. The emission line expected for Perseus based on the XMM-Newton signal from the large cluster sample under the dark matter decay scenario is too faint to be detectable in the Hitomi data. However, the previously reported 3.5 keV flux from Perseus was anomalously high compared to the sample-based prediction. We find no unidentified line at the reported flux level. The high flux derived with XMM MOS for the Perseus region covered by Hitomi is excluded at >3-sigma within the energy confidence interval of the most constraining previous study. If XMM measurement uncertainties for this region are included, the inconsistency with Hitomi is at a 99% significance for a broad dark-matter line and at 99.7% for a narrow line from the gas. We do find a hint of a broad excess near the energies of high-n transitions of S XVI (E=3.44 keV rest-frame) – a possible signature of charge exchange in the molecular nebula and one of the proposed explanations for the 3.5 keV line. While its energy is consistent with XMM pn detections, it is unlikely to explain the MOS signal. A confirmation of this interesting feature has to wait for a more sensitive observation with a future calorimeter experiment.

And here is the killer plot:

[Figure: Hitomi spectrum of the Perseus cluster]

The spectrum looks amazingly detailed, which makes the demise of Hitomi all the more tragic, but the 3.5 keV line is conspicuous by its absence. So there you are: yet another supposedly significant feature that excited a huge amount of interest turns out to be nothing of the sort. To be fair, as the abstract states, the anomalous line was only seen by stacking spectra of different clusters and might still be there but too faint to be seen in an individual cluster spectrum. Nevertheless I’d say the probability of there being any feature at 3.5 keV has decreased significantly after this observation.

P.S. rumours suggest that the 750 GeV diphoton “excess” found at the Large Hadron Collider may be about to meet a similar fate.

The Distribution of Cauchy

Posted in Bad Statistics, The Universe and Stuff on April 6, 2016 by telescoper

Back into the swing of teaching after a short break, I have been doing some lectures this week about complex analysis to theoretical physics students. The name of the brilliant French mathematician Augustin Louis Cauchy (1789-1857) crops up very regularly in this branch of mathematics, e.g. in the Cauchy integral formula and the Cauchy-Riemann conditions. This reminded me of some old jottings I made about the Cauchy distribution, which I never used in the publication to which they related, so I thought I’d just quickly pop the main idea on here in the hope that some amongst you might find it interesting and/or amusing.

What sparked this off is that the simplest cosmological models (including the particular one we now call the standard model) assume that the primordial density fluctuations we see imprinted in the pattern of temperature fluctuations in the cosmic microwave background and which we think gave rise to the large-scale structure of the Universe through the action of gravitational instability, were distributed according to Gaussian statistics (as predicted by the simplest versions of the inflationary universe theory).  Departures from Gaussianity would therefore, if found, yield important clues about physics beyond the standard model.

Cosmology isn’t the only place where Gaussian (normal) statistics apply. In fact they arise  fairly generically,  in circumstances where variation results from the linear superposition of independent influences, by virtue of the Central Limit Theorem. Thermal noise in experimental detectors is often treated as following Gaussian statistics, for example.

The Gaussian distribution has some nice properties that make it possible to place meaningful bounds on the statistical accuracy of measurements made in the presence of Gaussian fluctuations. For example, we all know that the margin of error on the determination of the mean value of a quantity from a sample of n independent Gaussian-distributed measurements varies as 1/\sqrt{n}; the larger the sample, the more accurately the global mean can be known. In the cosmological context this is basically why mapping a larger volume of space can lead, for instance, to a more accurate determination of the overall mean density of matter in the Universe.

However, although the Gaussian assumption often applies it doesn’t always apply, so if we want to think about non-Gaussian effects we have to think also about how well we can do statistical inference if we don’t have Gaussianity to rely on.

That’s why I was playing around with the peculiarities of the Cauchy distribution. This distribution comes up in a variety of real physics problems so it isn’t an artificially pathological case. Imagine you have two independent variables X and Y each of which has a Gaussian distribution with zero mean and unit variance. The ratio Z=X/Y has a probability density function of the form

p(z)=\frac{1}{\pi(1+z^2)},

which is a Cauchy distribution. There’s nothing at all wrong with this as a distribution – it’s not singular anywhere and integrates to unity as a pdf should. However, it does have a peculiar property that none of its moments is finite, not even the mean value!
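To see this numerically, here is a minimal Python sketch (assuming NumPy and SciPy are available; the seed and sample size are arbitrary choices) that draws the ratio of two independent standard normal variates and checks the result against the standard Cauchy distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000

# Ratio of two independent standard normal variates
x = rng.standard_normal(n)
y = rng.standard_normal(n)
z = x / y

# Compare the empirical distribution of z with the standard Cauchy cdf,
# i.e. the distribution with density p(z) = 1 / (pi * (1 + z^2)).
ks_stat, p_value = stats.kstest(z, stats.cauchy.cdf)
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")
```

The Kolmogorov-Smirnov test should report no significant discrepancy, as expected.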

Following on from this property is the fact that Cauchy-distributed quantities violate the Central Limit Theorem. If we take n independent Gaussian variables then the distribution of the sum X_1+X_2+\ldots+X_n has the normal form, but this is also true (for large enough n) for the sum of n independent variables drawn from any distribution, as long as that distribution has finite variance.

The Cauchy distribution has infinite variance, so the distribution of the sum of independent Cauchy-distributed quantities Z_1+Z_2+\ldots+Z_n doesn’t tend to a Gaussian. In fact the distribution of the sum of any number of independent Cauchy variates is itself a Cauchy distribution. Moreover, the distribution of the mean of a sample of size n does not depend on n for Cauchy variates. This means that taking a larger sample doesn’t reduce the margin of error on the mean value!
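The contrast with the Gaussian case is easy to see by simulation. Here is an illustrative sketch (NumPy assumed; the sample sizes and number of trials are arbitrary) comparing the scatter of sample means of Gaussian and Cauchy variates as n increases: the Gaussian spread shrinks like 1/\sqrt{n}, while the Cauchy spread doesn’t budge.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000

def iqr(a):
    """Interquartile range: a robust measure of spread."""
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

for n in (10, 100, 1000, 10000):
    # Sample means of n Gaussian variates: spread shrinks like 1/sqrt(n)
    gauss_means = rng.standard_normal((n_trials, n)).mean(axis=1)
    # Sample means of n Cauchy variates: spread does not shrink at all
    cauchy_means = rng.standard_cauchy((n_trials, n)).mean(axis=1)
    print(f"n = {n:5d}   IQR of Gaussian means = {iqr(gauss_means):.3f}   "
          f"IQR of Cauchy means = {iqr(cauchy_means):.3f}")
```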

This was essentially the point I made in a previous post about the dangers of using standard statistical techniques – which usually involve the Gaussian assumption – to distributions of quantities formed as ratios.

We cosmologists should be grateful that we don’t seem to live in a Universe whose fluctuations are governed by Cauchy, rather than (nearly) Gaussian, statistics. Measuring more of the Universe wouldn’t be any use in determining its global properties, as we’d always be dominated by cosmic variance.

The Insignificance of ORB

Posted in Bad Statistics on April 5, 2016 by telescoper

A piece about opinion polls ahead of the EU Referendum which appeared in today’s Daily Torygraph has spurred me on to make a quick contribution to my bad statistics folder.

The piece concerned includes the following statement:

David Cameron’s campaign to warn voters about the dangers of leaving the European Union is beginning to win the argument ahead of the referendum, a new Telegraph poll has found.

The exclusive poll found that the “Remain” campaign now has a narrow lead after trailing last month, in a sign that Downing Street’s tactic – which has been described as “Project Fear” by its critics – is working.

The piece goes on to explain

The poll finds that 51 per cent of voters now support Remain – an increase of 4 per cent from last month. Leave’s support has decreased five points to 44 per cent.

This conclusion is based on the results of a survey by ORB in which the number of participants was 800. Yes, eight hundred.

How much can we trust this result on statistical grounds?

Suppose the fraction of the population having the intention to vote in a particular way in the EU referendum is p. For a sample of size n with x respondents indicating that they intend to vote that way, one can straightforwardly estimate p \simeq x/n. So far so good, as long as there is no bias induced by the form of the question asked or by the selection of the sample, which, given the fact that such polls have been all over the place, seems rather unlikely.

A little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of p in terms of the sampling error:

\sigma = \sqrt{\frac{p(1-p)}{n}}

For the sample size of 800 given, and an actual value p \simeq 0.5 this amounts to a standard error of about 2%. About 95% of samples drawn from a population in which the true fraction is p will yield an estimate within p \pm 2\sigma, i.e. within about 4% of the true figure. In other words the typical variation between two samples drawn from the same underlying population is about 4%. In other other words, the change reported between the two ORB polls mentioned above can be entirely explained by sampling variation and does not at all imply any systematic change of public opinion between the two surveys.
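For anyone who wants to check the arithmetic, here it is in a few lines of Python (standard library only, using the sample size and assumed fraction quoted above):

```python
from math import sqrt

n = 800   # ORB sample size
p = 0.5   # assumed true fraction, roughly right for a close race

sigma = sqrt(p * (1 - p) / n)                            # standard error on the estimated fraction
print(f"standard error       = {100 * sigma:.1f}%")      # about 1.8%, i.e. roughly 2%
print(f"95% margin (2*sigma) = {100 * 2 * sigma:.1f}%")  # about 3.5%, i.e. roughly 4%
```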

I need hardly point out that in a two-horse race (between “Remain” and “Leave”) an increase of 4% in the Remain vote corresponds to a decrease in the Leave vote by the same 4% so a 50-50 population vote can easily generate a margin as large as  54-46 in such a small sample.

Why do pollsters bother with such tiny samples? With such a large margin of error they are basically meaningless.

I object to the characterization of the Remain campaign as “Project Fear” in any case. I think it’s entirely sensible to point out the serious risks that an exit from the European Union would generate for the UK in loss of trade, science funding, financial instability, and indeed the near-inevitable secession of Scotland. But in any case this poll doesn’t indicate that anything is succeeding in changing anything other than statistical noise.

Statistical illiteracy is as widespread amongst politicians as it is amongst journalists, but the fact that silly reports like this are commonplace doesn’t make them any less annoying. After all, the idea of sampling uncertainty isn’t all that difficult to understand. Is it?

And with so many more important things going on in the world that deserve better press coverage than they are getting, why does a “quality” newspaper waste its valuable column inches on this sort of twaddle?

More fine structure silliness …

Posted in Bad Statistics, The Universe and Stuff on March 17, 2016 by telescoper

Wondering what had happened to claims of a spatial variation of the fine-structure constant?

Well, they’re still around but there’s still very little convincing evidence to support them, as this post explains…

A Bump at the Large Hadron Collider

Posted in Bad Statistics, The Universe and Stuff on December 16, 2015 by telescoper

Very busy, so just a quickie today. Yesterday the good folk at the Large Hadron Collider announced their latest batch of results. You can find the complete set from the CMS experiment here and from ATLAS here.

The result that everyone is talking about is shown in the following graph, which shows the number of diphoton events as a function of energy:

[Figure: ATLAS diphoton spectrum]

Attention is focussing on the apparent “bump” at around 750 GeV; you can find an expert summary by a proper particle physicist here and another one here.

It is claimed that the “significance level” of this “detection” is 3.6σ. I won’t comment on that precise statement partly because it depends on the background signal being well understood but mainly because I don’t think this is the right language in which to express such a result in the first place. Experimental particle physicists do seem to be averse to doing proper Bayesian analyses of their data.

However if you take the claim in the way such things are usually presented, it is roughly equivalent to a statement that the odds against this being merely a chance fluctuation are greater than 6000:1. If any particle physicists out there are willing to wager £6000 against £1 of mine that this result will be confirmed by future measurements then I’d happily take them up on that bet!
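For completeness, here is how the conversion from a one-tailed 3.6σ significance to odds of roughly 6000:1 is usually done (a sketch assuming SciPy; note that this is precisely the frequentist reading complained about above, not a Bayesian calculation):

```python
from scipy.stats import norm

significance = 3.6                # quoted significance in sigma
p_value = norm.sf(significance)   # one-tailed tail probability of a Gaussian
odds_against_fluke = (1 - p_value) / p_value

print(f"p-value = {p_value:.2e}")                # about 1.6e-4
print(f"odds = {odds_against_fluke:,.0f} to 1")  # about 6,000 to 1
```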

P.S. Entirely predictably there are 10 theory papers on today’s arXiv offering explanations of the alleged bump, none of which says that it’s a noise feature…


Gamma-Ray Bursts and the Cosmological Principle

Posted in Astrohype, Bad Statistics, The Universe and Stuff on September 13, 2015 by telescoper

There’s been a reasonable degree of hype surrounding a paper published in Monthly Notices of the Royal Astronomical Society (and available on the arXiv here). The abstract of this paper reads:

According to the cosmological principle (CP), Universal large-scale structure is homogeneous and isotropic. The observable Universe, however, shows complex structures even on very large scales. The recent discoveries of structures significantly exceeding the transition scale of 370 Mpc pose a challenge to the CP. We report here the discovery of the largest regular formation in the observable Universe; a ring with a diameter of 1720 Mpc, displayed by 9 gamma-ray bursts (GRBs), exceeding by a factor of 5 the transition scale to the homogeneous and isotropic distribution. The ring has a major diameter of 43° and a minor diameter of 30° at a distance of 2770 Mpc in the 0.78 < z < 0.86 redshift range, with a probability of 2 × 10−6 of being the result of a random fluctuation in the GRB count rate. Evidence suggests that this feature is the projection of a shell on to the plane of the sky. Voids and string-like formations are common outcomes of large-scale structure. However, these structures have maximum sizes of 150 Mpc, which are an order of magnitude smaller than the observed GRB ring diameter. Evidence in support of the shell interpretation requires that temporal information of the transient GRBs be included in the analysis. This ring-shaped feature is large enough to contradict the CP. The physical mechanism responsible for causing it is unknown.

The so-called “ring” can be seen here:
[Figure: sky positions of the nine GRBs forming the claimed ring]

In my opinion it’s not a ring at all, but an outline of Australia. What’s the probability of a random distribution of dots looking exactly like that? Is it really evidence for the violation of the Cosmological Principle, or for the existence of the Cosmic Antipodes?

For those of you who don’t get that gag, a cosmic antipode occurs in, e.g., closed Friedmann cosmologies in which the spatial sections take the form of a hypersphere (or 3-sphere). The antipode is the point diametrically opposite the observer on this hypersurface, just as it is for the surface of a 2-sphere such as the Earth. The antipode is only visible if it lies inside the observer’s horizon, a possibility which is ruled out for standard cosmologies by current observations. I’ll get my coat.

Anyway, joking apart, the claims in the abstract of the paper are extremely strong but the statistical arguments supporting them are deeply unconvincing. Indeed, I am quite surprised the paper passed peer review. For a start there’s a basic problem of “a posteriori” reasoning here. We see a group of objects that form a ring (or an outline of Australia, if you prefer) and then are surprised that such a structure appears so rarely in simulations of our favourite model. But all specific configurations of points are rare in a Poisson point process. We would be surprised to see a group of dots in the shape of a pretzel too, or the face of Jesus, but that doesn’t mean that such an occurrence has any significance. It’s an extraordinarily difficult problem to put a meaningful measure on the space of geometrical configurations, and this paper doesn’t succeed in doing that.
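To see how easily the eye invents structure, here is a throwaway sketch (NumPy and Matplotlib assumed; the number of points is arbitrary) that scatters points independently and uniformly at random – a process containing no structure whatsoever – of the kind shown in the figure from the old post linked below:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Points drawn independently and uniformly at random in the unit square:
# by construction there is no structure here at all.
n_points = 500
x, y = rng.uniform(size=(2, n_points))

plt.figure(figsize=(5, 5))
plt.scatter(x, y, s=5, color="k")
plt.gca().set_aspect("equal")
plt.title("Uniform random points: any 'structure' you see is in your head")
plt.show()
```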

For a further discussion of the tendency that people have to see patterns where none exist, take a look at this old post from which I’ve taken this figure which is generated by drawing points independently and uniformly randomly:

[Figure: points drawn independently and uniformly at random]

I can see all kinds of shapes in this pattern, but none of them has any significance (other than psychological). In a mathematically well-defined sense there is no structure in this pattern! Add to that difficulty the fact that so few points are involved and I think it becomes very clear that this “structure” doesn’t provide any evidence at all for the violation of the Cosmological Principle. Indeed it seems neither do the authors. The very last paragraph of the paper is as follows:

GRBs are very rare events superimposed on the cosmic web identified by superclusters. Because of this, the ring is probably not a real physical structure. Further studies are needed to reveal whether or not the Ring could have been produced by a low-frequency spatial harmonic of the large-scale matter density distribution and/or of universal star forming activity.

It’s a pity that this note of realism didn’t make it into either the abstract or, more importantly, the accompanying press release. Peer review will never be perfect, but we can do without this sort of hype. Anyway, I confidently predict that a proper refutation will appear shortly….

P.S. For a more technical discussion of the problems of inferring the presence of large structures from sparsely-sampled distributions, see here.

Adventures with the One-Point Distribution Function

Posted in Bad Statistics, Books, Talks and Reviews, The Universe and Stuff on September 1, 2015 by telescoper

As I promised a few people, here are the slides I used for my talk earlier today at the meeting I am attending. Actually I was given only 30 minutes and used up a lot of that time on two things that haven’t got much to do with the title. One was a quiz to identify the six famous astronomers (or physicists) who had made important contributions to statistics (Slide 2) and the other was on some issues that arose during the discussion session yesterday evening. I didn’t in the end talk much about the topic given in the title, which was about how, despite learning a huge amount about certain aspects of galaxy clustering, we are still far from a good understanding of the one-point distribution of density fluctuations. I guess I’ll get the chance to talk more about that in the near future!

P.S. I think the six famous faces should be easy to identify, so there are no prizes but please feel free to guess through the comments box!

Statistics in Astronomy

Posted in Bad Statistics, The Universe and Stuff on August 29, 2015 by telescoper

A few people at the STFC Summer School for new PhD students in Cardiff last week asked if I could share the slides. I’ve given the Powerpoint presentation to the organizers so presumably they will make the presentation available, but I thought I’d include it here too. I’ve corrected a couple of glitches I introduced trying to do some last-minute hacking just before my talk!

As you will infer from the slides, I decided not to compress an entire course on statistical methods into a one-hour talk. Instead I tried to focus on basic principles, primarily to get across the importance of Bayesian methods for tackling the usual tasks of hypothesis testing and parameter estimation. The Bayesian framework offers the only mathematically consistent way of tackling such problems and should therefore be the preferred method of using data to test theories. Of course if you have data but no theory, or a theory but no data, any method is going to struggle. And if you have neither data nor theory you’d be better off getting one or the other before trying to do anything. Failing that, you could always go down the pub.

Rather than just leave it at that I thought I’d append some stuff  I’ve written about previously on this blog, many years ago, about the interesting historical connections between Astronomy and Statistics.

Once the basics of mathematical probability had been worked out, it became possible to think about applying probabilistic notions to problems in natural philosophy. Not surprisingly, many of these problems were of astronomical origin but, on the way, the astronomers that tackled them also derived some of the basic concepts of statistical theory and practice. Statistics wasn’t just something that astronomers took off the shelf and used; they made fundamental contributions to the development of the subject itself.

The modern subject we now know as physics really began in the 16th and 17th century, although at that time it was usually called Natural Philosophy. The greatest early work in theoretical physics was undoubtedly Newton’s great Principia, published in 1687, which presented his idea of universal gravitation which, together with his famous three laws of motion, enabled him to account for the orbits of the planets around the Sun. But majestic though Newton’s achievements undoubtedly were, I think it is fair to say that the originator of modern physics was Galileo Galilei.

Galileo wasn’t as much of a mathematical genius as Newton, but he was highly imaginative, versatile and (very much unlike Newton) had an outgoing personality. He was also an able musician, fine artist and talented writer: in other words a true Renaissance man.  His fame as a scientist largely depends on discoveries he made with the telescope. In particular, in 1610 he observed the four largest satellites of Jupiter, the phases of Venus and sunspots. He immediately leapt to the conclusion that not everything in the sky could be orbiting the Earth and openly promoted the Copernican view that the Sun was at the centre of the solar system with the planets orbiting around it. The Catholic Church was resistant to these ideas. He was hauled up in front of the Inquisition and placed under house arrest. He died in the year Newton was born (1642).

These aspects of Galileo’s life are probably familiar to most readers, but hidden away among scientific manuscripts and notebooks is an important first step towards a systematic method of statistical data analysis. Galileo performed numerous experiments, though he certainly didn’t carry out the one with which he is most commonly credited. He did establish that the speed at which bodies fall is independent of their weight, not by dropping things off the leaning tower of Pisa but by rolling balls down inclined slopes. In the course of his numerous forays into experimental physics Galileo realised that however careful he was taking measurements, the simplicity of the equipment available to him left him with quite large uncertainties in some of the results. He was able to estimate the accuracy of his measurements using repeated trials and sometimes ended up with a situation in which some measurements had larger estimated errors than others. This is a common occurrence in many kinds of experiment to this day.

Very often the problem we have in front of us is to measure two variables in an experiment, say X and Y. It doesn’t really matter what these two things are, except that X is assumed to be something one can control or measure easily and Y is whatever it is the experiment is supposed to yield information about. In order to establish whether there is a relationship between X and Y one can imagine a series of experiments where X is systematically varied and the resulting Y measured.  The pairs of (X,Y) values can then be plotted on a graph like the example shown in the Figure.

[Figure: example scatter plot of measured (X,Y) pairs showing a roughly linear trend]

In this example it certainly looks like there is a straight line linking Y and X, but with small deviations above and below the line caused by the errors in measurement of Y. You could quite easily take a ruler and draw a line of “best fit” by eye through these measurements. I spent many a tedious afternoon in the physics labs doing this sort of thing when I was at school. Ideally, though, what one wants is some procedure for fitting a mathematical function to a set of data automatically, without requiring any subjective intervention or artistic skill. Galileo found a way to do this. Imagine you have a set of pairs of measurements (x_i, y_i) to which you would like to fit a straight line of the form y=mx+c. One way to do it is to find the line that minimizes some measure of the spread of the measured values around the theoretical line. The way Galileo did this was to work out the sum of the differences between the measured y_i and the predicted values mx_i+c at the measured values x=x_i. He used the absolute difference |y_i-(mx_i+c)| so that the resulting optimal line would, roughly speaking, have as many of the measured points above it as below it. This general idea is now part of the standard practice of data analysis, and as far as I am aware, Galileo was the first scientist to grapple with the problem of dealing properly with experimental error.
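As a modern illustration of Galileo’s idea (just a sketch with made-up data, assuming NumPy and SciPy – not his actual procedure), one can fit a straight line by minimizing the sum of absolute deviations numerically:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Made-up data: a straight line y = 2x + 1 with added measurement noise
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

def total_abs_deviation(params):
    """Sum of absolute deviations |y_i - (m x_i + c)|, the quantity Galileo minimized."""
    m, c = params
    return np.sum(np.abs(y - (m * x + c)))

result = minimize(total_abs_deviation, x0=[1.0, 0.0], method="Nelder-Mead")
m_fit, c_fit = result.x
print(f"least-absolute-deviations fit: m = {m_fit:.3f}, c = {c_fit:.3f}")
```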


The method used by Galileo was not quite the best way to crack the puzzle, but he had it almost right. It was again an astronomer who provided the missing piece and gave us essentially the same method used by statisticians (and astronomers) today.

Carl Friedrich Gauss was undoubtedly one of the greatest mathematicians of all time, so it might be objected that he wasn’t really an astronomer. Nevertheless he was director of the Observatory at Göttingen for most of his working life and was a keen observer and experimentalist. In 1809, he developed Galileo’s ideas into the method of least-squares, which is still used today for curve fitting.

This approach follows basically the same procedure but minimizes the sum of [y_i-(mx_i+c)]^2 rather than |y_i-(mx_i+c)|. This leads to a much more elegant mathematical treatment of the resulting deviations – the “residuals”. Gauss also did fundamental work on the mathematical theory of errors in general. The normal distribution is often called the Gaussian curve in his honour.
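For comparison, here is the corresponding least-squares fit to the same kind of made-up data (again only a sketch; for a straight line NumPy’s polyfit gives the closed-form solution):

```python
import numpy as np

rng = np.random.default_rng(3)

# The same kind of made-up data as before
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Least squares minimizes the sum of [y_i - (m x_i + c)]^2;
# for a straight line the solution can be written down in closed form.
m_fit, c_fit = np.polyfit(x, y, deg=1)
residuals = y - (m_fit * x + c_fit)

print(f"least-squares fit: m = {m_fit:.3f}, c = {c_fit:.3f}")
print(f"rms residual = {np.sqrt(np.mean(residuals**2)):.3f}")
```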

After Galileo, the development of statistics as a means of data analysis in natural philosophy was dominated by astronomers. I can’t possibly go systematically through all the significant contributors, but I think it is worth devoting a paragraph or two to a few famous names.

I’ve already written on this blog about Jakob Bernoulli, whose famous book on probability was (probably) written during the 1690s. But Jakob was just one member of an extraordinary Swiss family that produced at least 11 important figures in the history of mathematics. Among them was Daniel Bernoulli who was born in 1700. Along with the other members of his famous family, he had interests that ranged from astronomy to zoology. He is perhaps most famous for his work on fluid flows which forms the basis of much of modern hydrodynamics, especially Bernoulli’s principle, which accounts for changes in pressure as a gas or liquid flows along a pipe of varying width.

But the elder Jakob’s work on gambling clearly also had some effect on Daniel, as in 1735 the younger Bernoulli published an exceptionally clever study involving the application of probability theory to astronomy. It had been known for centuries that the orbits of the planets are confined to the same narrow band of the sky as seen from Earth, called the Zodiac. This is because the Earth and the planets orbit in approximately the same plane around the Sun. The Sun’s path in the sky as the Earth revolves also follows the Zodiac. We now know that the flattened shape of the Solar System holds clues to the processes by which it formed from a rotating cloud of cosmic debris that formed a disk from which the planets eventually condensed, but this idea was not well established in the time of Daniel Bernoulli. He set himself the challenge of figuring out how likely it was that the planets would be orbiting in the same plane simply by chance, rather than because some physical process confined them to the plane of a protoplanetary disk. His conclusion? The odds against the inclinations of the planetary orbits being aligned by chance were, well, astronomical.
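A back-of-the-envelope modern version of that calculation (purely illustrative – the number of planets and the inclination spread are rough assumptions, and this is certainly not Bernoulli’s own method) might look like this:

```python
import numpy as np

# For an orbit whose pole is oriented at random, the inclination i to a
# fixed reference plane has density proportional to sin(i), so
# P(i < i_max) = 1 - cos(i_max).
n_planets = 6          # roughly the number of planets known at the time
max_inclination = 7.0  # degrees; rough spread of the real orbits about the ecliptic

p_single = 1.0 - np.cos(np.radians(max_inclination))
p_all = p_single ** n_planets   # chance that all of them land that close by accident

print(f"one orbit: {p_single:.4f}   all {n_planets}: {p_all:.1e}")
```

Whatever reasonable numbers you plug in, the answer is tiny, which was Bernoulli’s point.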

The next “famous” figure I want to mention is not at all as famous as he should be. John Michell was a Cambridge graduate in divinity who became a village rector near Leeds. His most important idea was the suggestion he made in 1783 that sufficiently massive stars could generate such a strong gravitational pull that light would be unable to escape from them.  These objects are now known as black holes (although the name was coined much later by John Archibald Wheeler). In the context of this story, however, he deserves recognition for his use of a statistical argument that the number of close pairs of stars seen in the sky could not arise by chance. He argued that they had to be physically associated, not fortuitous alignments. Michell is therefore credited with the discovery of double stars (or binaries), although compelling observational confirmation had to wait until William Herschel’s work of 1803.

It is impossible to overestimate the importance of the role played by Pierre Simon, Marquis de Laplace in the development of statistical theory. His book A Philosophical Essay on Probabilities, which began as an introduction to a much longer and more mathematical work, is probably the first time that a complete framework for the calculation and interpretation of probabilities ever appeared in print. First published in 1814, it is astonishingly modern in outlook.

Laplace began his scientific career as an assistant to Antoine Laurent Lavoisier, one of the founding fathers of chemistry. Laplace’s most important work was in astronomy, specifically in celestial mechanics, which involves explaining the motions of the heavenly bodies using the mathematical theory of dynamics. In 1796 he proposed the theory that the planets were formed from a rotating disk of gas and dust, which is in accord with the earlier assertion by Daniel Bernoulli that the planetary orbits could not be randomly oriented. In 1776 Laplace had also figured out a way of determining the average inclination of the planetary orbits.

A clutch of astronomers, including Laplace, also played important roles in the establishment of the Gaussian or normal distribution. I have already mentioned Gauss’s own part in this story, but other famous astronomers also contributed. The importance of the Gaussian distribution owes a great deal to a mathematical property called the Central Limit Theorem: the distribution of the sum of a large number of independent variables tends to have the Gaussian form. Laplace in 1810 proved a special case of this theorem, and Gauss himself also discussed it at length.

A general proof of the Central Limit Theorem was finally furnished in 1838 by another astronomer, Friedrich Wilhelm Bessel – best known to physicists for the functions named after him – who in the same year was also the first man to measure a star’s distance using the method of parallax. Finally, the name “normal” distribution was coined in 1850 by another astronomer, John Herschel, son of William Herschel.

I hope this gets the message across that the histories of statistics and astronomy are very much linked. Aspiring young astronomers are often dismayed when they enter research by the fact that they need to do a lot of statistical things. I’ve often complained that physics and astronomy education at universities usually includes almost nothing about statistics, because that is the one thing you can guarantee to use as a researcher in practically any branch of the subject.

Over the years, statistics has become regarded as slightly disreputable by many physicists, perhaps echoing Rutherford’s comment along the lines of “If your experiment needs statistics, you ought to have done a better experiment”. That’s a silly statement anyway because all experiments have some form of error that must be treated statistically, but it is particularly inapplicable to astronomy which is not experimental but observational. Astronomers need to do statistics, and we owe it to the memory of all the great scientists I mentioned above to do our statistics properly.

A Question of Entropy

Posted in Bad Statistics on August 10, 2015 by telescoper

We haven’t had a poll for a while so here’s one for your entertainment.

An article has appeared on the BBC Website entitled Web’s random numbers are too weak, warn researchers. The piece is about the techniques used to encrypt data on the internet. It’s a confusing piece, largely because of the use of the word “random” which is tricky to define; see a number of previous posts on this topic. I’ll steer clear of going over that issue again. However, there is a paragraph in the article that talks about entropy:

An unshuffled pack of cards has a low entropy, said Mr Potter, because there is little surprising or uncertain about the order the cards would be dealt. The more a pack was shuffled, he said, the more entropy it had because it got harder to be sure about which card would be turned over next.

I won’t prejudice your vote by saying what I think about this statement, but here’s a poll so I can try to see what you think.

Of course I also welcome comments via the box below…

Falsifiability versus Testability in Cosmology

Posted in Bad Statistics, The Universe and Stuff on July 24, 2015 by telescoper

A paper came out a few weeks ago on the arXiv that’s ruffled a few feathers here and there so I thought I would make a few inflammatory comments about it on this blog. The article concerned, by Gubitosi et al., has the abstract:

[Image: abstract of the paper by Gubitosi et al.]

I have to be a little careful as one of the authors is a good friend of mine. Also there’s already been a critique of some of the claims in this paper here. For the record, I agree with the critique and disagree with the original paper: I don’t think the claim below can be justified.

…we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor.

If I get a bit of time I’ll write a more technical post explaining why I think that. However, for the purposes of this post I want to take issue with a more fundamental problem I have with the philosophy of this paper, namely the way it adopts “falsifiability” as a required characteristic for a theory to be scientific. The adoption of this criterion can be traced back to the influence of Karl Popper and particularly his insistence that science is deductive rather than inductive. Part of Popper’s claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. As a non-deductivist I’ll frame my argument in the language of Bayesian (inductive) inference.

Popper rejects the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so. There is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.

I believe that deductivism fails to describe how science actually works in practice and is actually a dangerous road to start out on. It is indeed a very short ride, philosophically speaking, from deductivism (as espoused by, e.g., David Hume) to irrationalism (as espoused by, e.g., Paul Feyerabend).

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. The claimed detection of primordial B-mode polarization in the cosmic microwave background by BICEP2 was claimed by some to be “proof” of cosmic inflation, which it wouldn’t have been even if it hadn’t subsequently been shown not to be a cosmological signal at all. What we now know to be the failure of BICEP2 to detect primordial B-mode polarization doesn’t disprove inflation either.

Theories are simply more probable or less probable than the alternatives available on the market at a given time. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. The disparaging implication that scientists live only to prove themselves wrong comes from concentrating exclusively on the possibility that a theory might be found to be less probable than a challenger. In fact, evidence neither confirms nor discounts a theory; it either makes the theory more probable (supports it) or makes it less probable (undermines it). For a theory to be scientific it must be capable of having its probability influenced in this way, i.e. be amenable to being altered by incoming data (i.e. evidence). The right criterion for a scientific theory is therefore not falsifiability but testability. It follows straightforwardly from Bayes’ theorem that a testable theory will not predict all things with equal facility. Scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable penumbra that we need to supply to make it comprehensible to us. But whatever can be tested can be regarded as scientific.
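To make that last point concrete, here is a toy sketch (all numbers invented purely for illustration) of why a theory that predicts every outcome with equal facility can never be supported or undermined by data, whereas a theory that makes a sharp prediction can be:

```python
# Toy illustration: likelihood ratios for a sharp versus a flat prediction.
outcomes = ["A", "B", "C", "D"]

# A testable theory makes a sharp prediction: outcome A is strongly favoured.
p_sharp = {"A": 0.85, "B": 0.05, "C": 0.05, "D": 0.05}

# A theory that predicts every outcome with equal facility.
p_flat = {o: 0.25 for o in outcomes}

for observed in outcomes:
    bayes_factor = p_sharp[observed] / p_flat[observed]
    verdict = "supports" if bayes_factor > 1 else "undermines"
    print(f"observed {observed}: Bayes factor = {bayes_factor:.1f} ({verdict} the sharp theory)")

# Whatever is observed, the flat theory assigns it the same likelihood (0.25),
# so no data can ever shift its probability relative to another equally flat theory;
# only the theory that sticks its neck out can gain or lose.
```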

So I think the Gubitosi et al. paper starts on the wrong foot by focussing exclusively on “falsifiability”. The issue of whether a theory is testable is complicated in the context of inflation because prior probabilities for most observables are difficult to determine with any confidence because we know next to nothing about either (a) the conditions prevailing in the early Universe prior to the onset of inflation or (b) how properly to define a measure on the space of inflationary models. Even restricting consideration to the simplest models with a single scalar field, initial data are required for the scalar field (and its time derivative) and there is also a potential whose functional form is not known. It is therefore a far from trivial task to assign meaningful prior probabilities on inflationary models and thus extremely difficult to determine the relative probabilities of observables and how these probabilities may or may not be influenced by interactions with data. Moreover, the Bayesian approach involves comparing probabilities of competing theories, so we also have the issue of what to compare inflation with…

The question of whether cosmic inflation (whether in general concept or in the form of a specific model) is testable or not seems to me to boil down to whether it predicts all possible values of relevant observables with equal ease. A theory might be testable in principle, but not testable at a given time if the available technology at that time is not able to make measurements that can distinguish between that theory and another. Most theories have to wait some time before experiments can be designed and built to test them. On the other hand a theory might be untestable even in principle, if it is constructed in such a way that its probability can’t be changed at all by any amount of experimental data. As long as a theory is testable in principle, however, it has the right to be called scientific. If the current available evidence can’t test it we need to do better experiments. In other words, there’s a problem with the evidence, not the theory.

Gubitosi et al. are correct in identifying the important distinction between the inflationary paradigm, which encompasses a large set of specific models each formulated in a different way, and an individual member of that set. I also agree – in contrast to many of my colleagues – that it is actually difficult to argue that the inflationary paradigm is currently testable. But that doesn’t necessarily mean that it isn’t scientific. A theory doesn’t have to have been tested in order to be testable.