Archive for the The Universe and Stuff Category

The Essence of Cosmology is Statistics

Posted in The Universe and Stuff on September 8, 2015 by telescoper

I’m grateful to Licia Verde for sending this picture of me in action at last week’s conference in Castiglioncello.

[Image: conference photograph featuring the McVittie quote]

The quote is one I use quite regularly, as its source is quite surprising. It is by George McVittie and appears in the Preface to the Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, which took place in 1956. It is surprising for two reasons. One is that McVittie is more strongly associated with theoretical cosmology than with statistics. In fact I have one of his books, the first edition of which was published in 1937:

[Image: McVittie's cosmology textbook]

There’s a bit in the book about observational cosmology, but basically it’s wall-to-wall Christoffel symbols!

The other surprising thing is that way back in 1956 there was precious little statistical information relevant to cosmology anyway, a far cry from the situation today with our plethora of maps and galaxy surveys. What he was saying, though, was that statistics is all about making inferences based on partial or incomplete data. Given that the subject of cosmology is the entire Universe, it is obvious that we will never have complete data (i.e. we will never know everything). Hence cosmology is essentially statistical. This is true of other fields too, but in cosmology it is taken to an extreme. George McVittie passed away in 1988, so he didn’t really live long enough to see this statement fulfilled, but it certainly has been over the last couple of decades!

P.S. Although he spent much of his working life in the East End of London (at Queen Mary College), George McVittie should not be confused with the even more famous, or rather infamous, Jack McVitie.

“Credit” needn’t mean “Authorship”

Posted in Science Politics, The Universe and Stuff on September 4, 2015 by telescoper

I’ve already posted about the absurdity of scientific papers with ridiculously long author lists but this issue has recently come alive again with the revelation that the compilers of the Times Higher World University Rankings decided to exclude such papers entirely from their analysis of citation statistics.

Large collaborations – involving not only scientists but also engineers, instrument builders, computer programmers and data analysts – are the norm in some fields of science, especially (but not exclusively) experimental particle physics, so the arbitrary decision to omit such works from bibliometric analysis is not only idiotic but also potentially damaging to a number of disciplines. The “logic” behind this decision is that papers with “freakish” author lists might distort analyses of citation impact, even allowing – heaven forbid – small institutions with a strong involvement in world-leading studies such as those associated with the Large Hadron Collider to do well compared with larger institutions that are not involved in such collaborations. If what you do doesn’t fit comfortably within a narrow and simplistic method of evaluating research, then it must be excluded even if it is the best in the world. A sensible person would realise that if the method doesn’t give proper credit then you need a better method, but the bean counters at the Times Higher have decided to give no credit at all to research conducted in this way. The consequences of putting the bibliometric cart before the scientific horse could be disastrous, as institutions find their involvement in international collaborations dragging them down the league tables. I despair of the obsession with league tables, because these rankings involve trying to shoehorn a huge amount of complicated information into a single figure of merit. This is not only pointless, but could also drive behaviours that are destructive to entire disciplines.

That said, there is no denying that particle physics, cosmology and other disciplines that operate through large teams must share part of the blame. These collaborations have achieved brilliant successes through the imagination and resourcefulness of the people involved. Where imagination has failed, however, is in the continued insistence that the only way to give credit to members of a consortium is by making them all authors of scientific papers. In the example I blogged about a few months ago, this blinkered approach generated a paper with more than 5000 authors; of the 33 pages in the article, no fewer than 24 were taken up with the list of authors.

Papers just don’t have five thousand “authors”. I suspect that only about 1% of these “authors” have even read the paper. That doesn’t mean that the other 99% didn’t do immensely valuable work. It does mean that pretending they participated in writing the article that describes their work isn’t the right way to acknowledge their contribution. How are young scientists supposed to carve out a reputation if their name is always buried in immensely long author lists? The very system that attempts to give them credit renders that credit worthless. Instead of looking at publication lists, appointment panels have to rely on reference letters, which means early career researchers are left depending on the power of patronage.

As science evolves it is extremely important that the methods for disseminating scientific results evolve too. The trouble is that they aren’t evolving. We remain obsessed with archaic modes of publication, partly because of innate conservatism and partly because the lucrative publishing industry benefits from the status quo. The system is clearly broken, but the scientific community carries on regardless. When there are so many brilliant minds engaged in this sort of research, why are so few willing to challenge an orthodoxy that has long outlived its usefulness? Change is needed, not to make life simpler for the compilers of league tables, but for the sake of science itself.

I’m not sure what is to be done, but it’s an urgent problem which looks set to develop very rapidly into an emergency. One idea appears in a paper on the arXiv with the abstract:

Science and engineering research increasingly relies on activities that facilitate research but are not currently rewarded or recognized, such as: data sharing; developing common data resources, software and methodologies; and annotating data and publications. To promote and advance these activities, we must develop mechanisms for assigning credit, facilitate the appropriate attribution of research outcomes, devise incentives for activities that facilitate research, and allocate funds to maximize return on investment. In this article, we focus on addressing the issue of assigning credit for both direct and indirect contributions, specifically by using JSON-LD to implement a prototype transitive credit system.
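The core idea – transitive credit – is easy to sketch. Here is a toy illustration (hypothetical names and weights of my own; the paper itself proposes a JSON-LD schema rather than this sort of dictionary): each output assigns fractional credit to its contributors, some of which may themselves be other outputs such as software or data, and a person’s share of credit for a paper is the sum of the products of weights along every chain leading to them.

```python
# Toy sketch of transitive credit (hypothetical names and weights; the
# paper proposes a JSON-LD schema, not this dictionary). Each output
# assigns fractional credit to its contributors, which may themselves
# be other outputs such as software or data.
credit_map = {
    "paper_A":    {"alice": 0.5, "software_X": 0.3, "paper_B": 0.2},
    "software_X": {"bob": 0.8, "carol": 0.2},
    "paper_B":    {"carol": 1.0},
}

def transitive_credit(output, person):
    """Sum the products of weights along every chain from output to person."""
    total = 0.0
    for contributor, weight in credit_map.get(output, {}).items():
        if contributor == person:
            total += weight
        else:
            total += weight * transitive_credit(contributor, person)
    return total

# Carol earns indirect credit through software_X as well as direct
# credit through paper_B: 0.3*0.2 + 0.2*1.0 = 0.26
print(transitive_credit("paper_A", "carol"))
```

The attraction is that a software author’s contribution propagates automatically to every paper that builds on that software, without the author having to appear on any author list.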

I strongly recommend this piece. I don’t think it offers a complete solution, but it certainly contains many interesting ideas. For the situation to improve, however, we have to accept that there is a problem. As things stand, far too many senior scientists are in denial. This has to change.


The Best of Times

Posted in Literature, The Universe and Stuff on September 3, 2015 by telescoper

Well, I made it to Pisa Airport in time to sample the Club Europe lounge, which offers free wifi access (as well as other luxuries). It seems I have a bit of time before my flight, so I thought I’d do a quick post about this morning’s events. I had the honour to be asked, along with Rocky Kolb, to deliver the concluding remarks for the meeting I’ve been at since Monday. Rocky and I had a quick discussion yesterday about what we should do and we agreed that we shouldn’t try to do a conventional “summary” of the workshop, but instead try something different. In the end we turned to Charles Dickens for inspiration and based the closing remarks on the following text:

IT WAS the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way…

This is of course part of the famous opening paragraph of Book 1 of A Tale of Two Cities.

The idea was to use the text to discuss different perspectives on the state of the Universe, or at least of cosmology. For example, the fact that we now have huge amounts of cosmological data might lead you to the view that this is “the best of times” for cosmology. On the other hand, this has made life much harder for theorists, as everything is now so heavily constrained that it is much more difficult to generate viable alternative models than it was thirty years ago. We also now have a standard cosmological model which some physicists believe in very strongly, whereas others are much more skeptical. This is the “epoch of belief” for some but the “epoch of incredulity” for others. Now that the field is dominated by large collaborations in which it is hard for younger researchers to establish themselves, is this a “winter of despair”, or do the opportunities presented by the new era of cosmology offer a “spring of hope”?

I haven’t got time to summarize all the points that came up, but it was fun acting as a “feed” for Rocky who had a stream of amusing and insightful comments. There aren’t any “right” or “wrong” answers of course, but it seemed to be an interesting way to get people thinking about where on the “best of times/worst of times” axis they would position themselves.

My own opinion is that cosmology has changed since I started (thirty years ago) and the challenges faced by the current generation are different from, and in many ways tougher than, those faced by my generation, who were lucky to have been able to learn quite a lot using relatively simple ideas and techniques. Now most of the “easy” stuff has been done, but solving difficult puzzles is immensely rewarding, not only for the scientific insights the answers reveal but also for their own sake. The time to despair about a field is not when it gets tough, but when it stops being fun. I’m glad to say we’re a long way from that situation in cosmology.


Adventures with the One-Point Distribution Function

Posted in Bad Statistics, Books, Talks and Reviews, The Universe and Stuff on September 1, 2015 by telescoper

As I promised a few people, here are the slides I used for my talk earlier today at the meeting I am attending. Actually I was given only 30 minutes and used up a lot of that time on two things that haven’t got much to do with the title. One was a quiz to identify the six famous astronomers (or physicists) who had made important contributions to statistics (Slide 2) and the other was on some issues that arose during the discussion session yesterday evening. I didn’t in the end talk much about the topic given in the title, which was about how, despite learning a huge amount about certain aspects of galaxy clustering, we are still far from a good understanding of the one-point distribution of density fluctuations. I guess I’ll get the chance to talk more about that in the near future!

P.S. I think the six famous faces should be easy to identify, so there are no prizes but please feel free to guess through the comments box!

Statistics in Astronomy

Posted in Bad Statistics, The Universe and Stuff on August 29, 2015 by telescoper

A few people at the STFC Summer School for new PhD students in Cardiff last week asked if I could share the slides. I’ve given the Powerpoint presentation to the organizers, so presumably they will make it available, but I thought I’d include it here too. I’ve corrected a couple of glitches I introduced trying to do some last-minute hacking just before my talk!

As you will infer from the slides, I decided not to compress an entire course on statistical methods into a one-hour talk. Instead I tried to focus on basic principles, primarily to get across the importance of Bayesian methods for tackling the usual tasks of hypothesis testing and parameter estimation. The Bayesian framework offers the only mathematically consistent way of tackling such problems and should therefore be the preferred method of using data to test theories. Of course if you have data but no theory, or a theory but no data, any method is going to struggle. And if you have neither data nor theory you’d be better off getting one or the other before trying to do anything. Failing that, you could always go down the pub.
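For anyone who wants to see the machinery at its simplest, here is a minimal grid-based example (a toy of my own, not taken from the slides): inferring a success probability from binomial data via posterior ∝ likelihood × prior.

```python
import numpy as np

# Toy Bayesian parameter estimation on a grid (not from the slides):
# infer a success probability from k successes in n trials using
# posterior ∝ likelihood × prior.
n, k = 20, 14                        # hypothetical data
p = np.linspace(0.0, 1.0, 1001)      # grid over the parameter
prior = np.ones_like(p)              # flat prior
likelihood = p**k * (1.0 - p)**(n - k)

posterior = likelihood * prior
dp = p[1] - p[0]
posterior /= posterior.sum() * dp    # normalise to unit area

mean = (p * posterior).sum() * dp
print(f"posterior mean = {mean:.3f}")  # close to (k+1)/(n+2) ≈ 0.682
```

The same three ingredients – likelihood, prior, normalisation – carry over unchanged to any parameter-estimation problem; only the grid gets bigger.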

Rather than just leave it at that, I thought I’d append some stuff I’ve written previously on this blog, many years ago, about the interesting historical connections between Astronomy and Statistics.

Once the basics of mathematical probability had been worked out, it became possible to think about applying probabilistic notions to problems in natural philosophy. Not surprisingly, many of these problems were of astronomical origin but, on the way, the astronomers that tackled them also derived some of the basic concepts of statistical theory and practice. Statistics wasn’t just something that astronomers took off the shelf and used; they made fundamental contributions to the development of the subject itself.

The modern subject we now know as physics really began in the 16th and 17th centuries, although at that time it was usually called Natural Philosophy. The greatest early work in theoretical physics was undoubtedly Newton’s great Principia, published in 1687, which presented his idea of universal gravitation, which, together with his famous three laws of motion, enabled him to account for the orbits of the planets around the Sun. But majestic though Newton’s achievements undoubtedly were, I think it is fair to say that the originator of modern physics was Galileo Galilei.

Galileo wasn’t as much of a mathematical genius as Newton, but he was highly imaginative, versatile and (very much unlike Newton) had an outgoing personality. He was also an able musician, fine artist and talented writer: in other words a true Renaissance man.  His fame as a scientist largely depends on discoveries he made with the telescope. In particular, in 1610 he observed the four largest satellites of Jupiter, the phases of Venus and sunspots. He immediately leapt to the conclusion that not everything in the sky could be orbiting the Earth and openly promoted the Copernican view that the Sun was at the centre of the solar system with the planets orbiting around it. The Catholic Church was resistant to these ideas. He was hauled up in front of the Inquisition and placed under house arrest. He died in the year Newton was born (1642).

These aspects of Galileo’s life are probably familiar to most readers, but hidden away among his scientific manuscripts and notebooks is an important first step towards a systematic method of statistical data analysis. Galileo performed numerous experiments, though he certainly didn’t carry out the one with which he is most commonly credited. He did establish that the speed at which bodies fall is independent of their weight, not by dropping things off the Leaning Tower of Pisa but by rolling balls down inclined slopes. In the course of his numerous forays into experimental physics, Galileo realised that however carefully he took measurements, the simplicity of the equipment available to him left him with quite large uncertainties in some of the results. He was able to estimate the accuracy of his measurements using repeated trials, and sometimes ended up with a situation in which some measurements had larger estimated errors than others. This is a common occurrence in many kinds of experiment to this day.

Very often the problem we have in front of us is to measure two variables in an experiment, say X and Y. It doesn’t really matter what these two things are, except that X is assumed to be something one can control or measure easily and Y is whatever it is the experiment is supposed to yield information about. In order to establish whether there is a relationship between X and Y one can imagine a series of experiments where X is systematically varied and the resulting Y measured.  The pairs of (X,Y) values can then be plotted on a graph like the example shown in the Figure.

[Figure: example scatter plot of (X,Y) measurements showing a roughly linear trend]

In this example it certainly looks like there is a straight line linking Y and X, but with small deviations above and below the line caused by the errors in measurement of Y. You could quite easily take a ruler and draw a line of “best fit” by eye through these measurements. I spent many a tedious afternoon in the physics labs doing this sort of thing when I was at school. Ideally, though, what one wants is some procedure for fitting a mathematical function to a set of data automatically, without requiring any subjective intervention or artistic skill. Galileo found a way to do this. Imagine you have a set of pairs of measurements (xi,yi) to which you would like to fit a straight line of the form y=mx+c. One way to do it is to find the line that minimizes some measure of the spread of the measured values around the theoretical line. The way Galileo did this was to work out the sum of the differences between the measured yi and the predicted values mxi+c at the measured values x=xi. He used the absolute difference |yi-(mxi+c)| so that the resulting optimal line would, roughly speaking, have as many of the measured points above it as below it. This general idea is now part of the standard practice of data analysis and, as far as I am aware, Galileo was the first scientist to grapple with the problem of dealing properly with experimental error.

[Figure: illustration of the absolute deviations |yi-(mxi+c)| between the measured points and the fitted line]
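Galileo’s criterion translates directly into a few lines of modern code. Here is a minimal sketch (with made-up data): find the slope m and intercept c that minimize the sum of absolute deviations.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: a straight line plus noise (illustrative values only)
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Galileo's criterion: minimise the sum of |y_i - (m x_i + c)|
def total_absolute_deviation(params):
    m, c = params
    return np.sum(np.abs(y - (m * x + c)))

# The objective is not smooth, so use the derivative-free Nelder-Mead method
result = minimize(total_absolute_deviation, x0=[1.0, 0.0], method="Nelder-Mead")
m_fit, c_fit = result.x
print(f"Least-absolute-deviations fit: m = {m_fit:.3f}, c = {c_fit:.3f}")
```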

The method used by Galileo was not quite the best way to crack the puzzle, but he had it almost right. It was again an astronomer who provided the missing piece and gave us essentially the same method used by statisticians (and astronomers) today.

Carl Friedrich Gauss was undoubtedly one of the greatest mathematicians of all time, so it might be objected that he wasn’t really an astronomer. Nevertheless, he was director of the Observatory at Göttingen for most of his working life and was a keen observer and experimentalist. In 1809, he developed Galileo’s ideas into the method of least squares, which is still used today for curve fitting.

This approach follows basically the same procedure but minimizes the sum of [yi-(mxi+c)]² rather than |yi-(mxi+c)|. This leads to a much more elegant mathematical treatment of the resulting deviations – the “residuals”. Gauss also did fundamental work on the mathematical theory of errors in general. The normal distribution is often called the Gaussian curve in his honour.
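In code, the change from Galileo’s criterion to least squares is small, but the squared objective has a closed-form solution – here evaluated by np.polyfit on the same kind of made-up data as above:

```python
import numpy as np

# Least squares minimises the sum of [y_i - (m x_i + c)]^2. Unlike the
# absolute-deviation criterion it has a closed-form solution, which
# np.polyfit with deg=1 evaluates directly.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

m_ls, c_ls = np.polyfit(x, y, deg=1)
print(f"Least-squares fit: m = {m_ls:.3f}, c = {c_ls:.3f}")
```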

After Galileo, the development of statistics as a means of data analysis in natural philosophy was dominated by astronomers. I can’t possibly go systematically through all the significant contributors, but I think it is worth devoting a paragraph or two to a few famous names.

I’ve already written on this blog about Jakob Bernoulli, whose famous book on probability was (probably) written during the 1690s. But Jakob was just one member of an extraordinary Swiss family that produced at least 11 important figures in the history of mathematics. Among them was Daniel Bernoulli, who was born in 1700. Along with the other members of his famous family, he had interests that ranged from astronomy to zoology. He is perhaps most famous for his work on fluid flows, which forms the basis of much of modern hydrodynamics, especially Bernoulli’s principle, which accounts for changes in pressure as a gas or liquid flows along a pipe of varying width.

But the elder Jakob’s work on gambling clearly also had some effect on Daniel, as in 1735 the younger Bernoulli published an exceptionally clever study involving the application of probability theory to astronomy. It had been known for centuries that the orbits of the planets are confined to the same part of the sky as seen from Earth, a narrow band called the Zodiac. This is because the Earth and the planets orbit in approximately the same plane around the Sun. The Sun’s path in the sky as the Earth revolves also follows the Zodiac. We now know that the flattened shape of the Solar System holds clues to the processes by which it formed from a rotating cloud of cosmic debris that settled into a disk from which the planets eventually condensed, but this idea was not well established in the time of Daniel Bernoulli. He set himself the challenge of figuring out the chance that the planets would be orbiting in the same plane purely by accident, rather than because some physical process confined them to the plane of a protoplanetary disk. His conclusion? The odds against the inclinations of the planetary orbits being aligned by chance were, well, astronomical.
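A back-of-the-envelope version of the calculation (with my own illustrative numbers, not Bernoulli’s) shows why:

```python
import numpy as np

# Toy version of Daniel Bernoulli's argument (illustrative numbers, not
# his 1735 calculation). If a planet's orbital pole pointed in a random
# direction, its orbit would lie within an angle `tilt` of a given
# reference plane with probability 1 - cos(tilt): the fraction of the
# sphere covered by the two polar caps of opening angle `tilt`.
tilt = np.radians(7.0)        # assumed spread, roughly the observed one
p_one = 1.0 - np.cos(tilt)    # chance for a single randomly oriented orbit

# Take one planet's orbit as the reference plane; the other five of the
# six planets known in Bernoulli's day must then each land near it.
p_all = p_one**5
print(f"per planet: {p_one:.2e};  all five aligned: {p_all:.2e}")
# -> around 2e-11, i.e. odds of tens of billions to one against chance
```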

The next “famous” figure I want to mention is not at all as famous as he should be. John Michell was a Cambridge graduate in divinity who became a village rector near Leeds. His most important idea was the suggestion he made in 1783 that sufficiently massive stars could generate such a strong gravitational pull that light would be unable to escape from them.  These objects are now known as black holes (although the name was coined much later by John Archibald Wheeler). In the context of this story, however, he deserves recognition for his use of a statistical argument that the number of close pairs of stars seen in the sky could not arise by chance. He argued that they had to be physically associated, not fortuitous alignments. Michell is therefore credited with the discovery of double stars (or binaries), although compelling observational confirmation had to wait until William Herschel’s work of 1803.
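The flavour of Michell’s reasoning is easy to reproduce with a toy calculation (invented numbers, not his): count how many close pairs you would expect if the bright stars were sprinkled at random over the sky.

```python
import numpy as np
from math import comb

# Toy version of Michell's argument (invented numbers, not his). If
# n_stars are scattered independently and isotropically over the sky,
# each of the C(n, 2) pairs falls within angular separation `sep` with
# probability (1 - cos(sep)) / 2: the fractional sky area of a cap of
# radius `sep` centred on the first star of the pair.
n_stars = 230                      # hypothetical number of bright stars
sep = np.radians(4.0 / 60.0)       # "close pair" threshold of 4 arcminutes
p_pair = (1.0 - np.cos(sep)) / 2.0

expected = comb(n_stars, 2) * p_pair
print(f"expected number of chance pairs: {expected:.4f}")
# Under 0.01 expected, so even a single observed close pair has only
# about a 1% probability of being a fluke; many pairs point to
# physically bound binaries.
```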

It is impossible to overestimate the importance of the role played by Pierre Simon, Marquis de Laplace in the development of statistical theory. His book A Philosophical Essay on Probabilities, which began as an introduction to a much longer and more mathematical work, is probably the first time that a complete framework for the calculation and interpretation of probabilities ever appeared in print. First published in 1814, it is astonishingly modern in outlook.

Laplace began his scientific career as an assistant to Antoine Laurent Lavoisier, one of the founding fathers of chemistry. Laplace’s most important work was in astronomy, specifically in celestial mechanics, which involves explaining the motions of the heavenly bodies using the mathematical theory of dynamics. In 1796 he proposed the theory that the planets were formed from a rotating disk of gas and dust, which is in accord with the earlier assertion by Daniel Bernoulli that the planetary orbits could not be randomly oriented. In 1776 Laplace had also figured out a way of determining the average inclination of the planetary orbits.

A clutch of astronomers, including Laplace, also played important roles in the establishment of the Gaussian or normal distribution. I have already mentioned Gauss’s own part in this story, but other famous astronomers played their part too. The importance of the Gaussian distribution owes a great deal to a mathematical property called the Central Limit Theorem: the distribution of the sum of a large number of independent variables tends to have the Gaussian form. Laplace in 1810 proved a special case of this theorem, and Gauss himself also discussed it at length.
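The theorem is easy to watch in action numerically. Here is a quick (entirely standard) demonstration using sums of uniform random numbers:

```python
import numpy as np

# Central Limit Theorem in action: sums of 30 independent uniform
# variables -- none of which is remotely Gaussian -- are already very
# close to Gaussian-distributed.
rng = np.random.default_rng(0)
sums = rng.uniform(0.0, 1.0, size=(100_000, 30)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()   # standardise to mean 0, sd 1

# Compare sample quantiles with those of a standard normal distribution
for q, z_normal in [(0.16, -0.994), (0.50, 0.000), (0.84, 0.994)]:
    print(f"q = {q:.2f}: sample {np.quantile(z, q):+.3f}, normal {z_normal:+.3f}")
```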

A general proof of the Central Limit Theorem was finally furnished in 1838 by another astronomer, Friedrich Wilhelm Bessel – best known to physicists for the functions named after him – who in the same year was also the first man to measure a star’s distance using the method of parallax. Finally, the name “normal” distribution was coined in 1850 by another astronomer, John Herschel, son of William Herschel.

I hope this gets the message across that the histories of statistics and astronomy are very much linked. Aspiring young astronomers are often dismayed to discover, when they enter research, just how much statistics they need to do. I’ve often complained that physics and astronomy education at universities usually includes almost nothing about statistics, even though it is the one thing you can guarantee to use as a researcher in practically any branch of the subject.

Over the years, statistics has become regarded as slightly disreputable by many physicists, perhaps echoing Rutherford’s comment along the lines of “If your experiment needs statistics, you ought to have done a better experiment”. That’s a silly statement anyway because all experiments have some form of error that must be treated statistically, but it is particularly inapplicable to astronomy which is not experimental but observational. Astronomers need to do statistics, and we owe it to the memory of all the great scientists I mentioned above to do our statistics properly.

A Very Clever Experimental Test of a Bell Inequality

Posted in The Universe and Stuff on August 26, 2015 by telescoper

Travelling and very busy most of today, so not much time to post. I did, however, get time to peruse a very nice paper I saw on the arXiv with the following abstract:

For more than 80 years, the counterintuitive predictions of quantum theory have stimulated debate about the nature of reality. In his seminal work, John Bell proved that no theory of nature that obeys locality and realism can reproduce all the predictions of quantum theory. Bell showed that in any local realist theory the correlations between distant measurements satisfy an inequality and, moreover, that this inequality can be violated according to quantum theory. This provided a recipe for experimental tests of the fundamental principles underlying the laws of nature. In the past decades, numerous ingenious Bell inequality tests have been reported. However, because of experimental limitations, all experiments to date required additional assumptions to obtain a contradiction with local realism, resulting in loopholes. Here we report on a Bell experiment that is free of any such additional assumption and thus directly tests the principles underlying Bell’s inequality. We employ an event-ready scheme that enables the generation of high-fidelity entanglement between distant electron spins. Efficient spin readout avoids the fair sampling assumption (detection loophole), while the use of fast random basis selection and readout combined with a spatial separation of 1.3 km ensure the required locality conditions. We perform 245 trials testing the CHSH-Bell inequality S≤2 and find S=2.42±0.20. A null hypothesis test yields a probability of p=0.039 that a local-realist model for space-like separated sites produces data with a violation at least as large as observed, even when allowing for memory in the devices. This result rules out large classes of local realist theories, and paves the way for implementing device-independent quantum-secure communication and randomness certification.

While there’s nothing particularly surprising about the result – the nonlocality of quantum physics is pretty well established – this is an exceptionally neat experiment, so I encourage you to read the paper!
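To put the numbers quoted in the abstract in context: the CHSH quantity S combines correlations measured with two detector settings on each side; any local realist theory requires |S| ≤ 2, while quantum mechanics predicts values up to 2√2 ≈ 2.83 (Tsirelson’s bound). A quick check of the standard textbook prediction for a spin singlet:

```python
import numpy as np

# Quantum prediction for spin-singlet correlations between measurement
# directions at angles a and b in the same plane: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# CHSH combination with the standard optimal settings. Local realism
# requires |S| <= 2; the quantum prediction saturates Tsirelson's bound.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f}   (2*sqrt(2) = {2 * np.sqrt(2):.3f})")
```

The measured value of S = 2.42 ± 0.20 sits comfortably above the local realist bound of 2 and below the quantum maximum.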

Perhaps some day someone will carry out this, even neater, experiment!

PS Anyone know where I can apply for a randomness certificate?

When scientists help to sell pseudoscience: The many worlds of woo

Posted in The Universe and Stuff on August 25, 2015 by telescoper

Since Professor Moriarty has mentioned me by name (not once, but twice) in his latest post, the least I can do is reblog it. In fact I agree wholeheartedly with his demolition of the pathological industry that has grown up around “Quantum Woo”…

Why traditional scientific journals are redundant

Posted in Open Access, The Universe and Stuff on August 20, 2015 by telescoper

Was it really six years ago that I first blogged about the Academic Journal Racket which siphons off millions from hard-pressed research budgets into the coffers of profiteering publishing houses?

Change has come much more slowly over the last few years than I had anticipated when I wrote that piece, but at least there are signs that other disciplines are finally cottoning on to the fact that the old-style model of learned journals is way past its sell-by date. This has been common knowledge in Physics and Astronomy for some time, as I’ve explained many times on this blog. But, although most wouldn’t like to admit it, academics are really a very conservative bunch.

Question: How many academics does it take to change a lightbulb?

Answer: Change!!???

Today I came across a link to a paper on the arXiv which I should have known about before; it’s as old as my first post on this subject. It’s called Citing and Reading Behaviours in High-Energy Physics. How a Community Stopped Worrying about Journals and Learned to Love Repositories, and it basically demonstrates that in High-Energy Physics there is a massive advantage in publishing papers in open repositories, specifically the arXiv. Here is the killer plot:

[Figure: citation rates for High-Energy Physics papers available on the arXiv versus those that are not]

This contains fairly old data (up to 2009), but I strongly suspect the effect is even more marked now than it was six years ago.

I’d take the argument further, in fact. I’d say that journals are completely unnecessary. I find all my research papers on the arXiv and most of my colleagues do the same. We don’t need journals yet we keep paying for them. The only thing that journals provide is peer review, but that is done free of charge by academics anyway. The profits of their labour go entirely to the publishers.

Fortunately, things will start to change in my own field of astrophysics – for which the picture is very similar to high-energy physics. All we need to do is dispense with the old model of a journal and replace it with a reliable and efficient reviewing system that interfaces with the arXiv. Then we’d have a genuinely useful thing. And it’s not as far off as you might think.

Watch this space.

Have we reached Peak Physics?

Posted in Education, The Universe and Stuff on August 17, 2015 by telescoper

One of the interesting bits of news I picked up concerning last week’s A-level results is a piece from the Institute of Physics about the number of students taking A-level physics. The opening paragraph reads:

Although there was an overall rise of 2% in the number of A-level entries, the number taking physics fell to 36,287 compared with 36,701 last year – the first time numbers have fallen since 2006. The number of girls taking physics rose by 0.5%, however.

The decline is slight, of course, and it’s obviously too early to decide whether we’ve reached Peak Physics or not. It remains the case, however, that Physics departments in UK universities are competing for a very small pool of students with A-levels in that discipline. With some universities, e.g. Newcastle, opening up physics programmes that they had previously closed, competition to recruit students is going to be intense across the sector unless the pool of qualified applicants increases substantially.

The article goes on to speculate that students may be put off doing physics by the perception that it is harder than other subjects. It may even be that some schools – mindful of the dreaded league tables – are deliberately discouraging all but the brightest pupils from studying physics in case their precious league table position is affected.

That’s not a line I wish to pursue here, but I will take the opportunity to rehearse an argument that I have made on this blog before. The idea joins two threads of discussion that have appeared here on a number of occasions. The first is that, despite strenuous efforts by many parties, the fraction of female students taking A-level Physics has flat-lined at 20% for over a decade. This is the reason why the proportion of female physics students at university is the same, i.e. 20%. In short, the problem lies within our school system. This year’s modest increase doesn’t change the picture significantly.

The second line of argument is that A-level Physics is simply not a useful preparation for a Physics degree anyway, because it does not develop the problem-solving skills or the ability to express physical concepts in mathematical language on which university physics depends. Most physics admissions tutors that I know care much more about the performance of students at A-level Mathematics than Physics when it comes to selecting “near misses” during clearing, for example.

Hitherto, most of the effort that has been expended on the first problem has been directed at persuading more girls to do Physics A-level. Since all universities require a Physics A-level for entry into a degree programme, this makes sense but it has not been successful.

I now believe that the only practical way to improve the gender balance on university physics courses is to drop the requirement that applicants have A-level Physics entirely, and only insist on Mathematics (which has a much more even gender mix at entry). I do not believe that this would require many changes to course content, but I do believe it would circumvent the barriers that our current school system places in the way of aspiring female physicists. Not all UK universities seem very interested in widening participation, but those that are should seriously consider this approach.

I am grateful to fellow astronomer Jonathan Pritchard for pointing out to me that a similar proposal has been made to drop A-level Physics as an entry requirement for Civil Engineering degrees, which have a similar problem with gender bias.

Fourier Series, Epicycles and Haemorrhoids

Posted in The Universe and Stuff on August 13, 2015 by telescoper

My attention was drawn to this little video some time ago by the esteemed Professor George Ellis. I don’t know why it has taken me so long to share it here. It’s a nice illustration of the principles of Fourier series, by which any periodic function can be decomposed into a series of sine and cosine functions.

This reminds me of a point I’ve made a few times in popular talks about Astronomy. It’s a common view that Kepler’s laws of planetary motion, according to which the planets move in elliptical orbits around the Sun, constitute a completely different formulation from the previous Ptolemaic system, which involved epicycles and deferents and is generally held to have been much more complicated.

The video demonstrates, however, that epicycles and deferents can be viewed as parts of the construction of a Fourier series. Since elliptical orbits are periodic, it is perfectly valid to represent them in the form of a Fourier series. Therefore, in a sense, there’s nothing so very wrong with epicycles. I admit, however, that a closed-form expression for such an orbit is considerably more compact and elegant than a Fourier representation, and also encapsulates a deeper level of physical understanding.
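To make that concrete, here is a small sketch (my own, not from the video): take a Kepler orbit as a function of time, decompose it with an FFT, and rebuild it from a handful of terms of the form c·e^(ikt) – each of which is exactly an epicycle.

```python
import numpy as np

# Epicycles as a Fourier series: a Kepler orbit written as a complex
# position z(t), sampled uniformly in time, decomposes into terms
# c_k * exp(i k t); each term is a circle (an epicycle) carried around
# by the sum of the terms before it.
n = 1024
M = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # mean anomaly ~ time
ecc = 0.3                                             # orbital eccentricity

# Solve Kepler's equation E - ecc*sin(E) = M by fixed-point iteration
E = M.copy()
for _ in range(50):
    E = M + ecc * np.sin(E)

# Position relative to the focus, with the semi-major axis set to 1
z = (np.cos(E) - ecc) + 1j * np.sqrt(1.0 - ecc**2) * np.sin(E)

c = np.fft.fft(z) / n                 # epicycle amplitudes c_k
k = np.fft.fftfreq(n, d=1.0 / n)      # integer harmonic numbers

# Rebuild the orbit keeping only the 8 largest epicycles
keep = np.argsort(np.abs(c))[::-1][:8]
z8 = sum(c[j] * np.exp(1j * k[j] * M) for j in keep)
print(f"max error with 8 epicycles: {np.max(np.abs(z - z8)):.1e}")
```

The more eccentric the orbit, the more power leaks into higher harmonics and the more epicycles are needed – which is, in effect, what the Ptolemaic astronomers were discovering the hard way.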

It’s not entirely relevant to the rest of this post, but I discovered last week – by reading a book – that Johannes Kepler suffered so badly from haemorrhoids (piles) that he did all his calculations standing up. I just thought I’d share that with you.