No time for a proper post today as I’ve got a lot to do before this afternoon’s meeting of Senate. It’s such a cold and miserable day that I thought it would be a good idea to post this, which I bookmarked some time ago but have never got round to posting. If you enjoy it half as much as I did then I enjoyed it twice as much as you…
Victor Borge at the Opera
Posted in Opera with tags Caro Nome, Marilyn Mulvey, Rigoletto, Victor Borge on November 20, 2013 by telescoper

Sussex Astronomy Research – The Videos!
Posted in The Universe and Stuff with tags astronomy, Astrophysics, Chris Byrnes, Ilian Iliev, Kathy Romer, PhD, PhD Opportunities, Seb Oliver, University of Sussex on November 19, 2013 by telescoper

As autumn turns to winter the thoughts of many an undergraduate turn to the task of applying for PhDs. Nowadays this involves a lot of trawling through webpages looking for interesting projects and suitable funding opportunities.
In order to help prospective postgraduates this year, the Astronomy Centre at the University of Sussex has produced a number of videos to give some information about the available projects. To start with, here are four examples, covering topics in theoretical, computational and observational astrophysics:
For information, we’re expecting to offer at least six PhD studentships in Astronomy for September 2014 entry. Also, there’s a University-wide postgraduate open day coming up on December 4th.
Autumn Leaves (Les Feuilles Mortes)
Posted in Music with tags Autumn Leaves, Edith Piaf, Les Feuilles Mortes on November 18, 2013 by telescoper

The Open Journal for Astrophysics is Open for Test Submissions!
Posted in Open Access with tags Open Access, Open Access Publishing, Open Journal for Astrophysics, The Open Journal for Astrophysics on November 17, 2013 by telescoper

Just a quick announcement that we’re stepping up the testing phase of the Open Journal for Astrophysics and would really appreciate it if astrophysicists and cosmologists out there would help us out by submitting papers for us to run through our swish new refereeing system.
Just to remind you The Open Journal for Astrophysics is completely free both for submission and for access; there are no Author Processing Charges and no subscription payments. All papers will be fully peer-reviewed using a system which is, as far as I’m concerned, far better than any professional astrophysical journal currently offers. All this is provided free by members of the astrophysics community as a service to the astrophysics community.
I know that many will be nervous about submitting the results of their research to such a new venture, but I hope there will be plenty among you who agree with me that the only way we can rid ourselves of the enormous and unnecessary financial burdens placed on us by the academic publishing industry is by proving that we can do the job better by ourselves without their intervention.
The project has changed a little since I suggested the idea last year, but the submission procedure is basically that which I originally envisaged. All you have to do is submit your paper to the arXiv and let us know its reference when this has been accomplished. Our software will then pick up the arXiv posting automatically and put it into our refereeing pipeline.
In future we will have our own LaTeX template to produce a distinctive style for papers, but this is not needed for the testing phase, so feel free to use any LaTeX style you wish for your submission.
For the time being the OJFA website and associated repositories are not publicly available, but that’s just so we can test it thoroughly before it goes fully live, probably early in the new year; at that point all the papers passing peer review during the test phase will be published. I’m really excited about the forthcoming launch which will, I hope, generate quite a lot of publicity about the whole issue of open access publishing.
If anyone has any questions about this please feel free to ask via the comments box. Also please pass this on via twitter, etc. The more, and the more varied, papers we get to handle over the next couple of months the quicker we can get on with the revolution! So what are you waiting for? Let’s have your papers!
Sunset over Falmer Campus
Posted in Brighton, Poetry with tags A. E. Housman, Brighton, Falmer, University of Sussex on November 15, 2013 by telescoper

Ensanguining the skies
How heavily it dies
Into the west away;
Past touch and sight and sound
Not further to be found,
How hopeless under ground
Falls the remorseful day.
Guest Post
Posted in Uncategorized with tags guest post on November 15, 2013 by telescoper

I don’t think I’ll have time to write anything today so until I get a spare moment here’s a guest post:
Gracias a la Vida
Posted in Music with tags Gracias a la Vida, Mercedes Sosa on November 14, 2013 by telescoper

Too busy for anything else, I’m going to post a piece of music I first heard only recently (on Radio 3) but which has been in my head ever since. It’s sung by Mercedes Sosa, an Argentinian singer with roots in the folk music of her native land but with an appeal throughout South America. Gracias a la Vida is probably the most famous song she performed, and when I first heard it on the radio it knocked me sideways; it’s so lyrical and so beautifully sung that it had me close to tears. I can’t really speak Spanish, but my schoolboy knowledge of Latin is enough to translate most of the words reasonably easily; the first line “Gracias a la Vida que me ha dado tanto” means “Thanks to life, which has given me so much”.
The whole of the first verse is:
Gracias a la vida que me ha dado tanto
Me dio dos luceros que cuando los abro
Perfecto distingo lo negro del blanco
Y en el alto cielo su fondo estrellado
Y en las multitudes el hombre que yo amo

(Thanks to life, which has given me so much; it gave me two bright eyes, and when I open them I can tell black from white perfectly, and see in the high sky its starry depths, and among the multitudes the man that I love.)
Hmm. Gorgeous. Latin languages have those lovely open vowels that make poetry seem so natural.
This isn’t just a song about counting your blessings, though. It’s the dark undertone of tragic irony which makes it so powerful. The song was actually written by Violeta Parra, a Chilean composer and songwriter, who took her own life in 1967.
Would Scottish Independence be Good for English Science?
Posted in Politics, Science Politics with tags David Willetts, England, Independence, Politics, Scotland on November 13, 2013 by telescoper

On Monday the Minister for Universities and Science, David Willetts, visited Edinburgh where he took in, among other things, the UK Astronomy Technology Centre and was treated to an explanation of how adaptive optics work. There being less than a year to go before the forthcoming referendum on Scottish independence, the visit was always likely to generate political discussion and this turned out to be the case.
According to a Guardian piece:
Scientists and academics in Scotland would lose access to billions of pounds in grants and the UK’s world-leading research programmes if it became independent, the Westminster government has warned.
David Willetts, the UK science minister, said Scottish universities were “thriving” because of the UK’s generous and highly integrated system for funding scientific research, winning far more funding per head than the UK average.
Unveiling a new UK government paper on the impact of independence on scientific research, Willetts said that despite its size the UK was second only to the United States for the quality of its research.
“We do great things as a single, integrated system and a single integrated system brings with it great strengths,” he said.
Overall spending on scientific research and development in Scottish universities from government, charitable and industry sources was more than £950m in 2011, giving a per capita spend of £180 compared to just £112 per head across the UK as a whole.
It is indeed notable that Scottish universities outperform those in the rest of the United Kingdom when it comes to research, but it has always struck me that using this as an argument against independence is difficult to sustain. In fact it’s rather similar to arguing that because the UK does well out of European funding schemes it should remain in the European Union. The point is that, whether or not a given country benefits from the funding system, it can only do so by following an agenda that isn’t necessarily its own. Scotland benefits from UK Research Council funding, but the Research Councils’ priorities are set by the Westminster government, just as the European Research Council sets (sometimes rather bizarre) policies for its schemes. Who’s to say that Scotland wouldn’t do even better than it does currently by taking control of its own research funding rather than forcing its institutions to pander to Whitehall?
It’s also interesting to look at the flipside of this argument. If Scotland were to become independent, would the “billions” of research funding it would lose (according to Willetts) benefit science in what’s left of the United Kingdom? There are many in England and Wales who think the existing research budget is already spread far too thinly and who would welcome an increase south of the border. If this did happen you could argue that, from a very narrow perspective, Scottish independence would be good for English science.
For what it’s worth, I am a complete agnostic about Scottish independence – I really think it’s for the Scots to decide – but I don’t think it would benefit the rest of the UK from the point of view of science funding. I think it’s much more likely that if Scotland were to leave the United Kingdom then the part of the science budget it currently receives would be cancelled rather than redistributed, which would leave us no better off at all.
The Curse of P-values
Posted in Bad Statistics with tags Bayesian statistics, frequentist statistics, nature, p-values on November 12, 2013 by telescoper

Yesterday evening I noticed a news item in Nature that argues that inappropriate statistical methodology may be undermining the reporting of scientific results. The article focuses on lack of “reproducibility” of results.
The article focuses on the p-value, a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under the null hypothesis. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05.
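To make this concrete, here is a minimal sketch (my own illustration, with made-up data, using only the Python standard library) that estimates the p-value for a sample correlation coefficient by permutation: if the null hypothesis of no correlation were true, shuffling one variate should produce a value of |r| at least as large as the observed one about p of the time.

```python
import random
import statistics

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def permutation_p_value(x, y, n_perm=2000, seed=0):
    """Two-sided p-value: the fraction of random shuffles of y that give
    an |r| at least as large as the observed one, under the null
    hypothesis that x and y are uncorrelated."""
    rng = random.Random(seed)
    r_obs = abs(pearson_r(x, y))
    ys = list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson_r(x, ys)) >= r_obs:
            count += 1
    return count / n_perm

# Made-up data: two independent variates, so the null hypothesis is true
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(40)]
y = [rng.gauss(0, 1) for _ in range(40)]
print(f"r = {pearson_r(x, y):.3f}, p = {permutation_p_value(x, y):.3f}")
```

Since the data here really are uncorrelated, a small p-value would arise only by chance; nothing about the output tells you the probability that the null hypothesis itself is true.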
Anyway, whatever the null hypothesis happens to be, you can see that the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that big under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.
But the p-value merely specifies the probability that you would reject the null hypothesis if it were correct. This is what you would call making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data. To make that sort of statement you would need to specify an alternative distribution, calculate the distribution of the test statistic based on it, and hence determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when it is incorrect. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.
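A quick simulation (a sketch of my own, with arbitrary illustrative numbers) makes the distinction concrete: the Type I error rate is how often a true null hypothesis gets rejected, while the power is how often a false one does.

```python
import random
import statistics

def z_test_rejects(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test of H0: mean == mu0 (known sigma) at the 5% level."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

def rejection_rate(true_mean, n=25, trials=4000, seed=0):
    """Fraction of simulated experiments in which H0: mean == 0 is rejected,
    when the data are actually drawn from N(true_mean, 1)."""
    rng = random.Random(seed)
    rejections = sum(
        z_test_rejects([rng.gauss(true_mean, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return rejections / trials

type_i = rejection_rate(true_mean=0.0)  # null true: should be near 0.05
power = rejection_rate(true_mean=0.5)   # null false: the power of the test
print(f"Type I error rate: {type_i:.3f}")
print(f"Power: {power:.3f} (Type II error rate: {1 - power:.3f})")
```

Note that the 5% figure is a property of the procedure under the null, while the power depends entirely on which alternative you choose to simulate.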
If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. It’s so bizarre, in fact, that I think most people who quote p-values have absolutely no idea what they really mean.
The Nature story mentioned above argues that results quoted with a p-value of 0.05 in fact turn out to be wrong about 25% of the time. There are a number of reasons why this could be the case, including that the p-value is being calculated incorrectly, perhaps because some assumption or other turns out not to be true; a widespread example is assuming that the variates concerned are normally distributed. Unquestioning application of off-the-shelf statistical methods in inappropriate situations is a serious problem in many disciplines, but it is particularly prevalent in the social sciences, where samples are typically rather small.
While I agree with the Nature piece that there’s a problem, I don’t agree with the suggestion that it can be solved simply by choosing stricter criteria, i.e. a p-value of 0.005 rather than 0.05. While it is true that this would throw out a lot of flaky “two-sigma” results, it doesn’t alter the basic problem, which is that the frequentist approach to hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach. In particular, most of the time the p-value is an answer to a question quite different from the one a scientist would actually want to ask, namely what the data have to say about a given hypothesis. I’ve banged on about Bayesian methods quite enough on this blog so I won’t repeat the arguments here, except to say that such approaches focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis. If I had my way I’d ban p-values altogether.
Not that it’s always easy to implement a Bayesian approach. Coincidentally a recent paper on the arXiv discussed an interesting apparent paradox in hypothesis testing that arises in the context of high energy physics, which I thought I’d share here. Here is the abstract:
The Jeffreys-Lindley paradox displays how the use of a p-value (or number of standard deviations z) in a frequentist hypothesis test can lead to inferences that are radically different from those of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930’s and common today. The setting is the test of a point null (such as the Standard Model of elementary particle physics) versus a composite alternative (such as the Standard Model plus a new force of nature with unknown strength). The p-value, as well as the ratio of the likelihood under the null to the maximized likelihood under the alternative, can both strongly disfavor the null, while the Bayesian posterior probability for the null can be arbitrarily large. The professional statistics literature has many impassioned comments on the paradox, yet there is no consensus either on its relevance to scientific communication or on the correct resolution. I believe that the paradox is quite relevant to frontier research in high energy physics, where the model assumptions can evidently be quite different from those in other sciences. This paper is an attempt to explain the situation to both physicists and statisticians, in hopes that further progress can be made.
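A toy version of the calculation (my own numerical sketch, not the paper’s own example) shows the effect. Take a measurement z = 3 standard errors from zero, and test the point null mu = 0 against a composite alternative mu ~ N(0, tau²). The p-value is fixed at about 0.003, apparently strong evidence against the null, yet as the prior width tau grows the Bayes factor swings round to favour the null:

```python
import math

def normal_pdf(x, sigma):
    """Density of N(0, sigma^2) at x."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_factor_null(z, tau):
    """Bayes factor B01 for a point null mu = 0 versus mu ~ N(0, tau^2),
    given a measurement z standard errors from zero (unit standard error).
    The marginal likelihood under the alternative is N(0, 1 + tau^2)."""
    return normal_pdf(z, 1.0) / normal_pdf(z, math.sqrt(1.0 + tau ** 2))

z = 3.0  # a '3 sigma' result: two-sided p-value about 0.003
for tau in (1, 10, 100):
    print(f"tau = {tau:4d}:  B01 = {bayes_factor_null(z, tau):.2f}")
```

For a narrow prior (tau = 1) the data disfavour the null, but for a very diffuse prior the same 3-sigma measurement actually supports it (B01 > 1), because the alternative wastes almost all its prior probability on parameter values the data rule out. That is the paradox in miniature.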
Rather than tell you what I think about this paradox, I thought I’d invite discussion through the comments box…

