Archive for the The Universe and Stuff Category

Jazz and Quantum Entanglement

Posted in Jazz, The Universe and Stuff on May 28, 2015 by telescoper

As regular readers of this blog (Sid and Doris Bonkers) will know, among the various things I write about apart from The Universe and Stuff is my love of Jazz. I don’t often get the chance to combine music with physics in a post so I’m indebted to George Ellis for drawing my attention to this fascinating little video showing a visualisation of the effects of quantum entanglement:

The experiment shown involves pairs of entangled photons. Here is an excerpt from the blurb on Youtube:

The video shows images of single photon patterns, recorded with a triggered intensified CCD camera, where the influence of a measurement of one photon on its entangled partner photon is imaged in real time. In our experiment the immediate change of the monitored mode pattern is a result of the polarization measurement on the distant partner photon.

You can find out more by clicking through to the Youtube page.

While most of my colleagues were completely absorbed by the pictures, I was fascinated by the choice of musical accompaniment. It is in fact Blue Piano Stomp, a wonderful example of classic Jazz from the 1920s featuring the great Johnny Dodds on clarinet (who also wrote the tune) and the great Lil Armstrong (née Hardin) on piano, who just happened to be the first wife of a trumpet player by the name of Louis Armstrong.

So at last I’ve found an example of Jazz entangled with Physics!

P.S. We often bemoan the shortage of female physicists, but Jazz is another field in which women are under-represented and insufficiently celebrated. Lil Hardin was a great piano player and deserves to be much more widely appreciated for her contribution to Jazz history.

 

Phlogiston, Dark Energy and Modified Levity

Posted in History, The Universe and Stuff on May 21, 2015 by telescoper

What happens when something burns?

Had you asked a seventeenth-century scientist that question, the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning was thought to separate the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in the weight of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston, unless phlogiston has negative weight. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, “levity”. Nowadays we would probably say “anti-gravity”.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion: oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from the air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

So why am I rambling on about a scientific theory that has been defunct for more than two centuries?

Well, there just might be a lesson from history about the state of modern cosmology. Not long ago I gave a talk in the fine city of Bath on the topic of Dark Energy and its Discontents. For the cosmologically uninitiated, the standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of this “dark energy”.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate that the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests the Universe has flat spatial sections; and (iii) direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe. A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.
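The arithmetic connecting (ii) and (iii) is simple to sketch. In round numbers, flat spatial sections require the density parameters to sum to unity, so if matter supplies only a quarter of the critical density then something else must make up the rest:

```latex
\Omega_{\rm m} + \Omega_{\Lambda} \simeq 1
\quad\Longrightarrow\quad
\Omega_{\Lambda} \simeq 1 - 0.25 = 0.75 .
```

That “something else” is what gets the label dark energy.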

We don’t know much about what this dark energy is, except that in order to make our current understanding work out it has to produce an effect something like anti-gravity, vaguely reminiscent of the “negative weight” hypothesis mentioned above. In most theories, the dark energy component does this by violating the strong energy condition of general relativity. Alternatively, it might also be accounted for by modifying our theory of gravity in such a way that accounts for anti-gravity in some other way. In the light of the discussion above maybe what we need is a new theory of levity? In other words, maybe we’re taking gravity too seriously?

Anyway, I don’t mind admitting how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists. Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe the dark energy really is phlogiston. That’s got to be worth a paper!

One More for the Bad Statistics in Astronomy File…

Posted in Bad Statistics, The Universe and Stuff on May 20, 2015 by telescoper

It’s been a while since I last posted anything in the file marked Bad Statistics, but I can remedy that this morning with a comment or two on the following paper by Robertson et al., which I found on the arXiv via the Astrostatistics Facebook page. It’s called Stellar activity mimics a habitable-zone planet around Kapteyn’s star and its abstract is as follows:

Kapteyn’s star is an old M subdwarf believed to be a member of the Galactic halo population of stars. A recent study has claimed the existence of two super-Earth planets around the star based on radial velocity (RV) observations. The innermost of these candidate planets–Kapteyn b (P = 48 days)–resides within the circumstellar habitable zone. Given recent progress in understanding the impact of stellar activity in detecting planetary signals, we have analyzed the observed HARPS data for signatures of stellar activity. We find that while Kapteyn’s star is photometrically very stable, a suite of spectral activity indices reveals a large-amplitude rotation signal, and we determine the stellar rotation period to be 143 days. The spectral activity tracers are strongly correlated with the purported RV signal of “planet b,” and the 48-day period is an integer fraction (1/3) of the stellar rotation period. We conclude that Kapteyn b is not a planet in the Habitable Zone, but an artifact of stellar activity.

It’s not really my area of specialism but it seemed an interesting conclusion, so I had a skim through the rest of the paper. Here’s the pertinent figure, Figure 3:

[Figure 3 of Robertson et al.]

It looks like difficult data to do a correlation analysis on and there are lots of questions to be asked about the form of the errors and how the bunching of the data is handled, to give just two examples. I’d like to have seen a much more comprehensive discussion of this in the paper. In particular, the statistic chosen to measure the correlation between variates is the Pearson product-moment correlation coefficient, which is intended to measure linear association between variables. There may indeed be correlations in the plots shown above, but it doesn’t look to me that a straight-line fit characterizes them very well. It looks to me in some of the cases that there are simply two groups of data points…

However, that’s not the real reason for flagging this one up. The real reason is the following statement in the text:

[excerpt from the text of Robertson et al.]

Aargh!

No matter how the p-value is arrived at (see comments above), it says nothing about the “probability of no correlation”. This is an error which is sadly commonplace throughout the scientific literature, not just in astronomy. The point is that the p-value relates to the probability that the given value of the test statistic (in this case the Pearson product-moment correlation coefficient, r) would arise by chance in the sample if the null hypothesis H (in this case that the two variates are uncorrelated) were true. In other words it relates to P(r|H). It does not tell us anything directly about the probability of H. That would require the use of Bayes’ Theorem. If you want to say anything at all about the probability of a hypothesis being true or not you should use a Bayesian approach. And if you don’t want to say anything about the probability of a hypothesis being true or not then what are you trying to do anyway?
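The distinction is easy to demonstrate numerically. Here is a little sketch (my own toy example, nothing to do with the data in the paper) that estimates a p-value for Pearson’s r by permutation, i.e. by simulating the null hypothesis directly. Note what the simulation actually delivers: the frequency of values of r at least as extreme as the observed one given that H is true, which is P(r|H) and nothing else.

```python
import numpy as np

rng = np.random.default_rng(42)

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(np.mean(xs * ys))

# Toy data: two weakly coupled variates (illustrative only)
x = rng.normal(size=50)
y = 0.3 * x + rng.normal(size=50)
r_obs = pearson_r(x, y)

# Simulate the null hypothesis H (no correlation) by shuffling y,
# which destroys any association while keeping both marginals intact.
r_null = np.array([pearson_r(x, rng.permutation(y)) for _ in range(10_000)])

# The p-value: the probability of an |r| this large GIVEN H, i.e. P(r|H).
# It cannot be turned into the probability of H itself without a prior
# and Bayes' theorem.
p_value = float(np.mean(np.abs(r_null) >= abs(r_obs)))
print(f"r = {r_obs:.3f}, p = {p_value:.4f}")
```

Reporting the number printed at the end as “the probability of no correlation” is exactly the mistake flagged above.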

If I had my way I would ban p-values altogether, but if people are going to use them I do wish they would be more careful about the statements they make about them.

A scientific paper with 5000 authors is absurd, but does science need “papers” at all?

Posted in History, Open Access, Science Politics, The Universe and Stuff on May 17, 2015 by telescoper

Nature News has reported on what appears to be the paper with the longest author list on record. This article has so many authors – 5,154 altogether – that 24 pages (out of a total of 33 in the paper) are devoted just to listing them, and only 9 to the actual science. Not surprisingly, the field concerned is experimental particle physics and the paper emanates from the Large Hadron Collider; it involves combining data from the CMS and ATLAS detectors to estimate the mass of the Higgs Boson. In my own fields of astronomy and cosmology, large consortia such as the Planck collaboration are becoming the rule rather than the exception for observational work. Large collaborations have achieved great things not only in physics and astronomy but also in other fields. A paper in genomics with over a thousand authors has recently been published and the trend for ever-increasing size of collaboration seems set to continue.

I’ve got nothing at all against large collaborative projects. Quite the opposite, in fact. They’re enormously valuable not only because frontier research can often only be done that way, but also because of the wider message they send out about the benefits of international cooperation.

Having said that, one thing these large collaborations do is expose the absurdity of the current system of scientific publishing. The existence of a paper with 5000 authors is a reductio ad absurdum proof that the system is broken. Papers simply do not have 5000 “authors”. In fact, I would bet that no more than a handful of the “authors” listed on the record-breaking paper have even read the article, never mind written any of it. Despite this, scientists continue to insist that contributions to scientific research can only be measured by co-authorship of a paper. The LHC collaboration that kicked off this piece includes all kinds of scientists – technicians, engineers, physicists and programmers – at all kinds of levels, from PhD students to full Professors. Why should we insist that this huge range of contributions can only be recognized by shoe-horning the individuals concerned into the author list? The idea of a 100-author paper is palpably absurd, never mind one with fifty times that number.

So how can we assign credit to individuals who belong to large teams of researchers working in collaboration?

For the time being let us assume that we are stuck with authorship as the means of indicating a contribution to the project. Significant issues then arise about how to apportion credit in bibliometric analyses, e.g. through citations. Here is an example of one of the difficulties: (i) if paper A is cited 100 times and has 100 authors should each author get the same credit? and (ii) if paper B is also cited 100 times but only has one author, should this author get the same credit as each of the authors of paper A?

An interesting suggestion over on the e-astronomer a while ago addressed the first question by suggesting that authors be assigned weights depending on their position in the author list. If there are N authors the lead author gets weight N, the next N-1, and so on to the last author who gets a weight 1. If there are 4 authors, the lead gets 4 times as much weight as the last one.

This proposal has some merit but it does not take account of the possibility that the author list is merely alphabetical, which was actually the case in all the Planck publications, for example. Still, it’s less draconian than another suggestion I have heard, which is that the first author gets all the credit and the rest get nothing. At the other extreme there’s the suggestion of using normalized citations, i.e. just dividing the citations equally among the authors and giving them a fraction 1/N each. I think I prefer this last one, in fact, as it seems more democratic and also more rational. I don’t have many publications with large numbers of authors so it doesn’t make that much difference to me which measure you happen to pick. I come out as mediocre on all of them.
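For concreteness, here is a sketch of the two schemes just described (the function names are mine, purely for illustration):

```python
def positional_weights(n_authors):
    """e-astronomer scheme: lead author gets weight N, the next N-1,
    and so on down to 1 for the last author; normalised to sum to 1."""
    total = n_authors * (n_authors + 1) / 2
    return [(n_authors - i) / total for i in range(n_authors)]

def normalised_credit(citations, n_authors):
    """Egalitarian scheme: split the citations equally, 1/N each."""
    return citations / n_authors

# Paper A: 100 citations shared by 100 authors; paper B: 100 citations, one author.
print(normalised_credit(100, 100))   # 1.0 per author of paper A
print(normalised_credit(100, 1))     # 100.0 for the author of paper B

# For a 4-author paper the lead gets 4x the weight of the last author:
print(positional_weights(4))         # [0.4, 0.3, 0.2, 0.1]
```

The numbers make the contrast between the two questions above vivid: under normalized citations a sole author collects a hundred times the credit of each member of the large team, even though the papers are cited equally often.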

No suggestion is ever going to be perfect, however, because each amounts to an attempt to compress all the information about the different contributions and roles within a large collaboration into a single number, which clearly can’t be done algorithmically. For example, the way things work in astronomy is that instrument builders – essential to all observational work and all work based on analysing observations – usually get appended onto the author lists even if they play no role in analysing the final data. This is one of the reasons the resulting papers have such long author lists and why the bibliometric issues are so complex in the first place.

Having thousands of authors who didn’t write a single word of the paper seems absurd, but it’s the only way our current system can acknowledge the contributions made by instrumentalists, technical assistants and all the rest. Without doing this, what can such people have on their CV that shows the value of the work they have done?

What is really needed is a system of credits more like that used in television or film. Writer credits are assigned quite separately from those given to the “director” (of the project, who may or may not have written the final papers), as are those given to the people who got the funding together and helped with the logistics (production credits). Sundry smaller but still vital technical roles could also be credited, such as special effects (i.e. simulations) or lighting (photometric calibration). There might even be a best boy. Many theoretical papers would be classified as “shorts” so they would often be written and directed by one person and with no technical credits.

The point I’m trying to make is that we seem to want to use citations to measure everything all at once but often we want different things. If you want to use citations to judge the suitability of an applicant for a position as a research leader you want someone with lots of directorial credits. If you want a good postdoc you want someone with a proven track-record of technical credits. But I don’t think it makes sense to appoint a research leader on the grounds that they reduced the data for umpteen large surveys. Imagine what would happen if you made someone director of a Hollywood blockbuster on the grounds that they had made the crew’s tea for over a hundred other films.

Another question I’d like to raise is one that has been bothering me for some time. When did it happen that everyone participating in an observational programme expected to be an author of a paper? It certainly hasn’t always been like that.

For example, go back about 90 years to one of the most famous astronomical studies of all time, Eddington‘s measurement of the bending of light by the gravitational field of the Sun. The paper that came out from this was this one

A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations made at the Total Eclipse of May 29, 1919.

Sir F.W. Dyson, F.R.S, Astronomer Royal, Prof. A.S. Eddington, F.R.S., and Mr C. Davidson.

Philosophical Transactions of the Royal Society of London, Series A., Volume 220, pp. 291-333, 1920.

This particular result didn’t involve a collaboration on the same scale as many of today’s but it did entail two expeditions (one to Sobral, in Brazil, and another to the Island of Principe, off the West African coast). Over a dozen people took part in the planning, the preparation of calibration plates, the taking of the eclipse measurements themselves, and so on. And that’s not counting all the people who helped locally in Sobral and Principe.

But notice that the final paper – one of the most important scientific papers of all time – has only 3 authors: Dyson did a great deal of background work getting the funds and organizing the show, but didn’t go on either expedition; Eddington led the Principe expedition and was central to much of the analysis; Davidson was one of the observers at Sobral. Andrew Crommelin, something of an eclipse expert who played a big part in the Sobral measurements, received no credit, and neither did Eddington’s main assistant at Principe.

I don’t know if there was a lot of conflict behind the scenes in arriving at this authorship policy but, as far as I know, it was normal at the time to do things this way. It’s an interesting socio-historical question why and when it changed.

I’ve rambled off a bit so I’ll return to the point that I was trying to get to, which is that in my view the real problem is not so much the question of authorship but the idea of the paper itself. It seems quite clear to me that the academic journal is an anachronism. Digital technology enables us to communicate ideas far more rapidly than in the past and allows much greater levels of interaction between researchers. I agree with Daniel Shanahan that the future for many fields will be defined not in terms of “papers”, which purport to represent “final” research outcomes, but by living documents continuously updated in response to open scrutiny by the community of researchers. I’ve long argued that the modern academic publishing industry is not facilitating but hindering the communication of research. The arXiv has already made academic journals virtually redundant in many branches of physics and astronomy; other disciplines will inevitably follow. The age of the academic journal is drawing to a close. Now to rethink the concept of “the paper”…

A Problems Class in Complex Analysis

Posted in Education, The Universe and Stuff on May 15, 2015 by telescoper

My theoretical physics examination is coming up on Monday and the students are hard at work revising for it (or at least they should be) so I thought I’d lend a hand by deploying some digital technology in the form of the following online interactive video-based learning resource on Complex Analysis:

R.I.P. Sir Sam Edwards

Posted in Biographical, Education, The Universe and Stuff on May 12, 2015 by telescoper

I’ve only found out this morning that Professor Sir Sam Edwards passed away last week, on 7th May 2015 at the age of 87. Although I didn’t really know him at all on a personal level, I did come across him when I was an undergraduate student at the University of Cambridge in the 1980s, so I thought I would post a brief item to mark his passing and to pay my respects.

Sam Edwards taught a second-year course at Cambridge to Physics students, entitled Analytical Dynamics, as a component of Part IB Advanced Physics. It would have been in 1984 that I took it. If memory serves, which is admittedly rather unlikely, this lecture course was optional and intended for those of us who intended to follow theoretical physics Part II, i.e. in the third year.

I have to admit that Sam Edwards was far from the best lecturer I’ve ever had, and I know I’m not alone in that opinion. In fact, not to put too fine a point on it, his lectures were largely incomprehensible and attendance at them fell sharply after the first few. They were, however, based on an excellent set of typewritten notes from which I learned a lot. It wasn’t at all usual for lecturers to hand out printed lecture notes in those days, but I am glad he did. In fact, I still have them now. Here is the first page:

[scan of the first page of Sam Edwards’ lecture notes]

It’s quite heavy stuff, but enormously useful. I have drawn on a few of the examples contained in his handout for my own lectures on related concepts in theoretical physics, so in a sense my students are gaining some benefit from his legacy.

At the time I was an undergraduate student I didn’t know much about the research interests of the lecturers, but I was fascinated to read in his Guardian obituary how much he contributed to the theoretical development of the field of soft condensed matter, which includes the physics of polymers. In those days – I was at Cambridge from 1982 to 1985 – this was a relatively small part of the activity in the Cavendish laboratory but it has grown substantially over the years.

I feel a bit guilty that I didn’t appreciate more at the time what a distinguished physicist he was, but he undoubtedly played a significant part in the environment at Cambridge that gave me such a good start in my own scientific career and was held in enormously high regard by friends and colleagues at Cambridge and beyond.

Rest in peace, Sir Sam Edwards (1928-2015).

Ned Wright’s Dark Energy Piston

Posted in The Universe and Stuff on April 29, 2015 by telescoper

Since Ned Wright picked up on the fact that I borrowed his famous Dark Energy Piston for my talk I thought I’d include it here in all its animated glory to explain a little bit better why I think it was worth taking the piston.

The two important things about dark energy that enable it to reconcile apparently contradictory observations within the framework of general relativity are: (i) that its energy-density does not decrease with the expansion of the Universe (as do other forms of energy, such as radiation); and (ii) that it has negative pressure which, among other things, means that it causes the expansion of the universe to accelerate.

[animation of the Dark Energy Piston]

The Dark Energy Piston (above) shows how these two aspects are related. Suppose the chamber of the piston is filled with “stuff” that has the attributes described above. As the piston moves out the energy density of the dark energy does not decrease, but the volume it occupies increases, so the total amount of energy in the chamber must increase. Since the system depicted here consists only of the piston and the chamber, this extra energy must have been supplied as work done by the piston on the contents of the chamber. For this to have happened the stuff inside must have resisted being expanded, i.e. it must be in tension. In other words it has to have negative pressure.
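The piston argument can be compressed into a line of thermodynamics. The work done on the contents as the piston moves out is dW = −p dV, and if the energy density stays constant as the volume grows then the energy content E = ρc²V must grow with it:

```latex
dE = -p\,dV
\quad\text{and}\quad
E = \rho c^{2} V \;\;(\rho\ \text{constant})
\;\Longrightarrow\;
\rho c^{2}\,dV = -p\,dV
\;\Longrightarrow\;
p = -\rho c^{2} .
```

This is precisely the equation of state quoted further down for the simplest form of dark energy.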

Compare the case of “ordinary” matter, in the form of an ideal gas. In such a case the stuff inside the piston does work pushing it out, and the energy density inside the chamber would therefore decrease.

If it seems strange to you that something that is often called “vacuum energy” has the property that its density does not decrease when it is subjected to expansion, then just consider that a pretty good definition of a vacuum is something that, when you dilute it, you don’t get any less of it!

So how does this dark vacuum energy stuff with negative pressure cause the expansion of the Universe to accelerate?

Well, here’s the equation that governs the dynamical evolution of the Universe:

ä/a = −(4πG/3)(ρ + 3p/c²) + Λc²/3

I’ve included a cosmological constant term (Λ) but ignore this for now. Note that if the pressure p is small (e.g. as it would be for cold dark matter) and the energy density ρ is positive (which it is for all forms of energy we know of) then, in the absence of Λ, the acceleration is always negative, i.e. the universe decelerates. This is in accord with intuition: because gravity always pulls we expect the expansion to slow down by the mutual attraction of all the matter. However, if the pressure is sufficiently negative, the combination in brackets can also be negative, implying accelerated expansion.

In fact if dark energy stuff has an equation of state of the form p = −ρc² then the combination in brackets leads to a fluid with precisely the same effect that a cosmological constant would have, so this is the simplest kind of dark energy.

When Einstein introduced the cosmological constant in 1917 he did it by modifying the left hand side of his field equations, essentially modifying the law of gravitation. This discussion shows that he could instead have modified the right hand side by introducing a vacuum energy with an equation of state p = −ρc². A more detailed discussion of this can be found here.
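The bookkeeping behind that equivalence is short (standard conventions assumed). Moving Λ to the right hand side of the field equations corresponds to a fluid with density and pressure

```latex
\rho_{\Lambda} = \frac{\Lambda c^{2}}{8\pi G}, \qquad
p_{\Lambda} = -\rho_{\Lambda} c^{2}
\quad\Longrightarrow\quad
\rho_{\Lambda} + \frac{3 p_{\Lambda}}{c^{2}} = -2\rho_{\Lambda} < 0 ,
```

so the combination that sources the acceleration is negative: such a fluid drives the expansion to accelerate in exactly the way a positive cosmological constant does.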

Anyway, whichever way you like to think of dark energy, the fact of the matter is that we don’t know how to explain it from a fundamental point of view. The only thing I can be sure of is that, whatever it is in itself, dark energy is a truly terrible name for it.

I’d go for “persistent tension”…

Dark Energy and its Discontents – the Talk

Posted in Biographical, Books, The Universe and Stuff on April 28, 2015 by telescoper

Yet another very busy day, so I just have time to post the slides of the talk I gave last week, on Friday 24th April 2015, entitled Dark Energy and its Discontents, at the very posh-sounding Bath Royal Literary and Scientific Institution. Here is the poster

 

[poster for the lecture]

 

And here are the slides – though I didn’t get through them all on the night!…

Astronomy and Forensic Science – The Herschel Connection

Posted in History, The Universe and Stuff on April 27, 2015 by telescoper

When I was in Bath on Friday evening I made a point of visiting the Herschel Museum, which is located in the house in which Sir William Herschel lived for a time, before moving to Slough.
[image]

Unfortunately I got there too late to go inside. It did remind me, however, of an interesting connection between astronomy and forensic science, through a certain William Herschel…

When I give popular talks about Cosmology, I sometimes look for appropriate analogies or metaphors in detective fiction or television programmes about forensic science. I think cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe; forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens.

Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish the truth about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works. I have a feeling that I’ve stretched this analogy to breaking point but at least it provides some kind of excuse for mentioning the Herschel connection.

In fact the Herschel connection comes through William James Herschel, the grandson of William Herschel and the eldest son of John Herschel, both of whom were eminent astronomers. William James Herschel was not an astronomer, but an important figure in the colonial establishment in India. In the context relevant to this post, however, his claim to fame is that he is credited with being the first European to have recognized the importance of fingerprints for the purposes of identifying individuals. William James Herschel started using fingerprints in this way in India in 1858; some examples are shown below (taken from the wikipedia page).

[fingerprints taken by William James Herschel, 1859-1860]

Later, in 1877 at Hooghly (near Calcutta), he instituted the use of fingerprints on contracts and deeds to prevent the then-rampant repudiation of signatures, and he registered government pensioners’ fingerprints to prevent the collection of money by relatives after a pensioner’s death. Herschel also fingerprinted prisoners upon sentencing to prevent various frauds that were attempted in order to avoid serving a prison sentence.

The use of fingerprints in solving crimes was to come much later, but there’s no doubt that Herschel’s work on this was an important step.

A Happy Hubble Coincidence

Posted in Biographical, Books, The Universe and Stuff on April 25, 2015 by telescoper

[image]

Preoccupied with getting ready for my talk in Bath, I forgot to post an item pointing out that yesterday was the 25th anniversary of the launch of the Hubble Space Telescope. Can it really be so long?

Anyway, many happy returns to Hubble. I did manage to preempt the celebrations, however, by choosing the above picture of the Hubble Ultra Deep Field as the background for the poster advertising the talk.

Anyway it went reasonably well. There was a full house and questions went on quite a while. Thanks to Bath Royal Literary and Scientific Institution for the invitation!