Archive for the “The Universe and Stuff” Category

Summer Science

Posted in The Universe and Stuff on June 29, 2009 by telescoper

Just time for a very quick post today, owing to the hectic nature of the past (and future) few days.

Yesterday (Sunday) morning, I clambered on board a large van full of expensive and bulky gear and we lumbered away from Cardiff, down the M4 and all the way to London. The reason is the Royal Society Summer Science Exhibition, which involves various research groups setting up exhibits and demonstrating their wares to the general public in the splendid environs of the Royal Society building in Carlton House Terrace, just off Pall Mall.

Yesterday and today we’ve been setting up our exhibit, which is about Herschel and Planck (both of which are still working perfectly, in case you wanted to ask). Unloading the van in the sweltering heat yesterday wasn’t much fun, but everyone was very helpful and we got through it. We had temporary flooring to put down, lots of rigging to do, and large flat monitors to hoist onto gantries. I felt a bit like an up-market roadie. Most of the heavy work was done yesterday, though, and we spent today putting the computers and other electronic exhibits together and generally making it all work. I chipped in as best I could, despite my legendary incompetence with practical things. They didn’t let me near anything really valuable anyway.

By about 2pm today we had finished, and I have to say it looks very impressive. Credit to Chris North and the others who spent ages designing it and organizing the logistics of what is a very complicated exhibit. There are scale models of Planck and Herschel, and a full-size model of SPIRE, the instrument on Herschel that was designed and built by the Cardiff team. The complexity of the optical system is quite amazing. Incidentally, I heard a rumour that some test images from SPIRE are going to be released soon. I hear they’re stunning. Watch this space.

As well as all that, there’s an infrared camera attached to a monitor to show your hot bits, and another monitor with a Wii attachment so you can see anywhere on the sky at any wavelength you wish. There are also two touch-screen displays that can take visitors through the science and technology behind these two wonderful satellites. It’s all very interactive, and I think it’s going to be a hit with the hands-on visitors.

To back this all up, we’ve also got mountains of leaflets, mugs, pens and other assorted memorabilia. I think they’ve overestimated how much of this stuff we can dispense in a week, but I’m sure it will come in handy in the future anyway.

An extensive rota has been organized to set the exhibit up and keep it staffed. I had an all-day shift yesterday and was signed up for 8-3 today. Since we actually got everything done a bit early, however, I was given permission to leave. At 3pm today there was a “press preview” of the exhibition which I couldn’t stay for, so I figured I might as well leave before the reptiles started to arrive.

I’ll be on the stand tomorrow, trying to be nice to the public, and back again on Wednesday doing the same. The shifts are only 4 hours at a go, which is good because it’s quite tiring keeping up the enthusiasm. It’s also forecast to be extremely hot, which is another reason to keep the shifts short. I was longing for a beer by the time I finished yesterday.

I’ve also been invited to a “soirée” on Wednesday evening, which is a swanky black tie function at which sundry VIPs view the exhibits and chat with the exhibitors over champagne and canapés. I’m quite looking forward to the chance to indulge myself and hang out with the big nobs, but I can’t say I’m looking forward to wearing the penguin suit when it’s 30°C. Still, as long as the champagne is chilled I’m sure I’ll survive.

Toodle pip.

Preview from Herschel

Posted in The Universe and Stuff on June 20, 2009 by telescoper

I thought you might like to see this image from Herschel, which I got from the ESA website. The Spitzer/MIPS and the Herschel/PACS images of M51 at 160 µm are shown above. The advantage of the larger size of the Herschel telescope is clearly reflected in the much higher resolution of the image: Herschel reveals structures that cannot be discerned in the Spitzer image.

By golly, it seems to work!

Multiversalism

Posted in The Universe and Stuff on June 17, 2009 by telescoper

The word “cosmology” is derived from the Greek κόσμος (“cosmos”) which means, roughly speaking, “the world as considered as an orderly system”. The other side of the coin to “cosmos” is Χάος (“chaos”). In one world-view the Universe comprised two competing aspects: the orderly part that was governed by laws and which could (at least in principle) be predicted, and the “random” part which was disordered and unpredictable. To make progress in scientific cosmology we do need to assume that the Universe obeys laws. We also assume that these laws apply everywhere and for all time or, if they vary, then they vary in accordance with another law.  This is the cosmos that makes cosmology possible.  However, with the rise of quantum theory, and its applications to the theory of subatomic particles and their interactions, the field of cosmology has gradually ceded some of its territory to chaos.

In the early twentieth century, the first mathematical world models were constructed based on Einstein’s general theory of relativity. This is a classical theory, meaning that it describes a system that evolves smoothly with time. It is also entirely deterministic. Given sufficient information to specify the state of the Universe at a particular epoch, it is possible to calculate with certainty what its state will be at some point in the future. In a sense the entire evolutionary history described by these models is not a succession of events laid out in time, but an entity in itself. Every point along the space-time path of a particle is connected to past and future in an unbreakable chain. If ever the word cosmos applied to anything, this is it.

But as the field of relativistic cosmology matured it was realised that these simple classical models could not be regarded as complete, and consequently that the Universe was unlikely to be as predictable as was first thought. The Big Bang model gradually emerged as the favoured cosmological theory during the middle of the last century, between the 1940s and the 1960s. It was not until the 1960s, with the work of Hawking and Penrose, that it was realised that expanding world models based on general relativity inevitably involve a breakdown of known physics at their very beginning. The so-called singularity theorems demonstrate that in any plausible version of the Big Bang model, all physical parameters describing the Universe (such as its density, pressure and temperature) become infinite at the instant of the Big Bang. The existence of this “singularity” means that we do not know what laws, if any, apply at that instant. The Big Bang contains the seeds of its own destruction as a complete theory of the Universe. Although we might be able to explain how the Universe subsequently evolves, we have no idea how to describe the instant of its birth. This is a major embarrassment. Lacking any knowledge of the laws, we don’t even have any rational basis to assign probabilities. We are marooned with a theory that lets in water.

The second important development was the rise of quantum theory and its incorporation into the description of the matter and energy contained within the Universe. Quantum mechanics (and its development into quantum field theory) entails elements of unpredictability. Although we do not know how to interpret this feature of the theory, it seems that any cosmological theory based on quantum theory must include things that can’t be predicted with certainty.

As particle physicists built ever more complete descriptions of the microscopic world using quantum field theory, they also realised that the approaches they had been using for other interactions just wouldn’t work for gravity. Mathematically speaking, general relativity and quantum field theory just don’t fit together. It might have been hoped that quantum gravity theory would help us plug the gap at the very beginning of the Universe, but that has not happened yet because there isn’t such a theory. What we can say about the origin of the Universe is correspondingly extremely limited and mostly speculative, but some of these speculations have had a powerful impact on the subject.

One thing that has changed radically since the early twentieth century is the possibility that our Universe may actually be part of a much larger “collection” of Universes. The potential for semantic confusion here is enormous. The Universe is, by definition, everything that exists. Obviously, therefore, there can only be one Universe. The name given to a Universe that consists of bits and pieces like this is the multiverse.

 There are various ways a multiverse can be realised. In the “Many Worlds” interpretation of quantum mechanics there is supposed to be a plurality of versions of our Universe, but their ontological status is far from clear (at least to me). Do we really have to accept that each of the many worlds is “out there”, or can we get away with using them as inventions to help our calculations?

On the other hand, some plausible models based on quantum field theory do admit the possibility that our observable Universe is part of a collection of mini-universes, each of which “really” exists. It’s hard to explain precisely what I mean by that, but I hope you get my drift. These mini-universes form a classical ensemble in different domains of a single space-time, which is not what happens in quantum multiverses.

According to the Big Bang model, the Universe (or at least the part of it we know about) began about fourteen billion years ago. We do not know whether the Universe is finite or infinite, but we do know that if it has only existed for a finite time we can only observe a finite part of it. We can’t possibly see light from further away than fourteen billion light years because any light signal travelling further than this distance would have to have set out before the Universe began. Roughly speaking, this defines our “horizon”: the maximum distance we are in principle able to see. But the fact that we can’t observe anything beyond our horizon does not mean that such remote things do not exist at all. Our observable “patch” of the Universe might be a tiny part of a colossal structure that extends much further than we can ever hope to see. And this structure might be not at all homogeneous: distant parts of the Universe might be very different from ours, even if our local piece is well described by the Cosmological Principle.

Some astronomers regard this idea as pure metaphysics, but it is motivated by plausible physical theories. The key idea was provided by the theory of cosmic inflation, which I have blogged about already. In the simplest versions of inflation the Universe expands by an enormous factor, perhaps 10^60, in a tiny fraction of a second. This may seem ridiculous, but the energy available to drive this expansion is inconceivably large. Given this phenomenal energy reservoir, it is straightforward to show that such a boost is not at all unreasonable. With inflation, our entire observable Universe could thus have grown from a truly microscopic pre-inflationary region. It is sobering to think that every galaxy, star and planet we can see might have grown from a seed that was smaller than an atom. But the point I am trying to make is that the idea of inflation opens up one’s mind to the idea that the Universe as a whole may be a landscape of unimaginably immense proportions within which our little world may be little more than a pebble. If this is the case then we might plausibly imagine that this landscape varies haphazardly from place to place, producing what may amount to an ensemble of mini-universes. I say “may” because there is as yet no theory that tells us precisely what determines the properties of each hill and valley or the relative probabilities of the different types of terrain.

Many theorists believe that such an ensemble is required if we are to understand how to deal probabilistically with the fundamentally uncertain aspects of modern cosmology. I don’t think this is the case. It is, at least in principle, perfectly possible to apply probabilistic arguments to unique events like the Big Bang using Bayesian inference. If there is an ensemble, of course, then we can discuss proportions within it, and relate these to probabilities too. Bayesians can use frequencies if they are available but do not require them. It is one of the greatest fallacies in science that probabilities need to be interpreted as frequencies.
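The point that Bayesian reasoning needs no ensemble can be made concrete with a toy calculation. The sketch below updates degrees of belief in two rival hypotheses after a single, one-off observation; all the numbers (priors and likelihoods) are entirely made up for illustration, and nothing here is meant to represent a real cosmological calculation:

```python
# Two rival hypotheses H1, H2 and a single one-off observation D.
# Bayes' theorem updates the prior degrees of belief using only the
# likelihoods P(D|H); no ensemble of repeated trials is involved.
# All numbers are invented purely for illustration.
priors = {"H1": 0.5, "H2": 0.5}        # degrees of belief before seeing D
likelihood = {"H1": 0.9, "H2": 0.1}    # assumed P(D | H) for each hypothesis

# P(D) = sum over hypotheses of P(D|H) * P(H)
evidence = sum(priors[h] * likelihood[h] for h in priors)

# P(H|D) = P(D|H) * P(H) / P(D)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}
print(posterior)
```

The posterior probabilities are perfectly well defined even though the "experiment" can never be repeated, which is the whole point: Bayesian probability quantifies belief, not frequency.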

At the crux of many related arguments is the question of why the Universe appears to be so well suited to our existence within it. This fine-tuning appears surprising based on what (little) we know about the origin of the Universe and the many other ways it might apparently have turned out. Does this suggest that it was designed to be so or do we just happen to live in a bit of the multiverse nice enough for us to have evolved and survived in?  

Views on this issue are often boiled down into a choice between a theistic argument and some form of anthropic selection. A while ago I gave a talk at a meeting in Cambridge called God or Multiverse? that was an attempt to construct a dialogue between theologians and cosmologists. I found it interesting, but it didn’t alter my view that science and religion don’t really overlap very much at all on this, in the sense that if you believe in God it doesn’t mean you have to reject the multiverse, or vice-versa. If God can create a Universe, he could create a multiverse too. As it happens, I’m agnostic about both.

So having, I hope, opened up your mind to the possibility that the Universe may be amenable to a frequentist interpretation, I should confess that I think one can actually get along quite nicely without it.  In any case, you will probably have worked out that I don’t really like the multiverse. One reason I don’t like it is that it accepts that some things have no fundamental explanation. We just happen to live in a domain where that’s the way things are. Of course, the Universe may turn out to be like that –  there definitely will be some point at which our puny monkey brains  can’t learn anything more – but if we accept that then we certainly won’t find out if there is really a better answer, i.e. an explanation that isn’t accompanied by an infinite amount of untestable metaphysical baggage. My other objection is that I think it’s cheating to introduce an infinite thing to provide an explanation of fine tuning. Infinity is bad.

Old Talk

Posted in Books, Talks and Reviews, The Universe and Stuff on June 16, 2009 by telescoper

I just stumbled upon a post about a talk I did last year at the Multi-Faith centre at the University of Derby. I’ll let you follow the link to see how the talk and discussion went, but here’s a copy of a photograph of me trying to talk with my mouth full.

Telescope Wars

Posted in Science Politics, The Universe and Stuff on June 13, 2009 by telescoper

Over the last few months the Science and Technology Facilities Council has been setting up a review of its ground-based astronomy programme. The panel conducting the review has produced a consultation document, and is asking for input via an online questionnaire. There will also be a (rather short) public meeting in London on July 9th. The consultation period closes on July 31st.

Reviews of this kind would be necessary in the best of times in order to establish long-term scientific priorities and try to align the provision of facilities with those strategic objectives. Unfortunately, we don’t live in the best of times so the backdrop to the current review is a shrinking pot of money available for “traditional” ground-based astronomy and the consequent need to target planned programmes for the chop.

Andy and Sarah have already blogged about this -and they both know a lot more than me about ground-based astronomy – so I won’t try to cover the same ground as them. I would however, like to make a  couple of points.

The review has to help STFC strike a balance between current facilities and projects for the future. The largest elements of the current ground-based programme include the subscription to ESO (including associated costs for ALMA, which amounts to over £200 Million), the twin 8m telescopes known as Gemini (North and South, about £60 Million), E-Merlin (about £24 Million), UKIRT and JCMT (about £34 Million); figures represent costs over the next 10 years or so. The two biggest projects that the UK would like to get involved in are a European Extremely Large Telescope (E-ELT), an optical telescope currently aimed to be about 42m in diameter, and the Square Kilometre Array, a futuristic radio telescope. Each of these would cost the UK over £100 Million over the next decade.

The consultation document puts it quite succinctly:

It would be unrealistic to imagine that in 2020 the UK would have a large stake in large facilities like E-ELT and SKA, and would also retain all its current ground-based facilities. It is always hard to forego a workhorse facility that has supported an active and successful science programme, in order to start construction of some future facility many years hence. But our bid for the capital costs for E-ELT and/or SKA would not be credible if we do not show that we are willing to do this.

 

I agree that maintaining the current programme as well as acquiring an interest in both E-ELT and SKA is completely implausible. The more relevant question, though, is how deep we have to cut the ongoing astronomy programme in order to afford either of these, or whether we can do that at all. It seems quite likely to me that future funding of the ground-based programme will suffer drastically, both because of cuts to the overall STFC grant that appear inevitable in the next comprehensive spending review and because of the current STFC leadership’s bias in favour of space technology at the expense of science. On the latter point, it is worth noting that it is specifically the ground-based astronomy programme that is being lined up against the wall here; space-based projects of negligible scientific value, such as Moonlite and BepiColombo, are not going to be weighed in the same balance. At the very least, future involvement in a next-generation X-ray telescope should certainly have been in the mixer with other observatory-type facilities on the ground. I fear that the STFC Executive sees the current UK ground-based programme as significantly too large, and would like to squeeze it all into the box marked ESO. I would like to be able to sound more optimistic, but I think that the most likely outcome of this review is therefore that the only current facilities that will survive into the medium term will be those provided through ESO membership. JCMT and UKIRT are nearing the end of their useful life anyway, but the writing is definitely on the wall for both Gemini and E-Merlin. Not that it hasn’t been before now…

If this is the way things go, then the remaining issue is whether we can afford to be involved in both E-ELT and SKA, which seems to me most unlikely. If we have to pick one, which should it be? That is clearly going to be the topic of much debate. In the spirit of the drive for rationalisation I touched on above, it may well be that we don’t do anything at all outside the ESO umbrella. In that case the United Kingdom ends up with a ground-based astronomy programme consisting of the ESO facilities plus a share in the E-ELT (itself an ESO proposal). I think this would be a tragedy because I find the scientific case for SKA much stronger than that for E-ELT; it would have been a closer call if the ELT were still the 100m optical telescope originally proposed many years ago (which I used to call the FLT). I’m sure many will disagree for legitimate scientific reasons (rather than the desire to play “mine’s bigger than yours” with the Americans, who are currently developing a 30m telescope).

I’m sure there will also be many astronomers who would rather have neither SKA nor E-ELT if it means losing access to the suite of smaller telescopes that continue to produce many interesting scientific results. If it came to a vote I’m not sure what the result would be, which is why I want to encourage anyone who has any input to fill in the questionnaire!

A final little wrinkle on this question is the following. Suppose STFC decides  not to support future involvement in SKA – I hope this isn’t the way things turn out, but in our dire financial circumstances it might be – does this make continued funding for E-Merlin more likely or less likely? Answers on a postcard (or even via the comments box)..

Notes from the North

Posted in Biographical, The Universe and Stuff on June 8, 2009 by telescoper

Just time for a quick post today. I’m in Copenhagen for a short meeting entitled “Cosmology and Astroparticle Physics from the LHC to Planck”. The meeting only lasts today and tomorrow morning, but it’s been a lot of fun so far and has offered me the chance to chat with a lot of people I don’t often get the chance to talk to.

I suppose the only thing from the meeting I really want to mention in this short post is the  current status of Planck, which is currently about a million km from Earth. Both instruments (the High Frequency Instrument HFI and Low Frequency Instrument LFI) are still performing fine and the satellite,  having now been injected into its rather large orbit around L2, is  cooling down to its operating temperature. So far so good. There will be more tests at the beginning of July, after which it will start its real business of scanning the sky to make maps of the primordial temperature  fluctuations.

Today I gave my (usual) talk about cosmic anomalies (which I’ve blogged about before), but there were also interesting talks about possible interpretations of the positron excess observed in the direction of the Galactic Centre, on a model of anisotropic dark energy, and a wacky contribution by Igor Novikov about semi-traversable wormholes.

Meanwhile, over lunch and dinner the various European participants of the meeting mulled over the results from the elections to the European parliament which completed yesterday.

The results generally showed a move to the right across Europe. In the United Kingdom this also happened, as the Labour Party’s share of the vote collapsed to just under 16%. I’m not going to shed any tears for them, but I am ashamed to admit that my country will now be represented in the European Parliament by two members of the British National Party – a bunch of neo-Nazi thugs who are doing the best they can for their own ends to exploit people’s discontent with the mainstream parties. Fortunately their share of the vote (about 6%, on a very low turnout) remained relatively small and was, in fact, less than that of the Green Party. Nevertheless, with the 65th anniversary of D-Day only a few days ago, it is depressing that so many people have forgotten the sacrifices that previous generations made to save this country from exactly that kind of fascist. I hope this disaster is not repeated at the next general election. This kind of monstrosity makes the arcane world of cosmology suddenly seem so irrelevant.

Returning to Lognormality

Posted in Biographical, Science Politics, The Universe and Stuff on June 7, 2009 by telescoper

I’m off later today for a short trip to Copenhagen, a place I always enjoy visiting. I particularly remember a very nice time I had there back in 1990 when I was invited by Bernard Jones, who used to work at the Niels Bohr Institute.  I stayed there several weeks over the May/June period which is the best time of year  for Denmark; it’s sufficiently far North that the summer days are very long, and when it’s light until almost midnight it’s very tempting to spend a lot of time out late at night.

As well as being great fun, that little visit also produced my most-cited paper. I’ve never been very good at grabbing citations – I’m more likely to fall off bandwagons rather than jump onto them – but this little paper seems to keep getting citations. It hasn’t got that many by the standards of some papers, but it’s carried on being referred to for almost twenty years, which I’m quite proud of; you can see the citations per year statistics are fairly flat. The model we proposed turned out to be extremely useful in a range of situations, hence the long half-life.

[Citation history plot]

I don’t think this is my best paper, but it’s definitely the one I had most fun working on. I remember we had the idea of doing something with lognormal distributions over coffee one day,  and just a few weeks later the paper was  finished. In some ways it’s the most simple-minded paper I’ve ever written – and that’s up against some pretty stiff competition – but there you go.


The lognormal seemed an interesting idea to explore because it applies to non-linear processes in much the same way as the normal distribution does to linear ones. What I mean is that if you have a quantity Y which is the sum of n independent effects, Y=X1+X2+…+Xn, then the distribution of Y tends to be normal by virtue of the Central Limit Theorem, regardless of the distribution of the Xi. If, however, the process is multiplicative, so that Y=X1×X2×…×Xn, then since log Y = log X1 + log X2 + …+ log Xn the Central Limit Theorem tends to make log Y normal, which is exactly what it means for Y to be lognormal.
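The argument is easy to check numerically. The following sketch multiplies together a hundred independent positive random numbers (uniform draws, chosen purely as a convenient example of a multiplicative factor) and confirms that the logarithm of the product behaves like a normal variable with the mean and variance the Central Limit Theorem predicts:

```python
import math
import random

random.seed(42)

def log_of_product(n):
    """Multiply n independent Uniform(0,1) draws; return the log of the product.

    Since log(X1*X2*...*Xn) = log X1 + ... + log Xn, the Central Limit
    Theorem makes this sum approximately normal, i.e. the product itself
    is approximately lognormal.
    """
    return sum(math.log(random.random()) for _ in range(n))

n, trials = 100, 20000
samples = [log_of_product(n) for _ in range(trials)]

# For Uniform(0,1): E[log X] = -1 and Var[log X] = 1, so the log-product
# should look approximately like Normal(mean = -n, variance = n).
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, var)  # roughly -100 and 100
```

Any other positive random factor would do just as well; only the predicted mean and variance of the logarithm would change.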

The lognormal is a good distribution for things produced by multiplicative processes, such as hierarchical fragmentation or coagulation processes: the distribution of sizes of the pebbles on Brighton beach  is quite a good example. It also crops up quite often in the theory of turbulence.

I’ll mention one other thing about this distribution, just because it’s fun. The lognormal distribution is an example of a distribution that’s not completely determined by knowledge of its moments. Most people assume that if you know all the moments of a distribution then that has to specify the distribution uniquely, but it ain’t necessarily so.
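There is a well-known explicit construction of this: perturb the standard lognormal density by a factor (1 + ε·sin(2π ln x)) and every integer moment is unchanged, because the oscillatory term integrates to zero against each power of x. The sketch below verifies this numerically by brute-force quadrature (substituting u = ln x); it is a demonstration of the classical counter-example, not anything from the paper discussed above:

```python
import math

def moment(k, eps, lo=-12.0, hi=12.0, steps=50000):
    """k-th moment of the density proportional to the standard lognormal
    times (1 + eps*sin(2*pi*ln x)), computed with the substitution u = ln x
    and a simple midpoint rule."""
    du = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        u = lo + (i + 0.5) * du
        gauss = math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
        total += math.exp(k * u) * gauss * (1 + eps * math.sin(2 * math.pi * u)) * du
    return total

# The eps = 0 and eps = 0.5 columns agree for every integer k, even though
# the two densities are visibly different: same moments, different distribution.
for k in range(5):
    print(k, moment(k, 0.0), moment(k, 0.5))
```

For the standard lognormal the k-th moment is exp(k²/2), and the perturbed density reproduces every one of them.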

If you’re wondering why I mentioned citations, it’s because it looks like they’re going to play a big part in the Research Excellence Framework, yet another new bureaucratic exercise to attempt to measure the quality of research done in UK universities. Unfortunately, using citations isn’t straightforward. Different disciplines have hugely different citation rates, for one thing. Should one count self-citations? Also, how do you apportion citations to multi-author papers? Suppose a paper with a thousand citations has 25 authors. Does each of them get the thousand citations, or should each get 1000/25? Or, to put it another way, how does a single-author paper with 100 citations compare to a 50-author paper with 101?

Or perhaps the REF should use the logarithm of the number of citations instead?
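The competing schemes can be compared with a toy calculation using the hypothetical numbers above. The three scoring functions here are my own inventions for illustration, not anything the REF has actually proposed:

```python
import math

# Three hypothetical papers from the examples above: (citations, authors).
papers = [(1000, 25), (100, 1), (101, 50)]

def full_credit(cites, n_authors):
    """Each author is credited with the paper's whole citation count."""
    return cites

def fractional_credit(cites, n_authors):
    """Citations shared equally among the authors."""
    return cites / n_authors

def log_credit(cites, n_authors):
    """Credit by the logarithm of the citation count, shared equally."""
    return math.log10(1 + cites) / n_authors

for cites, n in papers:
    print(f"{cites} citations, {n} authors: "
          f"full={full_credit(cites, n)}, "
          f"fractional={fractional_credit(cites, n):.2f}, "
          f"log={log_credit(cites, n):.3f}")
```

Under full credit the 50-author paper with 101 citations narrowly beats the single-author paper with 100; under fractional credit the single author wins by a factor of fifty. Which ranking is "right" is exactly the question the paragraph above poses.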

Ninety Years On…

Posted in Books, Talks and Reviews, The Universe and Stuff on May 28, 2009 by telescoper

The 29th May 2009 is a very special day that should be marked by anyone interested in the theory of relativity as it is the 90th anniversary of one of the most famous experiments of all time.

On 29th May 1919, measurements were made during total eclipse of the Sun that have gone down in history as vindicating Einstein’s (then) new general theory of relativity. I’ve written quite a lot about this in past years, including a little book and a slightly more technical paper. I decided, though, to post this little piece that is based on an article I wrote for Firstscience.

The Eclipse that Changed the Universe

A total eclipse of the Sun is a moment of magic: a scant few minutes when our perceptions of the whole Universe are turned on their heads. The Sun’s blinding disc is replaced by ghostly pale tentacles surrounding a black heart – an eerie experience witnessed by hundreds of millions of people throughout Europe and the Near East last August.

But one particular eclipse of the Sun, eighty years ago, challenged not only people’s emotional world. It was set to turn the science of the Universe on its head. For over two centuries, scientists had believed Sir Isaac Newton’s view of the Universe. Now his ideas had been challenged by a young German-Swiss scientist, called Albert Einstein. The showdown – Newton vs Einstein – would be the total eclipse of 29 May 1919.

Newton’s position was set out in his monumental Philosophiae Naturalis Principia Mathematica, published in 1687. The Principia – as it’s familiarly known – laid down a set of mathematical laws that described all forms of motion in the Universe. These rules applied as much to the motion of planets around the Sun as to more mundane objects like apples falling from trees.

At the heart of Newton’s concept of the Universe were his ideas about space and time. Space was inflexible, laid out in a way that had been described by the ancient Greek mathematician Euclid in his laws of geometry. To Newton, space was the immovable and unyielding stage on which bodies acted out their motions. Time was also absolute, ticking away inexorably at the same rate for everyone in the Universe.

Sir Isaac Newton by Sir Godfrey Kneller, courtesy of the National Portrait Gallery, London. Newton proposed the first theory of gravity.

For over 200 years, scientists saw the Cosmos through Newton’s eyes. It was a vast clockwork machine, evolving by predetermined rules through regular space, against the beat of an absolute clock. This edifice totally dominated scientific thought, until it was challenged by Albert Einstein.

In 1905, Einstein dispensed with Newton’s absolute nature of space and time. Although born in Germany, during this period of his life he was working as a patent clerk in Berne, Switzerland. He encapsulated his new ideas on motion, space and time in his special theory of relativity. But it took another ten years for Einstein to work out the full consequences of his ideas, including gravity. The general theory of relativity, first aired in 1915, was as complete a description of motion as Newton had prescribed in his Principia. But Einstein’s description of gravity required space to be curved. Whereas for Newton space was an inflexible backdrop, for Einstein it had to bend and flex near massive bodies. This warping of space, in turn, would be responsible for guiding objects such as planets along their orbits.

Albert Einstein and Arthur Eddington: the father of relativity and the man who proved him right. (Royal Observatory Greenwich)

By the time he developed his general theory, Einstein was back in Germany, working in Berlin. But a copy of his general theory of relativity was soon smuggled through war-torn Europe to Cambridge. There it was read by Arthur Stanley Eddington, Britain’s leading astrophysicist. Eddington realised that Einstein’s theory could be tested. If space really was distorted by gravity, then light passing through it would not travel in a straight line, but would follow a curved path. The stronger the force of gravity, the more the light would be bent. The bending would be largest for light passing very close to a very massive body, such as the Sun.

Unfortunately, the most massive objects known to astronomers at the time were also very bright. This was before black holes were seriously considered, and stars provided the strongest gravitational fields known. The Sun was particularly useful, being a star right on our doorstep. But it is impossible to see how the light from faint background stars might be bent by the Sun’s gravity, because the Sun’s light is so bright it completely swamps the light from objects beyond it.

Royal Observatory Greenwich Scientist’s sketch of the path of the vital 1919 eclipse.

Eddington realised the solution. Observe during a total eclipse, when the Sun’s light is blotted out for a few minutes, and you can see distant stars that appear close to the Sun in the sky. If Einstein was right, the Sun’s gravity would shift these stars to slightly different positions, compared to where they are seen in the night sky at other times of the year when the Sun is far away from them. The closer the star appears to the Sun during totality, the bigger the shift would be.

Eddington began to put pressure on the British scientific establishment to organise an experiment. The Astronomer Royal of the time, Sir Frank Watson Dyson, realised that the 1919 eclipse was ideal. Not only was totality unusually long (around six minutes, compared with the two minutes we experienced in 1999) but during totality the Sun would be right in front of the Hyades, a cluster of bright stars.

But at this point the story took a twist. Eddington was a Quaker and, as such, a pacifist. In 1917, after disastrous losses during the Somme offensive, the British government introduced conscription to the armed forces. Eddington refused the draft and was threatened with imprisonment. In the end, Dyson’s intervention was crucial in persuading the government to spare Eddington. His conscription was postponed under the condition that, if the war had finished by 1919, Eddington himself would lead an expedition to measure the bending of light by the Sun. The rest, as they say, is history.

The path of totality of the 1919 eclipse passed from northern Brazil, across the Atlantic Ocean to West Africa. In case of bad weather (amongst other reasons) two expeditions were organised: one to Sobral, in Brazil, and the other to the island of Principe, in the Gulf of Guinea close to the West African coast. Eddington himself went to Principe; the expedition to Sobral was led by Andrew Crommelin from the Royal Observatory at Greenwich.

Royal Observatory Greenwich British scientists in the field at Sobral in 1919.

The expeditions did not go entirely according to plan. When the day of the eclipse (29 May) dawned on Principe, Eddington was greeted with a thunderstorm and torrential rain. By mid-afternoon the skies had partly cleared and he took some pictures through cloud.

Meanwhile, at Sobral, Crommelin had much better weather – but he had made serious errors in setting up his equipment. He focused his main telescope the night before the eclipse, but did not allow for the distortions that would take place as the temperature climbed during the day. Luckily, he had taken a backup telescope along, and this in the end provided the best results of all.

After the eclipse, Eddington himself carefully measured the positions of the stars that appeared near the Sun’s eclipsed image, on the photographic plates exposed at both Sobral and Principe. He then compared them with reference positions taken previously when the Hyades were visible in the night sky. The measurements had to be incredibly accurate, not only because the expected deflections were small. The images of the stars were also quite blurred, because of problems with the telescopes and because they were seen through the light of the Sun’s glowing atmosphere, the solar corona.

Before long the results were ready. Britain’s premier scientific body, the Royal Society, called a special meeting in London on 6 November. Dyson, as Astronomer Royal, took the floor and announced that the measurements did not support Newton’s long-accepted theory of gravity. Instead, they agreed with the predictions of Einstein’s new theory.

Image from Sobral
Royal Observatory Greenwich The final proof: the small red line shows how far the position of the star has been shifted by the Sun’s gravity.

The press reaction was extraordinary. Einstein was immediately propelled onto the front pages of the world’s media and, almost overnight, became a household name. There was more to this than purely the scientific content of his theory. After years of war, the public embraced a moment that moved mankind from the horrors of destruction to the sublimity of the human mind laying bare the secrets of the Cosmos. The two pacifists in the limelight – the British Eddington and the German-born Einstein – were particularly pleased at the reconciliation between their nations brought about by the results.

But the popular perception of the eclipse results differed quite significantly from the way they were viewed in the scientific establishment. Physicists of the day were justifiably cautious. Eddington had needed to make significant corrections to some of the measurements, for various technical reasons, and in the end decided to leave some of the Sobral data out of the calculation entirely. Many scientists were suspicious that he had cooked the books. Although the suspicion lingered for years in some quarters, in the end the results were confirmed at eclipse after eclipse with higher and higher precision.

Image from Hubble

NASA In this cosmic ‘gravitational lens,’ a huge cluster of galaxies distorts the light from more distant galaxies into a pattern of giant arcs.

Nowadays astronomers are so confident of Einstein’s theory that they rely on the bending of light by gravity to make telescopes almost as big as the Universe. When the conditions are right, gravity can shift an object’s position by far more than a microscopic amount. The ideal situation is when we look far out into space, and centre our view not on an individual star like the Sun, but on a cluster of hundreds of galaxies – with a total mass of perhaps 100 million million suns. The space-curvature of this immense ‘gravitational lens’ can gather the light from more remote objects and focus it into brilliant curved arcs in the sky. From the size of the arcs, astronomers can ‘weigh’ the cluster of galaxies.

Einstein didn’t live long enough to see through a gravitational lens, but if he had he would definitely have approved….

A Unified Quantum Theory of the Sexual Interaction

Posted in The Universe and Stuff with tags , , , on May 20, 2009 by telescoper

Recent changes to the criteria for allocating research funding require particle physicists  and astronomers to justify the wider social, cultural and economic impact of their science. In view of the directive to engage in work more directly relevant to the person in the street, I’ve decided to share with you my latest results, which involve the application of ideas from theoretical physics in the wider field of human activity. That is, if you’re one of those people who likes to have sex in a field.

In the simplest theories of the sexual interaction, the eigenstates of the Hamiltonian describing all allowed forms of two-body coupling are identified with the conventional gender states, “Male” and “Female”  denoted |M> and |F> in the Dirac bra-ket notation; note that the bra is superfluous in this context so, as usual, we dispense with it at the outset. Interactions between |M> and |F> states are assumed to be attractive while those between |M> and |M> or |F> and |F> are supposed either to be repulsive or, in some theories, entirely forbidden.

Observational evidence, however, strongly  suggests that two-body interactions involving either F-F or M-M coupling, though suppressed in many  situations, are by no means ruled out  in the manner one would expect from the simplest theory outlined above. Furthermore, experiments indicate that the relevant channel for M-M interactions appears to have a comparable cross-section to that of the standard M-F variety, so a similar form of tunneling is presumably involved. This suggests that a more complete theory could be obtained by a  relatively simple modification of the  version presented above.

Inspired by the recent Nobel prize awarded for the theory of quark mixing, we are now able to present a new, unified theory of the sexual interaction. In our theory the “correct” eigenstates for sexual behaviour are not the conventional |M> and |F> gender states but linear combinations of the form

|M>=cosθ|S> + sinθ|G>

|F>=-sinθ|S>+cosθ|G>

where θ is the Cabibbo mixing angle or, more appropriately in this context, the sexual orientation (measured in degrees). Extension to three states is in principle possible (but a bit complicated) and we will not discuss this issue further.

In this theory each |M> or |F> state is regarded as a linear combination of heterosexual (straight, S)  and homosexual (gay, G) states represented by a rotation of the basis by an angle θ, exactly the same mechanism that accounts for the charge-changing weak interactions between quarks.

For a purely heterosexual state, this angle is zero, in which case we recover the simple theory outlined above. At θ=90° only the G component manifests itself; in this state only classically forbidden interactions are permitted. The general state is however, one with a value of the orientation angle somewhere between these two limits and this permits all forms of interaction, at least with some probability.
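For the numerically inclined, here is a toy sketch in Python (assuming the standard two-state rotation, exactly as in quark mixing; the angle θ=30° is purely illustrative) showing how the probabilities of S-like and G-like behaviour follow from the orientation angle:

```python
import numpy as np

def gender_states(theta):
    """Return |M> and |F> as components on the (|S>, |G>) basis,
    using the standard two-state rotation (same form as Cabibbo mixing)."""
    c, s = np.cos(theta), np.sin(theta)
    M = np.array([c, s])     # |M> = cos(theta)|S> + sin(theta)|G>
    F = np.array([-s, c])    # orthogonal partner state
    return M, F

theta = np.radians(30)       # an illustrative orientation angle
M, F = gender_states(theta)

# Probabilities of S-like and G-like behaviour for a pure |M> state
p_S, p_G = M**2
print(p_S, p_G)              # cos^2(30 deg) = 0.75, sin^2(30 deg) = 0.25

# The rotated basis remains orthonormal, as any self-respecting basis should
assert np.isclose(M @ F, 0) and np.isclose(M @ M, 1)
```

At θ=0 the probabilities collapse to (1, 0) and we recover the simple theory; at θ=90° they flip to (0, 1), as claimed above.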

Note added in proof:  the |G> states do not appear in standard QFT but are motivated by some versions of string theory, especially those involving G-strings.

One immediate consequence of this theory is that a “pure” gender state should be generally regarded as a quantum superposition of “straight” and “gay” states. This differs from a classical theory in that the true state cannot be known with certainty; only the relative frequency of straight and gay behaviour (over a large number of interactions) can be predicted, perhaps explaining the large number of married men to be found on gaydar. The state at any given time is thus entirely determined by a sum over histories up to that moment, taking into account the appropriate action. In the Copenhagen interpretation, collapse one way or another  occurs only when a measurement is made (or when enough Carlsberg is drunk).

If there is a difference in energy of the basis states a pure |M> state can oscillate between |S> and |G> according to a time-dependent phase factor arising when the two states interfere with each other:

|M(t)>=cosθ|S>exp(-iE1t) + sinθ|G>exp(-iE2t);

(obviously we are using natural units here, so that it all looks cleverer than it actually is). This equation is the origin of the expressions  “it’s just a phase he’s going through” and “he swings both ways”. In physics parlance this means that the eigenstates of the sexual interaction do not coincide with the conventional gender types, indicating that sexual behaviour is not necessarily time-invariant for a given body.

Whether single-body phenomena (i.e. self-interactions) can provide insights into this theory  depends, as can be seen from the equation,  on the energies of the relevant states (as is also the case  in neutrino oscillations). If they are equal then there is no oscillation. However,  a detailed discussion of the role of degeneracy is beyond the scope of this analysis.
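Again a sketch (assuming the standard two-state result, exactly as in neutrino oscillations): the interfering phases give P(M→F) = sin²2θ sin²[(E1−E2)t/2], which duly vanishes when the energies are degenerate:

```python
import numpy as np

def p_oscillation(theta, E1, E2, t):
    """Probability that a state prepared as |M> is later found as |F>,
    from the interfering phase factors exp(-i E1 t) and exp(-i E2 t)
    (natural units, so that it all looks cleverer than it actually is)."""
    c, s = np.cos(theta), np.sin(theta)
    M_t = np.array([c * np.exp(-1j * E1 * t),   # |M(t)> on the (|S>, |G>) basis
                    s * np.exp(-1j * E2 * t)])
    F = np.array([-s, c])                        # orthogonal partner state
    return abs(F.conj() @ M_t) ** 2

theta, E1, E2 = np.radians(30), 1.0, 1.5
t_max = np.pi / abs(E1 - E2)     # time of maximum conversion

print(p_oscillation(theta, E1, E2, t_max))   # sin^2(60 deg) = 0.75
print(p_oscillation(theta, E1, E1, t_max))   # degenerate energies: 0.0
```

With equal energies the common phase factors out and the overlap stays zero for all time, confirming the role of degeneracy noted above.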

Self- interactions involving a solitary phase are generally difficult to observe,  although examples have been documented that involve short-lived but highly-excited states  accompanied by various forms of stimulated emission. Unfortunately, however, the resulting fluxes are  not often well measured. This form of interaction also appears to be the current preoccupation of string theorists.

More definitive evidence for the theory might emerge from situations involving some form of entanglement, such as in the examples of M-M and F-F coupling mentioned above.  Non-local interactions of a sexual type are possible in principle, but causality and simultaneity issues exist and most researchers consequently prefer to focus on local interactions, which are generally supposed to be more satisfactory from the point-of-view of reproducibility.

Although the theory is qualitatively successful we need more experimental data to pin down the parameters needed for a robust fit. It is not known, for example, whether the rates of M-M and F-F coupling are similar or, indeed, whether the peak intensity of these interactions, when resonance is reached, is similar to those of the standard M-F form. It is generally accepted, however, that the rate of decay from peak intensity is rather slower for processes involving |F> states than for |M>, which is not so easy to model in this theory, although with a bit of renormalization we can probably explain anything.

Answers to these questions can perhaps be gleaned from observations of many-body processes  (i.e. those with N≥3),  especially if they involve a multiplicity of hardon states (i.e. collective excitations). Only these permit a full exploration of all possible degrees of freedom, although higher-order Feynman diagrams are needed to depict them and they require more complicated group theoretical techniques.  Examples like the one  shown above  – representing a threesome – are not well understood, but undoubtedly contribute significantly to the bi-spectrum.

One might also speculate that in these and other highly excited states,  the sexual interaction may be described by something more like the  electroweak theory in which all forms of interaction occur in a much more symmetric fashion and at much higher rates than at lower energies. That sounds like some kind of party…

It is worth remarking that there may be finer structure than this model takes into account. For example, the |G> state is generally associated with  singlet configurations like those shown on the right. However, G-G coupling is traditionally described in terms of  “top” |t> and “bottom” |b> states, with b-t coupling the preferred mode,  leading to the possibility of doublets or even triplets. It may even prove  necessary to introduce a further mixing angle φ of the form

|G>=cosφ |t> + sinφ |b>

so that the general state of |G>  is “versatile”. However, whether G-G interactions can be adequately described even in this extended theory is a matter for debate until the intensity of t-t and b-b  coupling is more accurately measured.

Finally, we should like to point out the difference between our model and that of the usual quark sextet, in which interacting states are described in terms of three pairs: the bottom (b) and top (t) which we have mentioned already; the strange (s) and charmed (c); and the up (u) and down (d). While it is clear that |b> and |t> do exhibit strong interactions and it appears plausible that |s> and |c> might do likewise, the sexual interaction clearly breaks the isospin symmetry between the |u> and the |d> in both M-M and M-F cases. The “up” state is definitely preferred in all forms of coupling and, indeed, the “down” has only ever been known to engage in weak interactions.

We have recently submitted an application to the Science and Technology Facilities Council for a modest sum (£754 million) to build a large-scale  UK facility  in order to carry out hands-on experimental tests of some aspects of the theory. We hope we can rely on the support of the physics community in agreeing to close down their labs and quit their jobs in order to release the funding needed to support it.

Neophlogistonianism

Posted in The Universe and Stuff with tags , , on May 18, 2009 by telescoper

What happens when something burns?

Ask a seventeenth century scientist that question and the chances are the answer would  have involved the word phlogiston, a name derived from the Greek  φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials and the idea was that it was released into air whenever any such stuff was ignited. The act of burning separated the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until  the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston unless phlogiston has negative weight. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, “levity”. Nowadays we would probably say “anti-gravity”.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion:  oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He  remained a phlogistonian long after making the discovery that should have killed the theory.

So why am I rambling on about a scientific theory that has been defunct for more than two centuries?

Well,  it’s because there just might be a lesson from history about the state of modern cosmology…

The standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of “dark energy”. We don’t know much about what this is, except that in order to make our current understanding work out it has to act like a source of anti-gravity. It does this by violating the strong energy condition of general relativity.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background, which suggests the Universe has flat spatial sections; and (iii) the direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe.

A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.
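To make the bookkeeping explicit (a minimal sketch, using only the round numbers quoted above): spatial flatness requires Ωm + ΩΛ = 1, and in a matter-plus-cosmological-constant universe the deceleration parameter q0 = Ωm/2 − ΩΛ must be negative for the expansion to accelerate:

```python
# Round numbers from the text: clusters give Omega_m ~ 0.25, and the
# CMB flatness constraint requires the total density to be critical.
omega_m = 0.25
omega_lambda = 1.0 - omega_m       # flatness: Omega_m + Omega_Lambda = 1

# Deceleration parameter for a matter + cosmological-constant universe;
# q0 < 0 means the expansion accelerates, as the supernovae suggest.
q0 = omega_m / 2 - omega_lambda
print(omega_lambda, q0)            # 0.75, -0.625
```

Take away the dark energy (ΩΛ = 0) and q0 turns positive: a matter-only universe decelerates, and cannot fit all three observations at once.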

I’ve blogged before, with some levity of my own, about how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous  industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists.

Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe the dark energy really is phlogiston. That’s got to be worth a paper! At least I prefer the name to quintessence.