Archive for the “The Universe and Stuff” Category

Cosmology, Escher and the Field of Screams

Posted in Art, Education, The Universe and Stuff on March 20, 2012 by telescoper

Up early this morning for yet another busy day, I thought I’d post a quick follow-up to my recent item about analogies for teaching physics (especially cosmology).

Another concept related to the cosmic microwave background that people sometimes have problems understanding is that of the last scattering surface.

Various analogies are useful for this. For example, when you find yourself in thick fog you may have the impression that you are surrounded by an impenetrable wall at some specific distance around you. It’s not a physical barrier, of course; it’s just the distance at which there are sufficient water droplets in the air to prevent light from penetrating further. In more technical terms, the optical depth of the fog exceeds unity at the distance at which this wall is seen.

Another, more direct, analogy is provided by the Sun. Here’s a picture of said object, taken through an H-α filter.

What’s surprising to the uninitiated about an image such as this is that the Sun appears to have a distinct edge, like a solid object. The Sun, however, is far from solid. It’s just a ball of hot gas whose density and temperature fall off with distance from its centre. In the inner parts the Sun is basically opaque, and photons of light diffuse outwards extremely slowly because they are efficiently scattered by the plasma. At a certain radius, however, the material becomes transparent and photons travel without hindrance. What you see is the photosphere which is a sharp edge defined by this transition from opaque to transparent.

The physics defining the Sun’s photosphere is much the same as in the Big Bang, except that in the case of the Sun we are outside looking in whereas we are inside the Universe trying to look out. Take a look at this image from M.C. Escher:

The universe isn’t actually made of Angels and Demons – at least not in the standard model – but if you imagine you are in the centre of the picture it nicely represents what it is like looking out through an expanding cosmology. Since light travels with finite speed, the further you look out the further you look back into the past, when things were denser (and hotter). Eventually you reach a point where the whole Universe was as hot as the surface of a star; this is the cosmic photosphere, or the last scattering surface, which is a spherical surface centred on the observer. We can’t see any further than this because what’s beyond is hidden from us by an impenetrable curtain, but if we could see just a little bit further we’d see the Big Bang itself, where the density is infinite – not as a point in space, but all around us.

Although it looks like we’re in a special place (in the middle) of the image, in the Big Bang theory everywhere is equivalent; any observer would see a cosmic photosphere forming a sphere around them.

And while I’m on about last scattering, here’s another analogy which might be useful if the others aren’t. I call this one the Field of Screams.

Imagine you’re in the middle of a very large, perhaps infinite, field crammed full of people, furnished with synchronised watches, each of whom is screaming at the top of their voice. At a certain instant, say time T, everyone everywhere stops screaming.

What do you hear?

Well, you’ll obviously notice that it gets quieter straight away, as the people closest to you have stopped screaming. But you will still hear a sound, because some of the sound entering your ears set out before time T. The speed of sound is 300 m/s or so, so after 1 second you will still hear the sound arriving from people more than 300 metres away. It might be faint, but it would be there. After two seconds you’d still be hearing from people more than 600 metres away, and so on. At any time there’ll be a circle around you, defined by the distance sound can have travelled since the screaming stopped – the Circle of Last Screaming. It would appear that you are at the centre of this circle, but anyone anywhere in the field would form the same impression about what’s happening around them.

Change sound to light, and move from two dimensions to three, and you can see how last scattering produces a spherical surface around you. Simples.
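If you fancy putting numbers on the analogy, here’s a little Python sketch (the function name is mine, and the 300 m/s figure is just the rough value used above) of how the Circle of Last Screaming grows with time:

```python
# Radius of the "Circle of Last Screaming": any sound still arriving
# at time t after everyone stops screaming must have set out from
# at least this far away.
SOUND_SPEED = 300.0  # m/s, the rough figure quoted above

def last_screaming_radius(t_seconds):
    """Distance (metres) beyond which screams can still be heard."""
    return SOUND_SPEED * t_seconds

for t in (1, 2, 10):
    print(f"{t} s after silence: radius = {last_screaming_radius(t):.0f} m")
```

The circle just expands at the speed of sound; swap in the speed of light (and a sphere for the circle) and you have the last scattering surface.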

 

Failed Physics Teaching Analogies

Posted in Education, The Universe and Stuff on March 18, 2012 by telescoper

Last week I deputized for a colleague who was skiving off at an important meeting so, for the first time ever in my current job, I actually got to give a proper lecture on cosmology. As the only out-and-out specialist in cosmology research in the School of Physics and Astronomy at Cardiff, I’ve always thought it a bit strange that I’ve never been asked to teach this subject to undergraduates, but there you are. Ours not to reason why, etc. Anyway, the lecture I gave was about the cosmic microwave background, and since I have taught cosmology elsewhere in the past it was quite easy to cobble something together.

As a lecturer you find, over the years, that various analogies come to mind that you think will help students understand the physical concepts underpinning what’s going on, and that you hope will complement the way those concepts are developed in more mathematical language. Sometimes these seem to work well during the lecture, but only afterwards do you find out they didn’t really serve their intended purpose. Sadly, it sometimes turns out that they confuse rather than enlighten…

For instance, the two key ideas behind the production of the cosmic microwave background are recombination and the consequent decoupling of matter and radiation. In the early stages of the Big Bang there was a hot plasma consisting mainly of protons and electrons in an intense radiation field. Since it was extremely hot back then, the plasma was more-or-less fully ionized, which is to say that the equilibrium for the formation of neutral hydrogen atoms via

p+e^{-} \rightarrow H+ \gamma

lay firmly to the left-hand side. The free electrons scatter radiation very efficiently via Compton scattering

\gamma +e^{-} \rightarrow \gamma + e^{-}

thus establishing thermal equilibrium between the matter and the radiation field. In effect, the plasma is opaque so that the radiation field acquires an accurate black-body spectrum (as observed). As long as the rate of collisions between electrons and photons remains large the radiation temperature adjusts to that of the matter and equilibrium is preserved because matter and radiation are in good thermal contact.

Eventually, however, the temperature falls to a point at which electrons begin to bind with protons to form hydrogen atoms. When this happens the efficiency of scattering falls dramatically and as a consequence the matter and radiation temperatures are no longer coupled together, i.e. decoupling occurs; collisions can no longer keep everything in thermal equilibrium. The matter in the Universe then becomes transparent, and the radiation field propagates freely as a kind of relic of the time when it was last in thermal equilibrium. We see that radiation now, heavily redshifted, as the cosmic microwave background.
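As a back-of-the-envelope illustration of how far back the last scattering surface sits, here’s a short Python sketch. It assumes (these figures aren’t in the text above) that recombination happened when the radiation temperature had fallen to roughly 3000 K, and uses the measured CMB temperature today of about 2.73 K; since the radiation temperature scales as (1 + z), this gives the approximate redshift of last scattering:

```python
# Rough redshift of last scattering.  In an expanding universe the
# radiation temperature scales as T ∝ (1 + z), so 1 + z = T_then / T_now.
T_DECOUPLING = 3000.0  # K, approximate temperature at recombination (assumed)
T_TODAY = 2.725        # K, measured CMB temperature now

z_dec = T_DECOUPLING / T_TODAY - 1
print(f"redshift of last scattering ~ {z_dec:.0f}")
```

which is where the familiar figure of z ~ 1100 comes from.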

So far, so good, but I’ve always thought that everyday analogies are useful to explain physics like this, so I thought of the following. When people are young and energetic, they interact very effectively with everyone around them, and that process allows them to keep in touch with all the latest trends in clothing, music, books, and so on. As you get older you don’t get about so much, and may even get married (which is just like recombination, in that it dramatically reduces your cross-section for interaction with the outside world). Changing trends begin to pass you by and eventually you become a relic, surrounded by records and books you acquired in the past when you were less introverted, and wearing clothes that went out of fashion years ago.

I’ve used this analogy in the past and students generally find it quite amusing even if it has modest explanatory value. I wasn’t best pleased, however, when a few years ago I set an examination question which asked the students to explain the processes of recombination and decoupling. One answer said “Decoupling explains Prof. Coles’ terrible fashion sense”. Grrr.

An even worse example happened when I was teaching particle physics some time ago. I had to explain neutrino oscillations, a process in which neutrinos (which have three distinct flavour states, associated with the electron, mu and tau leptons) can change flavour as they propagate. It’s quite a weird thing to spring on students who previously thought that lepton number was always conserved so I decided to start with an analogy based on more familiar physics.

A charged fermion such as an electron (or in fact anything that has a magnetic moment, which would include, e.g., the neutron) has spin and, according to standard quantum mechanics, the component of this in any direction can be described in terms of two basis states, say |\uparrow> and |\downarrow> for spin in the z direction. In general, however, the spin state will be a superposition of these, e.g.

\frac{1}{\sqrt{2}} \left( |\uparrow> + |\downarrow>\right)

In this example, as long as the particle is travelling through empty space, the probability of finding it with spin “up” is 50%, as is the probability of finding it in the spin “down” state. Once a measurement is made, the state collapses into a definite “up” or “down”, wherein it remains until something else is done to it.

If, on the other hand, the particle is travelling through a region where there is a magnetic field, the “spin-up” and “spin-down” states can acquire different energies owing to the interaction between the spin and the magnetic field. This is important because it means the bits of the wave function describing the up and down states evolve at different rates, and this has measurable consequences: measurements made at different positions yield different probabilities of finding the spin pointing in different directions. In effect, the spin vector of the particle performs a sort of oscillation, similar to the classical phenomenon called precession.
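The precession is easy to demonstrate numerically. In this little sketch (the notation is mine, not from the lecture, and I’m using units with ħ = 1) the up and down components pick up opposite phases e^{∓iEt}, so the probability of finding the spin along the +x direction oscillates as cos²(Et) while the z-direction probabilities stay fixed at 50%:

```python
import cmath
import math

E = 1.0  # energy splitting between the spin states in the field (hbar = 1)

def spin_state(t):
    """The superposition (|up> + |down>)/sqrt(2) evolved for time t:
    the two components acquire opposite phases exp(-/+ i E t)."""
    up = cmath.exp(-1j * E * t) / math.sqrt(2)
    down = cmath.exp(+1j * E * t) / math.sqrt(2)
    return up, down

def prob_spin_x_plus(t):
    """Probability of measuring spin along +x, i.e. the squared overlap
    with the state (|up> + |down>)/sqrt(2).  Works out as cos^2(E t)."""
    up, down = spin_state(t)
    amp = (up + down) / math.sqrt(2)
    return abs(amp) ** 2

def prob_spin_z_up(t):
    """Probability of spin up along z: stays at 1/2, no oscillation."""
    up, _ = spin_state(t)
    return abs(up) ** 2

for t in (0.0, math.pi / 4, math.pi / 2):
    print(f"t={t:.2f}: P(+x)={prob_spin_x_plus(t):.3f}, "
          f"P(up)={prob_spin_z_up(t):.3f}")
```

The oscillation shows up only in measurements along directions other than the field axis, which is exactly the sense in which the spin “precesses”.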

The mathematical description of neutrino oscillations is very similar to this, except that it’s not the spin part of the wavefunction being affected by an external field that breaks the symmetry between “up” and “down”. Instead the flavour part of the wavefunction is “precessing”, because the flavour states don’t coincide with the eigenstates of the Hamiltonian that describes the neutrinos’ evolution. However, it does require that different neutrino types have intrinsically different energies (which, in turn, means that the neutrinos must have different masses), in a way quite similar to the spin-precession example.
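For two flavours, the upshot of this “precession” is the standard oscillation formula P = sin²(2θ) sin²(Δm²L/4E). A minimal sketch in natural units (the parameter values below are purely illustrative, not measured ones):

```python
import math

def oscillation_probability(theta, dm2, L_over_E):
    """Two-flavour flavour-change probability in natural units:
    P = sin^2(2*theta) * sin^2(dm2 * (L/E) / 4)."""
    return math.sin(2 * theta) ** 2 * math.sin(dm2 * L_over_E / 4) ** 2

# With maximal mixing (theta = pi/4) the flavour swings all the way over;
# with a vanishing mass splitting (dm2 = 0) there is no oscillation at all.
print(oscillation_probability(math.pi / 4, 1.0, 2 * math.pi))
print(oscillation_probability(math.pi / 4, 0.0, 2 * math.pi))
```

Note that setting Δm² = 0 kills the oscillation entirely, which is the point made above: oscillations require the neutrinos to have different masses, not a magnetic field.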

Although this isn’t a perfect analogy I thought it was a good way of getting across the basic idea. Unfortunately, however, when I subsequently asked an examination question about neutrino oscillations I got a significant number of answers that said “neutrino oscillations happen when a neutrino travels through a magnetic field…”. Sigh. Neutrinos don’t interact with magnetic fields, you see…

Anyhow, I’m sure there’s more than one reader out there who has had a similar experience with an analogy that wasn’t perhaps as instructive as hoped. Feel free to share through the comments box…

Research Opportunities in the Philosophy of Cosmology

Posted in The Universe and Stuff on March 16, 2012 by telescoper

I got an email this morning telling me about the following interesting opportunities for research fellowships. They are in quite an unusual area – the philosophy of cosmology – and one I’m quite interested in myself, so I thought the advertisement might achieve wider circulation if I posted it on here.

–0–

Applications are invited for two postdoctoral fellowships in the area of philosophy of cosmology, one to be held at Cambridge University and one to be held at Oxford University, starting 1 Jan 2013 to run until 31 Aug 2014. The two positions have similar job-descriptions and the deadline for applications is the same: 18 April 2012.

For more details, see here for the Cambridge fellowship and here for the Oxford fellowship.

Applicants are encouraged to apply for both positions. The Oxford group is led by Joe Silk, Simon Saunders and David Wallace, and that at Cambridge by John Barrow and Jeremy Butterfield.

These appointments are part of the initiative ‘establishing the philosophy of cosmology’, involving a consortium of universities in the UK and USA, funded by the John Templeton Foundation. Its aim is to identify, define and explore new foundational questions in cosmology. Key questions already identified concern:

  • The issue of measure, including potential uses of anthropic reasoning
  • Space-time structure, both at very large and very small scales
  • The cosmological constant problem
  • Entropy, time and complexity, in understanding the various arrows of time
  • Symmetries and invariants, and the nature of the description of the universe as a whole

Applicants with philosophical interests in cosmology outside these areas will also be considered.

For more background on the initiative, see here and the project website (still under construction).

Volumina

Posted in Music, The Universe and Stuff on March 15, 2012 by telescoper

I forgot to mention that, at the end of my talk on Monday evening, a gentleman in the audience who is apparently a regular reader of this blog asked whether I was aware that the composer György Ligeti had written a piece of music called Volumina inspired by the Big Bang. I was indeed aware of this piece, and have a recording of it, but his question gives me the excuse to post a version here. I’m sure at least some of you will have heard some of it before, in fact, as an excerpt featured in the original radio series of The Hitchhiker’s Guide to the Galaxy, which I listened to on the wireless many moons ago.

You might find Volumina a bit perplexing, but I can tell you that in surround sound with the volume up it’s absolutely amazing. My neighbours clearly agree, and were banging on the wall last night to show their appreciation.

Big Bang Acoustics

Posted in The Universe and Stuff on March 12, 2012 by telescoper

It’s National Science and Engineering Week this week and as part of the programme of events in Cardiff we have an open evening at the School of Physics & Astronomy tonight. This will comprise a series of public talks followed by an observing session using the School’s Observatory. I’m actually giving a (short) talk myself, which means it will be a long day, so I’m going to save time by recycling the following from an old blog post on the subject of my talk.

As you probably know, the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past, and that it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves, so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

The above image shows the variations in temperature of the cosmic microwave background as charted by the Wilkinson Microwave Anisotropy Probe about a decade ago. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.
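The “few parts in a hundred thousand” is just the ratio of the two numbers quoted above, as a quick Python check confirms:

```python
# Fractional CMB temperature variation, from the figures quoted above.
T_MEAN = 2.73      # K, mean temperature of the sky
DT_RMS = 0.08e-3   # K, rms fluctuation (0.08 milliKelvin)

fractional = DT_RMS / T_MEAN
print(f"fractional variation ~ {fractional:.1e}")
```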

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see in the CMB corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound (L) logarithmically in terms of the rms pressure level of the sound wave P_{rms} relative to some reference pressure level P_{ref}:

L = 20 \log_{10} \left[ P_{\rm rms}/P_{\rm ref} \right]

(the factor of 20 appears because the energy carried by the wave goes as the square of its amplitude; expressed in terms of energy the factor would be 10).

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10^{-10} times the ambient atmospheric pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order, and these consequently have L=0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

P_{\rm ref} \sim 2\times 10^{-10}\, P_{\rm amb}

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, and the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes, so it all gets a bit messy if you want to do it exactly, but it’s quite easy to get a rough estimate. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the rms temperature variation is of the average CMB temperature, i.e.

P_{\rm rms} \sim \mbox{a few} \times 10^{-5}\, P_{\rm amb}

If we do this, scaling both pressures in the logarithm in proportion to the ambient pressure, the ambient pressure cancels out of the ratio P_{\rm rms}/P_{\rm ref}, which comes out at around 10^{5}.

With our definition of the decibel level we find that waves corresponding to pressure variations of one part in a hundred thousand of the ambient pressure give roughly L=100 dB, while one part in ten thousand gives about L=120 dB. The sound of the Big Bang therefore peaks at levels just a bit less than 120 dB. As you can see in the Figure to the left, this is close to the threshold of pain, but it’s perhaps not as loud as you might have guessed in response to the initial question. Many rock concerts are actually louder than the Big Bang, so I suspect any metalheads in the audience will be distinctly unimpressed.
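The estimate above is a one-liner in Python; this sketch just plugs the rough pressure ratios quoted earlier into the decibel formula:

```python
import math

def decibels(p_rms_over_ambient, p_ref_over_ambient=2e-10):
    """Sound level L = 20 log10(P_rms / P_ref).  Both pressures are
    expressed as fractions of the ambient pressure, which cancels out."""
    return 20 * math.log10(p_rms_over_ambient / p_ref_over_ambient)

print(f"one part in 1e5: {decibels(1e-5):.0f} dB")   # ~94 dB
print(f"one part in 1e4: {decibels(1e-4):.0f} dB")   # ~114 dB
print(f"fluctuations ~ ambient: {decibels(1.0):.0f} dB")  # ~194 dB
```

The first two come out at about 94 dB and 114 dB, close to the round figures quoted above; the last is the ~190 dB yardstick for fluctuations comparable to the ambient pressure.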

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a factor of about 10^{10} in the ratio inside the logarithm, and is pretty much the limit at which sound waves can propagate without distortion. Such waves would have L≈190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of a “Roar” than a “Bang”, because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

PPS. If you would like to hear a series of increasingly sophisticated computer simulations showing how our idea of the sounds accompanying the start of the Universe has evolved over the past few years, please take a look at the following video. It’s amazing how crude the 1995 version seems, compared with that describing the new era of precision cosmology.

A Piece on a Paradox

Posted in The Universe and Stuff on March 7, 2012 by telescoper

Not long ago I posted a short piece about the history of cosmology which got some interesting comments, so I thought I’d try again with a little article I wrote a while ago on the subject of Olbers’ Paradox. This is discussed in almost every astronomy or cosmology textbook, but the resolution isn’t always made as clear as it might be. The wikipedia page on this topic is unusually poor by the standards of wikipedia, and appears to have suffered a severe attack of the fractals.

I’d be interested in any comments on the following attempt.

One of the most basic astronomical observations one can make, without even requiring a telescope, is that the night sky is dark. This fact is so familiar to us that we don’t imagine that it is difficult to explain, or that anything important can be deduced from it. But quite the reverse is true. The observed darkness of the sky at night was regarded for centuries by many outstanding intellects as a paradox that defied explanation: the so-called Olbers’ Paradox.

The starting point from which this paradox is developed is the assumption that the Universe is static, infinite, homogeneous, and Euclidean. Prior to twentieth century developments in observation (Hubble’s Law) and theory (Cosmological Models based on General Relativity), all these assumptions would have appeared quite reasonable to most scientists. In such a Universe, the intensity of light received by an observer from a source falls off as the inverse square of the distance between the two. Consequently, more distant stars or galaxies appear fainter than nearby ones. A star infinitely far away would appear infinitely faint, which suggests that Olbers’ Paradox is avoided by the fact that distant stars (or galaxies) are simply too faint to be seen. But one has to be more careful than this.

Imagine, for simplicity, that all stars shine with the same brightness. Now divide the Universe into a series of narrow concentric spherical shells, in the manner of an onion. The light from each source within a shell of radius r falls off as r^{-2}, but the number of sources increases in the same manner. Each shell therefore produces the same amount of light at the observer, regardless of the value of r. Adding up the total light received from all the shells, therefore, produces an infinite answer.

In mathematical form, this is

I = \int_{0}^{\infty} I(r) n dV =  \int_{0}^{\infty} \frac{L}{4\pi r^2} 4\pi r^{2} n dr \rightarrow \infty

where L is the luminosity of a source, n is the number density of sources and I(r) is the intensity of radiation received from a source at distance r.
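You can watch the divergence happen numerically: each thin shell contributes the same amount L n dr, so truncating the integral at radius R gives an intensity proportional to R. A quick Python sketch (with L and n set to 1 purely for illustration):

```python
import math

def intensity(R, L=1.0, n=1.0, dr=0.01):
    """Sum the light from concentric shells out to radius R.
    Each shell contributes (L / 4 pi r^2) * (n * 4 pi r^2 * dr) = L n dr,
    independent of r, so the total grows linearly with the cutoff R."""
    total, r = 0.0, dr
    while r <= R:
        flux_per_source = L / (4 * math.pi * r ** 2)
        sources_in_shell = n * 4 * math.pi * r ** 2 * dr
        total += flux_per_source * sources_in_shell
        r += dr
    return total

print(intensity(10), intensity(100))  # the second is ten times the first
```

Double the cutoff and you double the sky brightness: there is no convergence as R → ∞.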

In fact the answer is not going to be infinite in practice because nearby stars will block out some of the light from stars behind them. But in any case the sky should be as bright as the surface of a star like the Sun, as each line of sight will eventually end on a star. This is emphatically not what is observed.

It might help to think of this in another way, by imagining yourself in a very large forest. You may be able to see some way through the gaps in the nearby trees, but if the forest is infinite every possible line of sight will end with a tree.

As is the case with many other famous names, this puzzle was not actually first discussed by Olbers. His discussion was published relatively recently, in 1826. In fact, Thomas Digges struggled with this problem as early as 1576. At that time, however, the mathematical technique of adding up the light from an infinite set of narrow shells, which relies on the integral calculus, was not known. Digges therefore simply concluded that distant sources must just be too faint to be seen, and did not worry about the problem of the number of sources. Johannes Kepler was also interested in this problem, and in 1610 he suggested that the Universe must be finite in spatial extent. Edmond Halley (of cometary fame) also addressed the issue about a century later, in 1720, but did not make significant progress. The first discussion that would nowadays be regarded as a correct formulation of the problem was published in 1744, by Loys de Chéseaux. Unfortunately, his resolution was not correct either: he imagined that intervening space somehow absorbed the energy carried by light on its path from source to observer. Olbers himself came to a similar conclusion in the piece that forever associated his name with this cosmological conundrum.

Later students of this puzzle included Lord Kelvin, who speculated that the extra light may be absorbed by dust. This is no solution to the problem either because, while dust may initially simply absorb optical light, it would soon heat up and re-radiate the energy at infra-red wavelengths. There would still be a problem with the total amount of electromagnetic radiation reaching an observer. To be fair to Kelvin, however, at the time of his writing it was not known that heat and light were both forms of the same kind of energy and it was not obvious that they could be transformed into each other in this way.

To show how widely Olbers’ paradox was known in the nineteenth century, it is worth also mentioning that Friedrich Engels, Manchester factory owner and co-author with Karl Marx of the Communist Manifesto, also considered it in his book The Dialectics of Nature. In this discussion he singles out Kelvin for particular criticism, mainly for the reason that Kelvin was a member of the aristocracy.

In fact, probably the first inklings of a correct resolution of the Olbers’ Paradox were contained not in a dry scientific paper, but in a prose poem entitled Eureka published in 1848 by Edgar Allan Poe. Poe’s astonishingly prescient argument is based on the realization that light travels with a finite speed. This in itself was not a new idea, as it was certainly known to Newton almost two centuries earlier. But Poe did understand its relevance to Olbers’ Paradox.  Light just arriving from distant sources must have set out a very long time ago; in order to receive light from them now, therefore, they had to be burning in the distant past. If the Universe has only lasted for a finite time then one can’t add shells out to infinite distances, but only as far as the distance given by the speed of light multiplied by the age of the Universe. In the days before scientific cosmology, many believed that the Universe had to be very young: the biblical account of the creation made it only a few thousand years old, so the problem was definitely avoided.

Of course, we are now familiar with the ideas that the Universe is expanding (and that light is consequently redshifted), that it may not be infinite, and that space may not be Euclidean. All these factors have to be taken into account when one calculates the brightness of the sky in different cosmological models. But the fundamental reason why the paradox is not a paradox does boil down to the finite lifetime, not necessarily of the Universe, but of the individual structures that can produce light. According to the theory of Special Relativity, mass and energy are equivalent. If the density of matter is finite, so therefore is the amount of energy it can produce by nuclear reactions. Any object that burns matter to produce light can therefore only burn for a finite time before it fizzles out.

Imagine that the Universe really is infinite. For all the light from all the sources to arrive at an observer at the same time (i.e. now) they would have to have been switched on at different times – those furthest away sending their light towards us long before those nearby had switched on. To make this work we would have to be at the centre of a carefully orchestrated series of luminous shells switching on and off in sequence in such a way that their light all reached us at the same time. This would not only put us in a very special place in the Universe but also require the whole complicated scheme to be contrived to make our past light cone behave in this peculiar way.

With the advent of the Big Bang theory, cosmologists got used to the idea that all of matter was created at a finite time in the past anyway, so Olbers’ Paradox receives a decisive knockout blow, but it was already on the ropes long before the Big Bang came on the scene.

As a final remark, it is worth mentioning that although Olbers’ Paradox no longer stands as a paradox, the ideas behind it still form the basis of important cosmological tests. The brightness of the night sky may no longer be feared infinite, but there is still expected to be a measurable glow of background light produced by distant sources too faint to be seen individually. In principle, in a given cosmological model and for given assumptions about how structure formation proceeded, one can calculate the integrated flux of light from all the sources that can be observed at the present time, taking into account the effects of redshift, spatial geometry and the formation history of sources. Once this is done, one can compare the predicted light levels with observational limits on the background glow in certain wavebands, which are now quite strict.

Heart of Darkness

Posted in Astrohype, The Universe and Stuff on March 6, 2012 by telescoper

Now here’s a funny thing. I’ve been struggling to keep up with matters astronomical recently owing to pressure of other things, but I couldn’t resist a quick post today about an interesting object, a galaxy cluster called Abell 520. New observations of this complex system – which appears to involve a collision between two smaller clusters, hence its nickname “The Train Wreck Cluster” – have led to a flurry of interest all over the internet, because the dark matter in the cluster isn’t behaving entirely as expected. Here is the abstract of the paper (by Jee et al., now published in the Astrophysical Journal):

We present a Hubble Space Telescope/Wide Field Planetary Camera 2 weak-lensing study of A520, where a previous analysis of ground-based data suggested the presence of a dark mass concentration. We map the complex mass structure in much greater detail leveraging more than a factor of three increase in the number density of source galaxies available for lensing analysis. The “dark core” that is coincident with the X-ray gas peak, but not with any stellar luminosity peak is now detected with more than 10 sigma significance. The ~1.5 Mpc filamentary structure elongated in the NE-SW direction is also clearly visible. Taken at face value, the comparison among the centroids of dark matter, intracluster medium, and galaxy luminosity is at odds with what has been observed in other merging clusters with a similar geometric configuration. To date, the most remarkable counter-example might be the Bullet Cluster, which shows a distinct bow-shock feature as in A520, but no significant weak-lensing mass concentration around the X-ray gas. With the most up-to-date data, we consider several possible explanations that might lead to the detection of this peculiar feature in A520. However, we conclude that none of these scenarios can be singled out yet as the definite explanation for this puzzle.

Here’s a pretty picture in which the dark matter distribution (inferred from gravitational lensing measurements) is depicted by the bluey-green colours; it seems to be more concentrated in the middle of the picture than the galaxies are, although the whole thing is clearly in a rather disturbed state:

Credit: NASA, ESA, CFHT, CXO, M.J. Jee (University of California, Davis), and A. Mahdavi (San Francisco State University)

The three main components of a galaxy cluster are: (i) its member galaxies; (ii) an extended distribution of hot X-ray emitting gas; and (iii) a dark matter halo. In a nutshell, the main finding of this study is that the dark matter seems to be stuck in the middle of the cluster with the X-ray gas, while the visible galaxies seem to be sloshing about all over the place.

No doubt there will be people jumping to the conclusion that this cluster proves that the theory of dark matter is all wrong, but I think that it simply demonstrates that this is a complicated object and we don’t really understand what’s going on. The paper gives a long list of possible explanations, but there’s no way of knowing at the moment which (if any) is correct.

The Universe is like that. Most of it is a complete mess.

Fairytale Physics

Posted in The Universe and Stuff with tags , on March 1, 2012 by telescoper

It’s been far too long since I last posted an example from the Vault of Vixra, but I’m glad that my research students are keeping sufficiently up to date with developments that they’ve got time to pass on news of particularly exciting papers.

Today Geraint drew my attention to this one, with the following promising-looking abstract:

Answers to ten simple questions reveals that the standard theory of physics defies logic or reason similar to the fairy tales.

Here’s an example question:

Q02: What actually happens when heavy atom was split into two lighter atoms in fission?
Ans: Fission is splitting the atom of a heavy element into the atoms of lighter elements. The underlying process expands the uranium nucleus; as a result a certain amount of energy will be released. Expansion of the matter releases the energy and the resultant products measure less mass. Compressed material contains more energy and measures more gravity. We observe the effect of mass deficit only when an object expands in size [1, 2].

Hmmm. The other 9 are almost as good. You can download the whole paper here.

Coincidentally, I gave a lecture this morning about nuclear fission. If only I’d known then that the standard theory was so wrong I wouldn’t have been forced to spend the best part of an hour struggling to find a whiteboard marker that worked.
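For the record, the standard theory says nothing about “expansion of the matter” releasing energy: the energy released in fission corresponds to a mass deficit via E = mc². A minimal back-of-envelope sketch, using the textbook ballpark figure of roughly 200 MeV released per uranium-235 fission (a standard value, not taken from the quoted paper):

```python
# Mass deficit implied by the ~200 MeV released per U-235 fission,
# via the standard relation E = m c^2.

C = 2.998e8           # speed of light, m/s
MEV_TO_J = 1.602e-13  # joules per MeV
AMU = 1.661e-27       # atomic mass unit, kg

energy_mev = 200.0                    # typical energy release per fission
energy_j = energy_mev * MEV_TO_J      # about 3.2e-11 J
mass_deficit_kg = energy_j / C**2     # delta-m = E / c^2
mass_deficit_u = mass_deficit_kg / AMU

print(f"Mass deficit: {mass_deficit_kg:.2e} kg = {mass_deficit_u:.3f} u")
# roughly 0.2 atomic mass units, i.e. about 0.1% of a uranium nucleus
```

The fission products really do weigh measurably less than the original nucleus; no “compression” or “expansion” of matter required.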

Neutrino Timing Glitch?

Posted in The Universe and Stuff with tags , , , , on February 23, 2012 by telescoper

You may recall the kerfuffle last September when physicists connected with the OPERA experiment at the Gran Sasso National Laboratory in Italy produced a paper suggesting that neutrinos might travel at speeds greater than that of light. I posted on that story myself and even composed a poem specially for the occasion at no extra charge:

Do neutrinos go faster than light?
Some physicists think that they might.
In the cold light of day,
I am sorry to say,
The story is probably shite

Well, news began to break last night that OPERA scientists had identified an error. The first story I read was a bit shaky on the question of attribution, so I decided to sleep on it and see whether anything emerged that seemed sounder before posting on here. Later on last night an item in Nature News appeared which looks a bit better grounded:

But according to a statement OPERA began circulating today, two possible problems have now been found with its set-up. As many physicists had speculated might be the case, both are related to the experiment’s pioneering use of Global Positioning System (GPS) signals to synchronize atomic clocks at each end of its neutrino beam. First, the passage of time on the clocks between the arrival of the synchronizing signal has to be interpolated and OPERA now says this may not have been done correctly. Second, there was a possible faulty connection between the GPS signal and the OPERA master clock.

We should wait for a more definitive announcement from OPERA about these possible errors, but if it does turn out that technical glitches are responsible for the neutrino speed result then it won’t be entirely unexpected. A faulty cable connection does sound a bit lame, however. I hope they weren’t relying on a USB connection….

Anyway, as I mentioned in a comment elsewhere, the arXiv paper from OPERA has now received about 230 citations, although it has not appeared in a refereed journal. If it turns out to have been a completely wrong result, what does that tell you about the use of citations to measure “quality”?

UPDATE: There is now an official press release from CERN, confirming the unofficial reports mentioned above:

The OPERA collaboration has informed its funding agencies and host laboratories that it has identified two possible effects that could have an influence on its neutrino timing measurement. These both require further tests with a short pulsed beam. If confirmed, one would increase the size of the measured effect, the other would diminish it. The first possible effect concerns an oscillator used to provide the time stamps for GPS synchronizations. It could have led to an overestimate of the neutrino’s time of flight. The second concerns the optical fibre connector that brings the external GPS signal to the OPERA master clock, which may not have been functioning correctly when the measurements were taken. If this is the case, it could have led to an underestimate of the time of flight of the neutrinos. The potential extent of these two effects is being studied by the OPERA collaboration. New measurements with short pulsed beams are scheduled for May.

Brian Cox up the Exclusion Principle

Posted in The Universe and Stuff with tags , , on February 22, 2012 by telescoper

I know a few students of Quantum Mechanics read this blog so here’s a little challenge. View the following video segment featuring Sir Brian of Cox and see if you can spot the deliberate (?) mistake contained therein on the subject of the Pauli Exclusion Principle.

When you’ve made up your mind, you can take a peek at the objection that’s been exercising armchair physicists around the twittersphere, and also a more technical argument supporting Prof. Cox’s interpretation from a university in the Midlands.

UPDATE: 23/2/2012 Meanwhile, over the pond, Sean Carroll is on the case.