Archive for Physics

Flame Academy

Posted in Biographical, The Universe and Stuff on September 2, 2009 by telescoper

I heard on the radio this morning from that nice Mr Cowan that today is the anniversary of the start of the Great Fire of London, which burned for four days in 1666. That provides a bit of delayed synchronicity with yesterday’s post about the dreadful fires on the outskirts of Los Angeles and a similar conflagration in Athens (which now thankfully appears to be under control).

Fires are of course terrifying phenomena, and it must be among most people’s nightmares to be caught in one. The Cambridge physicist Steve Gull experienced this at first hand when his boat exploded and caught fire recently. I’ll take this opportunity to wish him a speedy recovery from his injuries.

But frightening as such happenings are, a flame (the visible, light-emitting part of a fire) can also be a very beautiful and fascinating spectacle. Flames are stable, long-lived phenomena involving combustion, in which a “fuel”, often some kind of hydrocarbon, reacts with an oxidizing agent which, in the case of natural wildfires at any rate, is usually oxygen. Along the way, however, many intermediate radicals are generated, and the self-sustaining nature of the flame is maintained by intricate reaction kinetics.

The shape and colour of a flame are determined not just by its temperature but also, in a complicated way, by diffusion, convection and gravity. In a diffusion flame, the fuel and the oxidizing agent diffuse into each other and the rate of diffusion consequently limits the rate at which the flame spreads. Usually combustion takes place only at the edge of the flame: the interior contains unburnt fuel. A candle flame is usually relatively quiescent because the flow of material in it is predominantly laminar. At higher flow speeds, however, you can find turbulent flames, like the one in the picture below!

Sometimes convection carries some of the combustion products away from the source of the flame. In a candle flame, for example, incomplete combustion forms soot particles which are convected upwards and then incandesce inside the flame giving it a yellow colour. Gravity limits the motion of heavier products away from the source. In a microgravity environment, flames look very different!

All this stuff about flames also gives me the opportunity to mention the great Russian physicist Yakov Borisovich Zel’dovich. To us cosmologists he is best known for his work on the large-scale structure of the Universe, but he only started to work on that subject relatively late in his career during the 1960s.  He in fact began his career as a physical chemist and arguably his greatest contribution to science was that he developed the first completely physically based theory of flame propagation (together with Frank-Kamenetskii). No doubt he used insights gained from this work, together with his studies of detonation and shock waves, in the Soviet nuclear bomb programme in which he was a central figure.

But one thing even Zel’dovich couldn’t explain is why fires are such fascinating things to look at. I remember years ago having a fire in my back garden to get rid of garden rubbish. The more it burned the more things  I wanted to throw on it,  to see how well they would burn rather than to get rid of them. I ended up spending hours finding things to burn, building up a huge inferno, before finally retiring indoors, blackened with soot.

I let the fire die down, but it smouldered for three days.

A Unified Quantum Theory of the Sexual Interaction

Posted in The Universe and Stuff on May 20, 2009 by telescoper

Recent changes to the criteria for allocating research funding require particle physicists  and astronomers to justify the wider social, cultural and economic impact of their science. In view of the directive to engage in work more directly relevant to the person in the street, I’ve decided to share with you my latest results, which involve the application of ideas from theoretical physics in the wider field of human activity. That is, if you’re one of those people who likes to have sex in a field.

In the simplest theories of the sexual interaction, the eigenstates of the Hamiltonian describing all allowed forms of two-body coupling are identified with the conventional gender states, “Male” and “Female”, denoted |M> and |F> in the Dirac bra-ket notation; note that the bra is superfluous in this context so, as usual, we dispense with it at the outset. Interactions between |M> and |F> states are assumed to be attractive, while those between |M> and |M> or |F> and |F> are supposed either to be repulsive or, in some theories, entirely forbidden.

Observational evidence, however, strongly  suggests that two-body interactions involving either F-F or M-M coupling, though suppressed in many  situations, are by no means ruled out  in the manner one would expect from the simplest theory outlined above. Furthermore, experiments indicate that the relevant channel for M-M interactions appears to have a comparable cross-section to that of the standard M-F variety, so a similar form of tunneling is presumably involved. This suggests that a more complete theory could be obtained by a  relatively simple modification of the  version presented above.

Inspired by the recent Nobel prize awarded for the theory of quark mixing, we are now able to present a new, unified theory of the sexual interaction. In our theory the “correct” eigenstates for sexual behaviour are not the conventional |M> and |F> gender states but linear combinations of the form

|M>=cosθ|S> + sinθ|G>

|F>=-sinθ|S> + cosθ|G>

where θ is the Cabibbo mixing angle or, more appropriately in this context, the sexual orientation (measured in degrees). Extension to three states is in principle possible (but a bit complicated) and we will not discuss this issue further.

In this theory each |M> or |F> state is regarded as a linear combination of heterosexual (straight, S)  and homosexual (gay, G) states represented by a rotation of the basis by an angle θ, exactly the same mechanism that accounts for the charge-changing weak interactions between quarks.

For a purely heterosexual state this angle is zero, in which case we recover the simple theory outlined above. At θ=90° only the G component manifests itself; in this state only classically forbidden interactions are permitted. The general state, however, is one with a value of the orientation angle somewhere between these two limits, and this permits all forms of interaction, at least with some probability.
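
For the avoidance of doubt, here is a minimal numerical sketch of the mixing (in Python; the orientation angle is, of course, a free parameter to be determined by experiment, and 30° is purely illustrative):

```python
import numpy as np

def mixed_state(theta_deg):
    """Amplitudes of |M> = cos(theta)|S> + sin(theta)|G> in the (S, G) basis."""
    theta = np.radians(theta_deg)
    return np.array([np.cos(theta), np.sin(theta)])

for theta_deg in (0.0, 30.0, 90.0):
    amp_S, amp_G = mixed_state(theta_deg)
    # probabilities of straight and gay interactions; they always sum to unity
    print(f"theta = {theta_deg:5.1f} deg: P(S) = {amp_S**2:.3f}, P(G) = {amp_G**2:.3f}")
```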

Note added in proof:  the |G> states do not appear in standard QFT but are motivated by some versions of string theory, especially those involving G-strings.

One immediate consequence of this theory is that a “pure” gender state should generally be regarded as a quantum superposition of “straight” and “gay” states. This differs from a classical theory in that the true state cannot be known with certainty; only the relative frequency of straight and gay behaviour (over a large number of interactions) can be predicted, perhaps explaining the large number of married men to be found on gaydar. The state at any given time is thus entirely determined by a sum over histories up to that moment, taking into account the appropriate action. In the Copenhagen interpretation, collapse one way or another occurs only when a measurement is made (or when enough Carlsberg is drunk).

If there is a difference in energy between the basis states, a pure |M> state can oscillate between |S> and |G> according to a time-dependent phase factor arising when the two states interfere with each other:

|M(t)> = cosθ|S> exp(−iE₁t) + sinθ|G> exp(−iE₂t);

(obviously we are using natural units here, so that it all looks cleverer than it actually is). This equation is the origin of the expressions  “it’s just a phase he’s going through” and “he swings both ways”. In physics parlance this means that the eigenstates of the sexual interaction do not coincide with the conventional gender types, indicating that sexual behaviour is not necessarily time-invariant for a given body.
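
Purely as a sketch (the values of θ, E₁ and E₂ here are invented), the survival probability implied by this phase factor is the standard two-state oscillation formula:

```python
import numpy as np

def survival_probability(t, theta, E1, E2):
    """|<M|M(t)>|^2 for |M(t)> = cos(theta)|S>exp(-iE1 t) + sin(theta)|G>exp(-iE2 t).

    Projecting back onto |M> gives P = 1 - sin^2(2 theta) sin^2((E2 - E1) t / 2),
    exactly as in neutrino oscillations; note there is no oscillation if E1 == E2.
    """
    amplitude = (np.cos(theta)**2 * np.exp(-1j * E1 * t)
                 + np.sin(theta)**2 * np.exp(-1j * E2 * t))
    return np.abs(amplitude)**2

t = np.linspace(0.0, 2.0 * np.pi, 7)
print(survival_probability(t, theta=np.pi / 6, E1=1.0, E2=2.0))
```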

Whether single-body phenomena (i.e. self-interactions) can provide insights into this theory  depends, as can be seen from the equation,  on the energies of the relevant states (as is also the case  in neutrino oscillations). If they are equal then there is no oscillation. However,  a detailed discussion of the role of degeneracy is beyond the scope of this analysis.

Self-interactions involving a solitary phase are generally difficult to observe, although examples have been documented that involve short-lived but highly excited states accompanied by various forms of stimulated emission. Unfortunately, however, the resulting fluxes are not often well measured. This form of interaction also appears to be the current preoccupation of string theorists.

More definitive evidence for the theory might emerge from situations involving some form of entanglement, such as in the examples of M-M and F-F coupling mentioned above. Non-local interactions of a sexual type are possible in principle, but causality and simultaneity issues exist and most researchers consequently prefer to focus on local interactions, which are generally supposed to be more satisfactory from the point of view of reproducibility.

Although the theory is qualitatively successful we need more experimental data to pin down the parameters needed for a robust fit. It is not known, for example, whether the rates of M-M and F-F coupling are similar or, indeed, whether the peak intensity of these interactions, when resonance is reached, is similar to those of the standard M-F form. It is generally accepted, however, that the rate of decay from peak intensity is rather slower for processes involving |F> states than for |M>, which is not so easy to model in this theory, although with a bit of renormalization we can probably explain anything.

Answers to these questions can perhaps be gleaned from observations of many-body processes  (i.e. those with N≥3),  especially if they involve a multiplicity of hardon states (i.e. collective excitations). Only these permit a full exploration of all possible degrees of freedom, although higher-order Feynman diagrams are needed to depict them and they require more complicated group theoretical techniques.  Examples like the one  shown above  – representing a threesome – are not well understood, but undoubtedly contribute significantly to the bi-spectrum.

One might also speculate that in these and other highly excited states,  the sexual interaction may be described by something more like the  electroweak theory in which all forms of interaction occur in a much more symmetric fashion and at much higher rates than at lower energies. That sounds like some kind of party…

It is worth remarking that there may be finer structure than this model takes into account. For example, the |G> state is generally associated with singlet configurations like those shown on the right. However, G-G coupling is traditionally described in terms of “top” |t> and “bottom” |b> states, with b-t coupling the preferred mode, leading to the possibility of doublets or even triplets. It may even prove necessary to introduce a further mixing angle φ of the form

|G>=cosφ |t> + sinφ |b>

so that the general state of |G>  is “versatile”. However, whether G-G interactions can be adequately described even in this extended theory is a matter for debate until the intensity of t-t and b-b  coupling is more accurately measured.

Finally, we should like to point out the difference between our model and that of the usual quark sextet, in which interacting states are described in terms of three pairs: the bottom (b) and top (t) which we have mentioned already; the strange (s) and charmed (c); and the up (u) and down (d). While it is clear that |b> and |t> do exhibit strong interactions and it appears plausible that |s> and |c> might do likewise, the sexual interaction clearly breaks the isospin symmetry between the |u> and the |d> in both M-M and M-F cases. The “up” state is definitely preferred in all forms of coupling and, indeed, the “down” has only ever been known to engage in weak interactions.

We have recently submitted an application to the Science and Technology Facilities Council for a modest sum (£754 million) to build a large-scale  UK facility  in order to carry out hands-on experimental tests of some aspects of the theory. We hope we can rely on the support of the physics community in agreeing to close down their labs and quit their jobs in order to release the funding needed to support it.

How Loud was the Big Bang?

Posted in The Universe and Stuff on April 26, 2009 by telescoper

The other day I was giving a talk about cosmology at Cardiff University’s Open Day for prospective students. I was talking, as I usually do on such occasions, about the cosmic microwave background, what we have learnt from it so far and what we hope to find out from it from future experiments, assuming they’re not all cancelled.

Quite a few members of staff listened to the talk too and, afterwards, some of them expressed surprise at what I’d been saying, so I thought it would be fun to try to explain it on here in case anyone else finds it interesting.

As you probably know the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

The above image shows the variations in temperature of the cosmic microwave background as charted by the Wilkinson Microwave Anisotropy Probe about five years ago. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound (L) logarithmically in terms of the rms pressure level of the sound wave, P_rms, relative to some reference pressure level P_ref:

L = 20 log₁₀(P_rms/P_ref)

(the 20 appears because the energy carried by the wave goes as the square of its amplitude; in terms of energy the factor would be 10).
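
In code this is a one-liner (a trivial sketch; any consistent pressure units will do):

```python
import math

def loudness_db(p_rms, p_ref):
    """Sound level L = 20 log10(P_rms / P_ref), in decibels."""
    return 20.0 * math.log10(p_rms / p_ref)

print(loudness_db(20e-6, 20e-6))  # a wave at the reference level gives 0 dB
```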

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10⁻¹⁰ times the ambient atmospheric air pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order, and these consequently have L = 0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

P_ref ~ 2×10⁻¹⁰ P_amb

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, and the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes, so it all gets a bit messy if you want to do it exactly, but it’s quite easy to get a rough estimate. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the average temperature variation is of the average CMB temperature, i.e.

P_rms ~ a few × 10⁻⁵ P_amb

If we do this, the ambient pressure cancels out in the ratio P_rms/P_ref, which comes out at around 10⁵.

[Figure: an audiogram chart of typical sound levels, showing the “speech banana” and the threshold of pain]

With our definition of the decibel level we find that waves corresponding to pressure variations of one part in a hundred thousand of the ambient level give roughly L = 100 dB, while one part in ten thousand gives about L = 120 dB. The sound of the Big Bang therefore peaks at levels just over 110 dB. As you can see in the Figure above, this is close to the threshold of pain, but it’s perhaps not as loud as you might have guessed in response to the initial question. Many rock concerts are actually louder than the Big Bang, at least near the speakers!

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure itself. This would give a factor of about 10¹⁰ inside the logarithm and is pretty much the limit at which sound waves can propagate without distortion. Such waves would have L ≈ 190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.
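
Putting the numbers into the decibel formula confirms these estimates (a sketch working in units of the ambient pressure, with “a few” taken to be 3):

```python
import math

P_REF = 2e-10  # reference level, as a fraction of the ambient pressure
for p_rms in (1e-5, 3e-5, 1e-4, 1.0):
    level = 20.0 * math.log10(p_rms / P_REF)
    print(f"P_rms = {p_rms:g} P_amb  ->  L = {level:.0f} dB")
# prints roughly 94, 104, 114 and 194 dB: the ~100 dB, just-over-110 dB,
# ~120 dB and ~190 dB levels quoted above
```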

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of  a “Roar” than a “Bang” because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

Budget Boost?

Posted in Science Politics on April 19, 2009 by telescoper

This Wednesday (22nd April 2009) the Chancellor of the Exchequer, Alistair Darling, will deliver the UK government’s budget for this year. The background is of course the economic recession and the consequent collapse of our public finances. The government will have to borrow an estimated £175 billion over the next year, and it is likely that taxes will eventually have to rise considerably to balance the books in the longer term.

Rumours abound about what will be in the budget and what won’t. According to today’s Observer, the centrepiece is likely to be a £50 billion scheme to revitalize the housing market. If this is the case then I think it’s a mistake. Our economy has been run for too long on the basis of money raised from inflated property valuations, and we need to take this opportunity to change to a more sustainable way of running the country. Other schemes that may emerge include a £2 billion scheme to help unemployed young people, which is a better idea, but much of it would probably be wasted on bureaucracy rather than doing real good.

My own attention will be focussed on whether there is anything in Alistair Darling’s speech that indicates some help for science, particularly fundamental science like physics and astronomy. In yesterday’s Guardian the Astronomer Royal and President of the Royal Society, Lord Martin Rees, argued for an injection of cash to stimulate science and innovation. About a month ago the BBC reported on efforts by Ministers to convince the Treasury of the benefit of a £1 billion stimulus package for science along these lines. However, even if the powers that be listen to this argument (which is, in my view, unlikely), any increase in science funding would not necessarily be directed towards fundamental physics. I think if there isn’t anything for those of us working in astronomy in this budget, then we’re completely screwed.

I believe the funding crisis at the Science & Technology Facilities Council (STFC) was precipitated by a conscious government decision to move funds away from blue-skies research and into more applied, technology-driven areas. The 2007 Comprehensive Spending Review was extremely tough on STFC but quite generous to some other agencies. Moreover, within STFC itself there seems to be a shift from science-driven to technology-driven projects, signalled by the cancellation of projects such as Clover to save a couple of million, and the allocation of funds to projects such as Moonlite, which is devoid of any scientific interest and which could end up costing as much as £150 million over the next five years or so.

The true depth of the ongoing STFC crisis is only gradually becoming apparent. It was bad enough to start with, but has been exacerbated by the fall in value of sterling against the euro since 2007, which has meant that the cost of subscriptions to CERN, ESA and ESO has risen dramatically (by about 40%). These subscriptions form such a large part of STFC’s expenditure – the CERN subscription alone is £70m out of a total budget of around £800m – that it cannot absorb the increased cost, and it is now looking to make swingeing cuts on top of the 25% cut in research grants already implemented.

News emerged last week that STFC has abandoned plans to fund any R&D grants for ESA’s Cosmic Vision programme, and there are dark rumours circulating that it is considering cancelling all astronomy grants this year as well as clawing back money already given to universities in previous rounds. I hope these are not true, but I fear the worst.

Cuts on this scale would be devastating, demoralising, and I honestly think would destroy the United Kingdom as a place to do astronomy. They would also signal a complete breakdown of trust between scientists and the research council that is supposed to support them, if that hadn’t happened already.

Incidentally, it is noticeable that STFC hasn’t bothered to report any of these matters publicly through its website. Instead, the lead story on the STFC news page is about a visit by Prince Andrew to the Rutherford Appleton Lab. No sign yet, then, of the promised improvement in communication between the STFC Executive and its community.

The way I see it, the urgent issue is not whether we get a stimulus package, but whether we even get the bit of sticking plaster that is needed to save physics and astronomy from utter ruin. The cost would be a small fraction of the billions lavished on profligate bankers, but I’m not at all sure that the government either appreciates or cares about the scale of the problem.

Anyway, coincidentally, next week sees the Royal Astronomical Society’s National Astronomy Meeting (NAM), which is this year held jointly with the European Astronomical Society’s JENAM at the University of Hertfordshire. I won’t be going, because it has unfortunately been organized in term time, apparently because European astronomers refuse to attend meetings in the vacations, at least if they’re in places like Hatfield. STFC representatives have been invited; it remains to be seen what, if anything, they will have to say.

Perception, Piero and Pollock

Posted in Art, The Universe and Stuff on April 15, 2009 by telescoper

For some unknown reason I’ve just received an invitation to a private view at a small art gallery that’s about ten minutes’ walk from my house. Cocktails included. I shall definitely go and will blog about it next week. I’m looking forward to it already.

This invitation put me in an artistic frame of mind so, to follow up my post on randomness (and the corresponding parallel version on cosmic variance), I thought I’d develop some thoughts about the nature of perception and the perception of nature.

This famous painting is The Flagellation of Christ, by Piero della Francesca. I actually saw it many years ago on one of my many trips to Italy; it’s in an art gallery in Urbino. The first thing that strikes you when you see it is actually that the painting is surprisingly small (about 60cm by 80cm). However, that superficial reaction aside, the painting draws you into it in a way which few other works of art can. The composition is complicated and mathematically precise, but the use of linear perspective is sufficiently straightforward that your eye can quickly understand the geometry of the space depicted and locate the figures and actions within it. The Christ figure is clearly in the room to the left rear and the scene is then easily recognized as part of the story leading up to the crucifixion.

That’s what your eye always seems to do first when presented with a figurative representation: sort out what’s going on and fill in any details it can from memory and other knowledge.

But once you have made sense of the overall form, your brain immediately bombards you with questions. Who are the three characters in the right foreground? Why aren’t they paying attention to what’s going on indoors? Who is the figure with his back to us? Why is the principal subject so far in the background? Why does everyone look so detached? Why is the light coming from two different directions (from the left for the three men in the foreground but from the right for those in the interior)? Why is it all staged in such a peculiar way? And so on.

These unresolved questions lead you to question whether this is the straightforward depiction first sight led you to think it was. It’s clearly much more than that. Deeply symbolic, even cryptic, its effect on the viewer is eerie and disconcerting. It has a dream-like quality. The individual elements of the painting add up to something, but the full meaning remains elusive. You feel there must be something you’re missing, but can’t find it.

This is such an enigmatic picture that it has sparked some extremely controversial interpretations, some of which are described in an article in the scientific journal Nature. I’m not going to pretend to know enough to comment on the theories, except to say that some of them at least must be wrong. They are, however, natural consequences of our brain’s need to impose order on what it sees. The greatest artists know this, of course. Although it sometimes seems like they might be playing tricks on us just for fun, part of what makes art great is the way it gets inside the process of perception.

Here’s another example from quite a different artist.

This one is called Lavender Mist. It’s one of the “action paintings” made by the influential American artist Jackson Pollock. This, and many of the other paintings of its type, also get inside your head in quite a disconcerting way but it’s quite a different effect to that achieved by Piero della Francesca.

This is an abstract painting, but that doesn’t stop your eyes seeking within it some sort of point of reference to make geometrical sense of it. There’s no perspective to draw you into it, so you look for clues to the depth in the layers of paint. Standing in front of one of these very large works – I find they don’t work at all in reduced form, like on the screen in front of you now – you find your eyes constantly shifting around, following lines here and there, trying to find recognizable shapes and to understand what is there in terms of other things you have experienced, either in the painting itself or elsewhere. Any order you can find, however, soon becomes lost. Small-scale patterns dissolve away into a sea of apparent confusion. Your brain tries harder, but is doomed. One of the biggest problems is that your eyes keep focussing and unfocussing to look for depth and structure. It’s almost impossible to stop yourself doing it. You end up dizzy.

I don’t know how Pollock came to understand exactly how to make his compositions maximally disorienting, but he seems to have done so. Perhaps he had a deep instinctive understanding of how the eye copes with the interaction of structures on different physical scales. I find you can see this to some extent even in the small version of the picture on this page. Deliberately blurring your vision makes different elements stand out and then retreat, particularly the large darkish streak that lies to the left of centre at a slight angle to the vertical.

This artist has also been the subject of interest by mathematicians and physicists because his work seems to display some of the characteristic properties of fractal sets. I remember going to a very interesting talk a few years ago by Richard Taylor of the University of Oregon who claimed that fractal dimensions could be used to authenticate (or otherwise) genuine works by Pollock as he seemed to have his own unique signature.
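
I can’t reproduce Taylor’s analysis here, but the box-counting estimate of fractal dimension on which such claims rest is easy to sketch (assuming, hypothetically, that the painting has been reduced to a binary array of “painted” pixels):

```python
import numpy as np

def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a 2D boolean array by box counting.

    For each box size s, count the number N(s) of s-by-s boxes containing at
    least one painted pixel; the dimension is the slope of log N versus log(1/s).
    """
    counts = []
    for s in sizes:
        h, w = image.shape
        trimmed = image[:h - h % s, :w - w % s]  # trim so boxes tile exactly
        boxes = trimmed.reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity check: a completely filled canvas should have dimension close to 2
print(box_counting_dimension(np.ones((256, 256), dtype=bool)))
```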

I suppose what I’m trying to suggest is that there’s a deeper connection than you might think between the appreciation of art and the quest for scientific understanding.

Arrows and Demons

Posted in The Universe and Stuff on April 12, 2009 by telescoper

My recent post about randomness and non-randomness spawned a lot of comments over on cosmic variance about the nature of entropy. I thought I’d add a bit about that topic here, mainly because I don’t really agree with most of what is written in textbooks on this subject.

The connection between thermodynamics (which deals with macroscopic quantities) and statistical mechanics (which explains these in terms of microscopic behaviour) is a fascinating but troublesome area. James Clerk Maxwell did much to establish the microscopic meaning of the first law of thermodynamics, but he never tried to develop the second law from the same standpoint. Those who did were faced with a conundrum.

The behaviour of a system of interacting particles, such as the particles of a gas, can be expressed in terms of a Hamiltonian H which is constructed from the positions and momenta of its constituent particles. The resulting equations of motion are quite complicated because every particle, in principle, interacts with all the others. They do, however, possess a simple yet important property. Everything is reversible, in the sense that the equations of motion remain the same if one changes the direction of time and changes the direction of motion for all the particles. Consequently, one cannot tell whether a movie of atomic motions is being played forwards or backwards.
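
You can check this reversibility numerically. The toy sketch below (my own example, not any standard code) integrates a particle in an anharmonic potential with the velocity-Verlet scheme, which shares the exact time-reversal symmetry of the underlying equations: play the movie forwards, flip the velocity, play it again, and you arrive back at the starting point.

```python
def accel(x):
    # force from the anharmonic potential V(x) = x**2 / 2 + x**4 / 4
    return -x - x**3

def velocity_verlet(x, v, dt, n):
    """Integrate the equations of motion for n steps; exactly time-reversible."""
    for _ in range(n):
        v += 0.5 * dt * accel(x)
        x += dt * v
        v += 0.5 * dt * accel(x)
    return x, v

x1, v1 = velocity_verlet(1.0, 0.0, dt=0.01, n=100000)  # forwards in time
x0, v0 = velocity_verlet(x1, -v1, dt=0.01, n=100000)   # reverse all the motions
print(x0, -v0)  # recovers the initial state (1.0, 0.0) up to rounding error
```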

Hamiltonian dynamics has another important property, embodied in Liouville’s theorem: it preserves volume in phase space, so any probability distribution describing our knowledge of the system is simply carried along by the flow. The Gibbs entropy constructed from that distribution is therefore a constant of the motion: it neither increases nor decreases during Hamiltonian evolution.

But what about the second law of thermodynamics? This tells us that the entropy of a system tends to increase. Our everyday experience tells us this too: we know that physical systems tend to evolve towards states of increased disorder. Heat never passes from a cold body to a hot one. Pour milk into coffee and everything rapidly mixes. How can this directionality in thermodynamics be reconciled with the completely reversible character of microscopic physics?

The answer to this puzzle is surprisingly simple, as long as you use a sensible interpretation of entropy that arises from the idea that its probabilistic nature represents not randomness (whatever that means) but incompleteness of information. I’m talking, of course, about the Bayesian view of probability.

First you need to recognize that experimental measurements do not involve describing every individual atomic property (the “microstates” of the system), but large-scale average quantities like pressure and temperature (the “macrostates”). Appropriate macroscopic quantities are chosen by us because they allow us to describe the results of experiments and measurements in a robust and repeatable way. By definition, however, they involve a substantial coarse-graining of our description of the system.

Suppose we perform an idealized experiment that starts from some initial macrostate. This will generally be consistent with a number – probably a very large number – of initial microstates. As the experiment continues the system evolves along a Hamiltonian path, so the initial microstate will evolve into a definite final microstate. This is perfectly symmetrical and reversible. But the point is that we can never have enough information to predict exactly where in the final phase space the system will end up, because we haven’t specified all the details of which initial microstate we were in. Determinism does not in itself allow predictability; you need information too.

If we choose macro-variables so that our experiments are reproducible it is inevitable that the set of microstates consistent with the final macrostate will usually be larger than the set of microstates consistent with the initial macrostate, at least  in any realistic system. Our lack of knowledge means that the probability distribution of the final state is smeared out over a larger phase space volume at the end than at the start. The entropy thus increases, not because of anything happening at the microscopic level but because our definition of macrovariables requires it.

[Figure: microstates compatible with the initial macrostate evolving along Hamiltonian trajectories (narrow arrows) into the set compatible with the final macrostate]

This is illustrated in the Figure. Each individual microstate in the initial collection evolves into one state in the final collection: the narrow arrows represent Hamiltonian evolution.

However, given only a finite amount of information about the initial state, these trajectories can’t be as well defined as this. The set of final microstates therefore has to acquire a sort of “buffer zone” around the strictly Hamiltonian core; this is the only way to ensure that measurements on such systems will be reproducible.

The “theoretical” Gibbs entropy remains exactly constant during this kind of evolution, and it is precisely this property that requires the experimental entropy to increase. There is no microscopic explanation of the second law. It arises from our attempt to shoe-horn microscopic behaviour into a framework furnished by macroscopic experiments.
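
Here is a toy demonstration of the whole argument (a sketch of my own, with deliberately simple ingredients): particles free-streaming between reflecting walls are perfectly reversible at the microscopic level, yet the entropy of the coarse-grained (binned) position distribution still rises inexorably to its maximum.

```python
import numpy as np

rng = np.random.default_rng(1)
N, NBINS = 100000, 20

x0 = rng.uniform(0.0, 0.05, N)  # microstate: particles bunched into one corner...
v = rng.normal(0.0, 1.0, N)     # ...with Maxwellian velocities

def positions(t):
    """Free streaming in a unit box with reflecting walls (a triangle wave)."""
    return 1.0 - np.abs((x0 + v * t) % 2.0 - 1.0)

def coarse_entropy(x):
    """Shannon entropy of the binned (coarse-grained) position distribution."""
    p, _ = np.histogram(x, bins=NBINS, range=(0.0, 1.0))
    p = p[p > 0] / N
    return -(p * np.log(p)).sum()

for t in (0.0, 0.1, 0.5, 2.0, 10.0):
    print(f"t = {t:5.1f}  coarse-grained entropy = {coarse_entropy(positions(t)):.3f}")
# climbs from 0 towards log(20) ~ 3.0, even though reversing every velocity
# (v -> -v) would reconstruct the special initial state exactly
```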

Another, perhaps even more compelling demonstration of the so-called subjective nature of probability (and hence entropy) is furnished by Maxwell’s demon. This little imp first made its appearance in 1867 or thereabouts and subsequently led a very colourful and influential life. The idea is extremely simple: imagine we have a box divided into two partitions, A and B. The wall dividing the two sections contains a tiny door which can be opened and closed by a “demon” – a microscopic being “whose faculties are so sharpened that he can follow every molecule in its course”. The demon wishes to play havoc with the second law of thermodynamics so he looks out for particularly fast moving molecules in partition A and opens the door to allow them (and only them) to pass into partition B. He does the opposite thing with partition B, looking out for particularly sluggish molecules and opening the door to let them into partition A when they approach.

The net result of the demon’s work is that the fast-moving particles from A are preferentially moved into B and the slower particles from B are gradually moved into A. The net result is that the average kinetic energy of A molecules steadily decreases while that of B molecules increases. In effect, heat is transferred from a cold body to a hot body, something that is forbidden by the second law.
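
A crude simulation shows the demon doing its job (my own toy sketch: at each step the demon inspects one randomly chosen molecule at the door and applies the rule above):

```python
import numpy as np

rng = np.random.default_rng(0)

# identical Maxwellian gases in partitions A and B; entries are velocities
A = list(rng.normal(0.0, 1.0, 1000))
B = list(rng.normal(0.0, 1.0, 1000))
FAST = 1.0  # the demon's (arbitrary) threshold for a "fast" molecule

def mean_ke(gas):
    return 0.5 * np.mean(np.square(gas))

print(f"before: mean KE in A = {mean_ke(A):.3f}, in B = {mean_ke(B):.3f}")
for _ in range(20000):
    if rng.random() < 0.5:             # a molecule approaches the door from A
        i = rng.integers(len(A))
        if abs(A[i]) > FAST:
            B.append(A.pop(i))         # fast: open the door, A -> B
    else:                              # a molecule approaches the door from B
        i = rng.integers(len(B))
        if abs(B[i]) <= FAST:
            A.append(B.pop(i))         # slow: open the door, B -> A
print(f"after:  mean KE in A = {mean_ke(A):.3f}, in B = {mean_ke(B):.3f}")
# A has cooled and B has heated up, in apparent defiance of the second law
```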

All this talk of demons probably makes this sound rather frivolous, but it is a serious paradox that puzzled many great minds until it was resolved in 1929 by Leo Szilard. He showed that the second law of thermodynamics would not actually be violated if the entropy of the entire system (i.e. box + demon) increased by an amount of at least k ln 2 every time the demon measured the speed of a molecule so he could decide whether to let it out from one side of the box into the other. This amount of entropy is precisely enough to balance the apparent decrease in entropy caused by the gradual migration of fast molecules from A into B. This illustrates very clearly that there is a real connection between the demon’s state of knowledge and the physical entropy of the system.

By now it should be clear why there is some sense of the word subjective that does apply to entropy. It is not subjective in the sense that anyone can choose entropy to mean whatever they like, but it is subjective in the sense that it is something to do with the way we manage our knowledge about nature rather than about nature itself. I know from experience, however, that many physicists feel very uncomfortable about the idea that entropy might be subjective even in this sense.

On the other hand, I feel completely comfortable about the notion. I even think it’s obvious. To see why, consider the example I gave above about pouring milk into coffee. We are all used to the idea that the nice swirly pattern you get when you first pour the milk in is a state of relatively low entropy. The parts of the phase space of the coffee + milk system that contain such nice separations of black and white are few and far between. It’s much more likely that the system will end up in a “mixed” state. But then how well mixed the coffee is depends on your ability to resolve the size of the milk droplets. An observer with good eyesight would see less mixing than one with poor eyesight. And an observer who couldn’t perceive the difference between milk and coffee would see perfect mixing. In this case entropy, like beauty, is definitely in the eye of the beholder.

The refusal of many physicists to accept the subjective nature of entropy arises, as do so many misconceptions in physics, from the wrong view of probability.

Easter Physics Quiz

Posted in Uncategorized on April 10, 2009 by telescoper

Over the Easter holidays the newspapers seem to be full of quizzes and other distractions, so I thought I’d join in with a little quiz of my own.

So for a negligible prize can anyone point out the mathematical connection between these two pictures?

[Two pictures]

Answers via the comments box please.

Post Mortem

Posted in Science Politics on April 6, 2009 by telescoper

Finally, the full details of the Physics panel’s deliberations during the 2008 Research Assessment Exercise have been published in the form of sub-profiles, showing the breakdown of the overall scores into various components, including the ratings attached to “outputs” (i.e. papers), “environment” and “esteem”; for the jargon see the RAE guidelines for submissions.

 I’ve blogged about the RAE results before: here, there, elsewhere, et cetera and passim. Andy Lawrence (e-astronomer) has now written a blog post about the latest publications from HEFCE  (commenting on the Cardiff situation with a generosity that contrasts with the offensive attitude displayed by one of my former colleagues).  Andy has also produced a graph which makes for very interesting reading:

[Graph: RAE 2008 physics sub-profiles – output scores plotted against the other components for each department]

I’ve used my meagre graphical skills to indicate the location of Cardiff on the figure between the thick solid lines. Note the enormous gap between the panel’s assessment of our outputs (2.22) compared to the score for esteem (2.74).

I’ve mentioned before that apparently not a single one of the papers submitted by Cardiff’s excellent Astronomy Instrumentation Group was graded as 4* (world leading). Among the papers submitted by this group were several highly cited ones relating to an important Cosmic Microwave Background experiment called BOOMERANG. The panel probably judged that Cardiff hadn’t played a sufficiently prominent role in this collaboration to merit a 4*, which seems to be a completely perverse conclusion. The experiment wouldn’t have been possible at all without the Cardiff group.

Notwithstanding my disgruntlement at the particularly and peculiarly harsh assessment of Cardiff’s physics submission, there is also an indication of a more general problem. Notice how, at the top right, a large number of departments have an output score seriously lagging their other scores (by about 0.4 or more).

The counterexample to this trend is Loughborough, which has a very small but clearly good research activity in physics, and which scored 2.66 on its outputs but only 1.1 on environment. They are easily identified on the graph as an extreme outlier below the general trend.

Although there is no reason to expect a perfect correlation between the different elements of the overall assessment, it looks to me like the Physics panel decided to let the output score for the strong departments saturate at a level of about 2.8 whereas other panels were much more generous.

Why did they do this?

Answers on a postcard (or, better, via the comments box), please.

Clover and Out

Posted in Science Politics, The Universe and Stuff on March 31, 2009 by telescoper

One of the most exciting challenges facing the current generation of cosmologists is to locate in the pattern of fluctuations in the cosmic microwave background evidence for the primordial gravitational waves predicted by models of the Universe that involve inflation.

Looking only at the temperature variation across the sky, it is not possible to distinguish between tensor  (gravitational wave) and scalar (density wave) contributions  (both of which are predicted to be excited during the inflationary epoch).  However, scattering of photons off electrons is expected to leave the radiation slightly polarized (at the level of a few percent). This gives us additional information in the form of the  polarization angle at each point on the sky and this extra clue should, in principle, enable us to disentangle the tensor and scalar components.

The polarization signal can be decomposed into two basic types depending on whether the pattern has odd or even parity, as shown in the nice diagram (from a paper by James Bartlett).

The top row shows the E-mode (which looks the same when reflected in a mirror and can be produced by either scalar or tensor modes) and the bottom row shows the B-mode (which has a definite handedness that changes when mirror-reflected and which can’t be generated by scalar modes because they can’t produce patterns with odd parity).
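
For completeness: in the standard treatment the polarization pattern is expanded in spin-2 spherical harmonics, and the E and B multipoles are the even- and odd-parity combinations of the expansion coefficients. A sketch of the usual convention (sign conventions vary between papers):

```latex
(Q \pm iU)(\hat{n}) = \sum_{\ell m} a_{\pm 2,\ell m} \, {}_{\pm 2}Y_{\ell m}(\hat{n}),
\qquad
a_{E,\ell m} = -\tfrac{1}{2}\,(a_{2,\ell m} + a_{-2,\ell m}),
\qquad
a_{B,\ell m} = \tfrac{i}{2}\,(a_{2,\ell m} - a_{-2,\ell m}).
```

Under a parity flip the coefficients a_{2,ℓm} and a_{−2,ℓm} swap, so the E combination is unchanged while B changes sign – which is exactly the handedness property described above.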

The B-mode is therefore (in principle)  a clean diagnostic of the presence of gravitational waves in the early Universe. Unfortunately, however, the B-mode is predicted to be very small, about 100 times smaller than the E-mode, and foreground contamination is likely to be a very serious issue for any experiment trying to detect it.

An experiment called Clover (involving the Universities of  Cardiff, Oxford, Cambridge and Manchester) was designed to detect the primordial B-mode signal from its vantage point in Chile. You can read more about the way it works at the dedicated webpages here at Cardiff and at Oxford. I won’t describe it in more detail here, for reasons which will become obvious.

The chance to get involved in a high-profile cosmological experiment was one of the reasons I moved to Cardiff a couple of years ago, and I was looking forward to seeing the data arriving for analysis. Although I’m primarily a theorist, I have some experience in advanced statistical methods that might have been useful in analysing the output.  It would have been fun blogging about it too.

Unfortunately, however, none of that is ever going to happen. Because of its budget crisis, and despite the fact that it has spent a large amount (£4.5M) on it already,  STFC has just decided to withdraw the funding needed to complete it (£2.5M)  and cancel the Clover experiment.

Clover wasn’t the only B-mode experiment in the game. Its rivals include QUIET and SPIDER, both based in the States. It wasn’t clear that Clover would have won the race, but now that we know  it’s a non-runner  we can be sure it won’t.

Honoured amongst bloggers…

Posted in Uncategorized on March 25, 2009 by telescoper

I only have time for a quickie today as I have to spend this evening getting things together for my forthcoming trip to the Irish Republic for a talk in Dublin (which I’ll no doubt ramble on about when I get back).

I hear dark rumblings about the STFC financial crisis turning into a full-scale disaster owing to inept management, but I’ll refrain from going into details until it all becomes official. Suffice to say for now that, if you thought things were bad already, just watch this space…

Anyway, at least today brought some news that flattered my ego. Ian Douglas at the Daily Telegraph has seen fit to put this blog on his list of five great physics blogs. He’s obviously a man of great taste. Quite cute too. I’ll have to revise my opinion of the Daily Telegraph.

But no.

They have boring crosswords.