Archive for the The Universe and Stuff Category

A Random Walk

Posted in The Universe and Stuff with tags , , , , , on October 24, 2009 by telescoper

In 1905 Albert Einstein had his “year of miracles”, in which he published a series of papers that changed the course of physics. One of these is extremely famous: the paper that presented the special theory of relativity. Another was a paper on the photoelectric effect that led to the development of quantum theory. A third paper is not nearly so well known: it was about the theory of Brownian motion. In fact, Einstein spent an enormous amount of time and energy working on problems in statistical physics, something that isn’t as well appreciated these days as his work on the more glamorous topics of relativity and quantum theory.

 Brownian motion, named after the botanist Robert Brown,  is the perpetual jittering observed when small particles such as pollen grains are immersed in a fluid. It is now well known that these motions are caused by the constant bombardment of the grain by the fluid molecules. The molecules are too small to be seen directly, but their presence can be inferred from the visible effect on the much larger grain.

Brownian motion can be observed whenever  any relatively massive particles (perhaps large molecules) are immersed in a fluid comprising lighter particles. Here is a little video showing the Brownian motion observed by viewing smoke under a microscope. There is a small coherent “drift” motion in this example but superimposed on that you can clearly see the effect of gas atoms bombarding the (reddish) smoke particles:

The mathematical modelling of this process was pioneered by Einstein (and also Smoluchowski), but has now become a very sophisticated field of mathematics in its own right. I don’t want to go into too much detail about the modern approach for fear of getting far too technical, so I will concentrate on the original idea.

Einstein took the view that Brownian motion could be explained in terms of a type of stochastic process called a “random walk” (or sometimes a “random flight”). I think the first person to construct a mathematical model to describe this type of phenomenon was the statistician Karl Pearson. The problem he posed concerned the famous drunkard’s walk: a man starts from the origin and takes a step of length L in a random direction; he then turns through a random angle and takes another step of length L; and he repeats this process n times. What is the probability distribution for R, his total distance from the origin after these n steps? Pearson didn’t actually solve this problem, but posed it in a letter to Nature in 1905. Only a week later, a reply from Lord Rayleigh was published in the same journal. He hadn’t worked it all out, written it up and sent it off within a week, though: it turns out that Rayleigh had solved essentially the same problem in a different context back in 1880, so he had the answer readily available when he saw Pearson’s letter.

Pearson’s problem is a restricted case of a random walk, with each step having the same length. The more general case allows for a distribution of step lengths as well as random directions. To give a nice example for which virtually everything is known in a statistical sense, consider the case where each component of the step, i.e. x and y, is an independent Gaussian variable with zero mean, so that there is no preferred direction:

p(x)=\frac{1}{\sigma\sqrt{2\pi}} \exp \left(-\frac{x^2}{2\sigma^2}\right)  

A similar expression holds for p(y). Now we can think of the entire random walk as being two independent walks in x and y. After n steps the total displacement in x, say x_n, has the distribution

p(x_n)=\frac{1}{\sigma\sqrt{2\pi n}} \exp \left(-\frac{x_n^2}{2n\sigma^2}\right)

and again there is a similar expression for the distribution of y_n. Notice that each of these distributions has a mean value of zero. On average, meaning on average over the entire probability distribution of realizations of the walk, the drunkard doesn’t go anywhere. In each individual walk he certainly does go somewhere, of course, but since he is equally likely to move in any direction the probabilistic mean has to be zero. The total net displacement from the origin, r_n, is given by Pythagoras’ theorem:

r_n^2=x_n^2+y_n^2

from which it is quite easy to establish (by transforming the joint distribution of x_n and y_n to polar coordinates and integrating over the angle) that the probability distribution has to be

 p(r_n)=\frac{r_n}{n\sigma^2} \exp \left(-\frac{r_n^2}{2n\sigma^2}\right)

This is called the Rayleigh distribution, and this kind of process is called a Rayleigh “flight”. The mean displacement is σ√(πn/2), i.e. approximately 1.25σ√n, so the typical distance from the origin grows like √n. By virtue of the ubiquitous central limit theorem, this result also holds in the original case discussed by Pearson in the limit of very large n. So this gives another example of the useful rule of thumb that quantities arising from fluctuations among n entities generally give a result that depends on the square root of n.
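For anyone who likes to check such things numerically, here is a minimal sketch in Python (the function name and parameter values are my own, chosen arbitrarily) that simulates many of these Gaussian random walks and compares the average final displacement with the Rayleigh prediction σ√(πn/2):

```python
import math
import random

def rayleigh_flight(n, sigma=1.0, rng=random):
    """Take n steps whose x and y components are independent
    zero-mean Gaussians of standard deviation sigma; return the
    final distance from the origin."""
    x = sum(rng.gauss(0.0, sigma) for _ in range(n))
    y = sum(rng.gauss(0.0, sigma) for _ in range(n))
    return math.hypot(x, y)

random.seed(42)
n, sigma, walks = 100, 1.0, 20000
mean_r = sum(rayleigh_flight(n, sigma) for _ in range(walks)) / walks

# The Rayleigh distribution predicts a mean displacement of
# sigma * sqrt(pi * n / 2).
predicted = sigma * math.sqrt(math.pi * n / 2)
print(mean_r, predicted)  # the two should agree closely
```

With 20,000 walks the sample mean should match the analytic value to well within a percent or so.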

The figure below shows a simulation of a Rayleigh random walk. It is quite a good model for the jiggling motion executed by a Brownian particle. 

[Figure: a simulated Rayleigh random walk]

The step size resulting from a collision of a Brownian particle with a molecule depends on the mass of the molecule and of the particle itself. A heavier particle will be relatively unaffected by each bash and thus take longer to diffuse than a lighter particle. Here is a nice video showing three-dimensional simulations of the diffusion of sugar molecules (left) and proteins (right) that demonstrates this effect.

Of course not even the most inebriated boozer will execute a truly random walk: one would expect each step direction to retain at least some memory of the previous one. This gives rise to the idea of a correlated random walk. Such walks can be used to mimic the behaviour of geometric objects that possess some stiffness in their joints, such as proteins or other long molecules. The theory of Brownian motion and related stochastic phenomena is nowadays considerably more sophisticated than the simple random-flight models I have discussed here. The more general formalism can be used to understand many situations involving phenomena such as diffusion and percolation, not to mention gambling games and the stock market. The ability of these intrinsically “random” processes to yield surprisingly rich patterns is, to me, one of their most fascinating aspects. It takes only a little tweak to create order from chaos.
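To make the idea of a correlated random walk concrete, here is a small illustrative sketch (the function name and parameter values are hypothetical, not taken from any particular paper): each step has fixed length, but the new heading is the old heading plus a Gaussian turning angle, so small turning angles produce a stiff, persistent path:

```python
import math
import random

def correlated_walk_distance(n, step=1.0, turn_sigma=0.3, rng=random):
    """Walk n steps of fixed length; each new heading is the previous
    one plus a Gaussian turning angle, so the walk remembers its
    direction. Returns the final distance from the origin."""
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)  # random initial heading
    for _ in range(n):
        theta += rng.gauss(0.0, turn_sigma)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
    return math.hypot(x, y)

random.seed(1)
trials = 50
# Small turning angles give a stiff, persistent path; large ones
# recover something close to an uncorrelated random flight.
stiff = sum(correlated_walk_distance(1000, turn_sigma=0.1)
            for _ in range(trials)) / trials
floppy = sum(correlated_walk_distance(1000, turn_sigma=3.0)
             for _ in range(trials)) / trials
print(stiff, floppy)  # the stiff walk wanders much further on average
```

Decreasing turn_sigma increases the persistence length of the walk, which is the sense in which such models mimic stiffness in long molecules.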

 

Nox Nocti Indicat Scientiam

Posted in Poetry, The Universe and Stuff with tags , on October 23, 2009 by telescoper

According to my blog access statistics, some of the poems I post on here seem to be fairly popular, so I thought I’d put up another one, this time by a poet from the Metaphysical tradition, William Habington. He belonged to a prominent Catholic family and lived in England from 1605 to 1654, during a time of great religious upheaval.

The title of this particular poem is taken from the Latin (Vulgate) version of Psalm 19, the first two lines of which are

Caeli enarrant gloriam Dei et opus manus eius adnuntiat firmamentum.
Dies diei eructat verbum et nox nocti indicat scientiam.

The King James Bible translates this as

The heavens declare the glory of God; and the firmament sheweth his handywork.
Day unto day uttereth speech, and night unto night sheweth knowledge.

Some translations I have seen give “night after night” rather than the form above. My distant recollection of Latin learnt at school tells me that nocti is the dative case of the third-declension noun nox, so I think “night shows knowledge to night” is indeed the correct sense of the Latin. Of course I don’t know what the sense of the original Hebrew is!

The original Psalm is the text on which one of the mightiest choruses of Haydn’s  Creation is based, “The Heavens are Telling” and Habington’s poem is a meditation on it. It seems to me to be a natural companion to the poem by John Masefield I posted earlier in the week, but I don’t know whether they share a common inspiration in the Psalm or just in the Universe itself.

When I survey the bright
Celestial sphere;
So rich with jewels hung, that Night
Doth like an Ethiop bride appear:

My soul her wings doth spread
And heavenward flies,
Th’ Almighty’s mysteries to read
In the large volumes of the skies.

For the bright firmament
Shoots forth no flame
So silent, but is eloquent
In speaking the Creator’s name.

No unregarded star
Contracts its light
Into so small a character,
Removed far from our human sight,

But if we steadfast look
We shall discern
In it, as in some holy book,
How man may heavenly knowledge learn.

It tells the conqueror
That far-stretch’d power,
Which his proud dangers traffic for,
Is but the triumph of an hour:

That from the farthest North,
Some nation may,
Yet undiscover’d, issue forth,
And o’er his new-got conquest sway:

Some nation yet shut in
With hills of ice
May be let out to scourge his sin,
Till they shall equal him in vice.

And then they likewise shall
Their ruin have;
For as yourselves your empires fall,
And every kingdom hath a grave.

Thus those celestial fires,
Though seeming mute,
The fallacy of our desires
And all the pride of life confute:–

For they have watch’d since first
The World had birth:
And found sin in itself accurst,
And nothing permanent on Earth.


Another take on cosmic anisotropy

Posted in Cosmic Anomalies, The Universe and Stuff with tags , , , on October 22, 2009 by telescoper

Yesterday we had a nice seminar here by Antony Lewis, who is currently at Cambridge but will be on his way to Sussex in the New Year to take up a lectureship there. I thought I’d put a brief post up here so I can add it to my collection of items concerning cosmic anomalies. I admit that I had missed the paper he talked about (by himself and Duncan Hanson) when it came out on the ArXiv last month, so I’m very glad his visit drew it to my attention.

What Hanson & Lewis did was to think of a number of simple models in which the pattern of fluctuations in the temperature of the cosmic microwave background radiation across the sky might have a preferred direction. They then construct optimal estimators for the parameters in these models (assuming the underlying fluctuations are Gaussian) and then apply these estimators to the data from the Wilkinson Microwave Anisotropy Probe (WMAP). Their subsequent analysis attempts to answer the question whether the data prefer these anisotropic models to the bog-standard cosmology which is statistically isotropic.

I strongly suggest you read their paper in detail because it contains a lot of interesting things, but I wanted to pick out one result for special mention. One of their models involves a primordial power spectrum that is intrinsically anisotropic. The model is of the form

P(\vec{ k})=P(k) [1+a(k)g(\vec{k})]

compared to the standard P(k), which does not depend on the direction of the wavevector. They find that the WMAP measurements strongly prefer this model to the standard one. Great! A departure from the standard cosmological model! New Physics! Re-write your textbooks!

Well, not really. The direction revealed by the best-fitting parameters is shown in the smoothed picture (top). Underneath it are simulations of the sky predicted by their model, decomposed into an isotropic part (middle) and an anisotropic part (bottom).

[Figure: the direction picked out by the best-fit parameters (top), with the isotropic (middle) and anisotropic (bottom) parts of the simulated sky]

You can see immediately that the asymmetry axis is extremely close to the scan axis of the WMAP satellite, i.e. at right angles to the Ecliptic plane.

This immediately suggests that it might not be a primordial effect at all but either (a) a signal that is aligned with the Ecliptic plane (i.e. something emanating from the Solar System) or (b) something arising from the WMAP scanning strategy. Antony went on to give strong evidence that it wasn’t primordial and it wasn’t from the Solar System. The WMAP satellite has a number of independent differencing assemblies. Anything external to the satellite should produce the same signal in all of them, but the observed signal varies markedly from one to another. The conclusion, then, is that this particular anomaly is largely generated by an instrumental systematic.

The best candidate for such an effect is an asymmetry in the beams of the two telescopes on the satellite. Since the scan pattern has a preferred direction, the beam profile may introduce a direction-dependent signal into the data. No attempt has been made to correct for this effect in the published maps so far, and it seems to me very likely that this is the root of this particular anomaly.

We will have to see the extent to which beam systematics will limit the ability of Planck to shed further light on this issue.

Ergodic Means…

Posted in The Universe and Stuff with tags , , , , , , on October 19, 2009 by telescoper

The topic of this post is something I’ve been wondering about for quite a while. This afternoon I had half an hour spare after a quick lunch so I thought I’d look it up and see what I could find.

The word ergodic is one you will come across very frequently in the literature of statistical physics, and in cosmology it also appears in discussions of the analysis of the large-scale structure of the Universe. I’ve long been puzzled as to where it comes from and what it actually means. Turning to the excellent Oxford English Dictionary Online, I found the answer to the first of these questions. Well, sort of. Under etymology we have

ad. G. ergoden (L. Boltzmann 1887, in Jrnl. f. d. reine und angewandte Math. C. 208), f. Gr.

I say “sort of” because it does attribute the origin of the word to Ludwig Boltzmann, but the Greek roots (ἔργον and ὁδός) appear to suggest it means “workway” or something like that. I don’t think I follow an ergodic path on my way to work, so it remains a little mysterious.

The actual definitions of ergodic given by the OED are

Of a trajectory in a confined portion of space: having the property that in the limit all points of the space will be included in the trajectory with equal frequency. Of a stochastic process: having the property that the probability of any state can be estimated from a single sufficiently extensive realization, independently of initial conditions; statistically stationary.

As I had expected, it has two meanings which are related, but which apply in different contexts. The first is to do with paths or orbits, although in physics this is usually taken to mean trajectories in phase space (including both positions and velocities) rather than just three-dimensional position space. However, I don’t think the OED has got it right in saying that the system visits all positions with equal frequency. I think an ergodic path is one that must visit all positions within a given volume of phase space, rather than being confined to a lower-dimensional piece of that space. For example, the path of a planet under the inverse-square law of gravity around the Sun is confined to a one-dimensional ellipse. If the force law is modified by external perturbations then the path need not be as regular as this, in extreme cases wandering around in such a way that it never joins back on itself but eventually visits all accessible locations. As far as my understanding goes, however, it doesn’t have to visit them all with equal frequency. The ergodic property of orbits is intimately associated with the presence of chaotic dynamical behaviour.

The other definition relates to stochastic processes, i.e. processes involving some sort of random component. These could either consist of a discrete collection of random variables {X_1, …, X_n} (which may or may not be correlated with each other) or a continuously fluctuating function of some parameter such as time t, i.e. X(t), or spatial position (or perhaps both).

Stochastic processes are quite complicated measure-valued mathematical entities because they are specified by probability distributions. What the ergodic hypothesis means in this second sense is that measurements extracted from a single realization of such a process have a definite relationship to analogous quantities defined by the probability distribution.

I always think of a stochastic process being like a kind of algorithm (whose workings we don’t know). Put it on a computer, press “go” and it spits out a sequence of numbers. The ergodic hypothesis means that by examining a sufficiently long run of the output we could learn something about the properties of the algorithm.

An alternative way of thinking about this for those of you of a frequentist disposition is that the probability average is taken over some sort of statistical ensemble of possible realizations produced by the algorithm, and this must match the appropriate long-term average taken over one realization.

This is actually quite a deep concept and it can apply (or not) in various degrees.  A simple example is to do with properties of the mean value. Given a single run of the program over some long time T we can compute the sample average

\bar{X}_T \equiv \frac{1}{T} \int_0^T x(t)\, dt

whereas the probability average is defined over the probability distribution itself, which we can call p(x):

\langle X \rangle \equiv \int x p(x) dx

If these two are equal for sufficiently long runs, i.e. as T goes to infinity, then the process is said to be ergodic in the mean. A process could, however, be ergodic in the mean but not ergodic with respect to some other property of the distribution, such as the variance. Strict ergodicity would require that the entire frequency distribution defined from a long run should match the probability distribution to some accuracy.
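As a toy illustration of ergodicity in the mean, here is a minimal sketch in Python (the numbers are arbitrary): a trivially ergodic stationary process, namely a sequence of independent Gaussian samples, whose long-run time average converges to the probability average μ:

```python
import random

random.seed(0)
mu, sigma = 2.0, 1.0

# One long run of a (trivially ergodic) stationary process:
# independent Gaussian samples with mean mu.
T = 100000
time_average = sum(random.gauss(mu, sigma) for _ in range(T)) / T

# The probability average <X> is mu by construction, so the time
# average should approach it as T grows.
print(time_average)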

Now we have a problem with the OED again. According to the definition quoted above, ergodic can be taken to mean statistically stationary. Actually, that’s not true…

In the one-parameter case, “statistically stationary” means that the probability distribution controlling the process is independent of time, i.e. that p(x,t)=p(x,t+Δt) . It’s fairly straightforward to see that the ergodic property requires that a process X(t) be stationary, but the converse is not the case. Not every stationary process is necessarily ergodic. Ned Wright gives an example here. For a higher-dimensional process, such as a spatially-fluctuating random field the analogous property is statistical homogeneity, rather than stationarity, but otherwise everything carries over.
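A toy example of a stationary but non-ergodic process, in the same spirit as the one Ned Wright describes (the parameters here are arbitrary), is easy to cook up: add a random constant offset, drawn once per realization, to white noise. Each realization’s time average then converges to its own offset rather than to the ensemble mean of zero:

```python
import random

def realization_time_average(T, rng):
    # Draw a random offset A once per realization, then output
    # A plus independent noise at each time step. The process is
    # stationary, but NOT ergodic: A never averages away, so the
    # time average converges to A instead of the ensemble mean 0.
    A = rng.gauss(0.0, 5.0)
    return sum(A + rng.gauss(0.0, 1.0) for _ in range(T)) / T

rng = random.Random(7)
runs = [realization_time_average(10000, rng) for _ in range(5)]
print(runs)  # five different limits, scattered about zero
```

No matter how long each run is, the different realizations disagree with each other, which is exactly the failure of ergodicity.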

Ergodic theorems are very tricky to prove in general, but there are well-known results that rigorously establish the ergodic properties of Gaussian processes (which is another reason why theorists like myself like them so much). However, it should be mentioned that even if the ergodic assumption applies its usefulness depends critically on the rate of convergence. In the time-dependent example I gave above, it’s no good if the averaging period required is much longer than the age of the Universe; in that case even ergodicity makes it difficult to make inferences from your sample. Likewise the ergodic hypothesis doesn’t help you analyse your galaxy redshift survey if the averaging scale needed is larger than the depth of the sample.

Moreover, it seems to me that many physicists resort to ergodicity when there aren’t any compelling mathematical grounds to think that it is true. In some versions of the multiverse scenario, it is hypothesized that the fundamental constants of nature describing our low-energy world turn out “randomly” to take on different values in different domains, owing to some sort of spontaneous symmetry breaking, perhaps associated with a phase transition generating cosmic inflation. We happen to live in a patch within this structure where the constants are such as to make human life possible. There’s no need to assert that the laws of physics have been designed to make us possible if this is the case, as most of the multiverse doesn’t have the fine tuning that appears to be required to allow our existence.

As an application of the Weak Anthropic Principle, I have no objection to this argument. However, behind this idea lies the assertion that all possible vacuum configurations (and all related physical constants) do arise ergodically. I’ve never seen anything resembling a proof that this is the case. Moreover, there are many examples of physical phase transitions for which the ergodic hypothesis is known not to apply.  If there is a rigorous proof that this works out, I’d love to hear about it. In the meantime, I remain sceptical.

Greatness in Little

Posted in Poetry, The Universe and Stuff with tags , , on October 15, 2009 by telescoper

The BBC Website yesterday mentioned that according to the British Astronomer Royal, Lord Martin Rees, celestial bodies are less complicated than the bodies of insects – let alone those of human beings – and cosmology is an easier science than the study of a balanced diet.

As I was tucking into my carefully balanced meal of fish and chips last night, the first part of the quotation suddenly reminded me of the following poem Greatness in Little by Richard Leigh (1649-1728), a relatively obscure poet of the seventeenth century who managed to excel himself in this particular poem of 1675 in which he compares the intricate workings of insects with the grandest achievements of human explorers.

In spotted globes, that have resembled all
Which we or beasts possess to one great ball
Dim little specks for thronging cities stand,
Lines wind for rivers, blots bound sea and land.
Small are those spots which in the moon we view,
Yet glasses these like shades of mountains shew;
As what an even brightness does retain,
A glorious level seems, and shining plain.
Those crowds of stars in the populous sky,
Which art beholds as twinkling worlds on high,
Appear to naked, unassisted sight
No more than sparks or slender points of light.
The sun, a flaming universe alone,
Bigger than that about which his fires run;
Enlightening ours, his globe but part does gild,
Part by his lustre or Earth’s shades concealed;
His glory dwindled so, as what we spy
Scarce fills the narrow circle of the eye.
What new Americas of light have been
Yet undiscovered there, or yet unseen,
Art’s near approaches awfully forbid,
As in the majesty of nature hid.
Nature, who with like state, and equal pride,
Her great works does in height and distance hide,
And shuts up her minuter bodies all
In curious frames, imperceptibly small.
Thus still incognito, she seeks recess
In greatness half-seen, or dim littleness.
Ah, happy littleness! that art thus blest,
That greatest glories aspire to seem least.
Even those installed in a higher sphere,
The higher they are raised, the less appear,
And in their exaltation emulate
Thy humble grandeur and thy modest state.
Nor is this all thy praise, though not the least,
That greatness is thy counterfeit at best.
Those swelling honours, which in that we prize,
Thou dost contain in thy more thrifty size;
And hast that pomp, magnificence does boast,
Though in thy stature and dimensions lost.
Those rugged little bodies whose parts rise
And fall in various inequalities,
Hills in the risings of their surface show,
As valleys in their hollow pits below.
Pompous these lesser things, but yet less rude
Than uncompact and looser magnitude.
What Skill is in the frame of Insects shown?
How fine the Threds, in their small Textures spun?
How close those Instruments and Engines knit,
Which Motion, and their slender Sense transmit?
Like living Watches, each of these conceals
A thousand Springs of Life, and moving wheels.
Each ligature a Lab’rynth seems, each part
All wonder is, all Workmanship and Art.
Rather let me this little Greatness know,
Then all the Mighty Acts of Great Ones do.
These Engines understand, rather than prove
An Archimedes, and the Earth remove.
These Atom-Worlds found out, I would despise
Colombus, and his vast Discoveries.

The Law of Unreason

Posted in Bad Statistics, The Universe and Stuff with tags , , , , on October 11, 2009 by telescoper

Not much time to post today, so I thought I’d just put up a couple of nice little quotes about the Central Limit Theorem. In case you don’t know it, this theorem explains why so many phenomena result in measurable things whose frequencies of occurrence can be described by the Normal (Gaussian) distribution, with its characteristic Bell-shaped curve. I’ve already mentioned the role that various astronomers played in the development of this bit of mathematics, so I won’t repeat the story in this post.

In fact I was asked to prove the theorem during my PhD viva, and struggled to remember how to do it, but it’s such an important thing that it was quite reasonable for my examiners  to ask the question and quite reasonable for them to have expected me to answer it! If you want to know how to do it, then I’ll give you a hint: it involves a Fourier transform!
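For anyone who wants to reconstruct the argument from that hint, here is the skeleton of the standard proof. If X has zero mean and unit variance, its characteristic function (essentially the Fourier transform of its probability density) is

\phi(k) \equiv \langle e^{ikX} \rangle = 1 - \frac{k^2}{2} + o(k^2)

For the standardized sum S_n = (X_1 + \cdots + X_n)/\sqrt{n} of independent copies of X, independence turns the transform of the sum into a product:

\phi_{S_n}(k) = \left[\phi\left(\frac{k}{\sqrt{n}}\right)\right]^n = \left[1 - \frac{k^2}{2n} + o(n^{-1})\right]^n \rightarrow e^{-k^2/2}

which is the characteristic function of the standard Gaussian, whence the result.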

Any of you who took a peep at João Magueijo’s lecture that I posted about yesterday will know that the title of his talk was Anarchy and Physical Laws. The main issue he addressed was whether the existence of laws of physics requires that the Universe must have been designed, or whether mathematical regularities could somehow emerge from a state of lawlessness. Why the Universe is lawful is of course one of the greatest mysteries of all, and one that, for some at least, transcends science and crosses over into the realm of theology.

In my little address at the end of João’s talk I drew an analogy with the Central Limit Theorem, which is an example of an emergent mathematical law that describes situations which are apparently extremely chaotic. I just wanted to make the point that there are well-known examples of such things, even if the audience were sceptical about applying such notions to the entire Universe.

The quotation I picked was this one from Sir Francis Galton:

I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the “Law of Frequency of Error”. The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along

However, it is worth remembering also that not everything has a normal distribution: the central limit theorem requires linear, additive behaviour of the variables involved. I posted about an example where this is not the case here. Theorists love to make the Gaussian assumption when dealing with phenomena that they want to model with stochastic processes because these make many calculations tractable that otherwise would be too difficult. In cosmology, for example, we usually assume that the primordial density perturbations that seeded the formation of large-scale structure obeyed Gaussian statistics. Observers and experimentalists frequently assume Gaussian measurement errors in order to apply off-the-shelf statistical methods to their results. Often nature is kind to us but every now and again we find anomalies that are inconsistent with the normal distribution. Those exceptions usually lead to clues that something interesting is going on that violates the terms of the Central Limit Theorem. There are inklings that this may be the case in cosmology.
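To see the theorem in action numerically, here is a small sketch in Python (sample sizes chosen arbitrarily) in which sums of uniformly distributed variables, individually nothing like Gaussian, are standardized and checked against the Gaussian rule that about 68.3% of the mass lies within one standard deviation of the mean:

```python
import random

random.seed(0)

def standardized_uniform_sum(n, rng):
    # Sum n Uniform(0,1) variables and standardize them:
    # the sum has mean n/2 and variance n/12.
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2) / ((n / 12) ** 0.5)

samples = [standardized_uniform_sum(30, random) for _ in range(50000)]

# For a standard Gaussian, about 68.3% of samples should fall
# within one standard deviation of zero.
frac_within_1sigma = sum(abs(z) < 1 for z in samples) / len(samples)
print(frac_within_1sigma)
```

Replacing the uniform variables with something heavy-tailed, such as Cauchy-distributed ones, violates the finite-variance condition of the theorem, and the standardized sums never settle down to a Gaussian at all.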

So to balance Galton’s remarks, I add this quote by Gabriel Lippmann which I’ve taken the liberty of translating from the original French.

Everyone believes in the [normal] law of errors: the mathematicians, because they think it is an experimental fact; and the experimenters, because they suppose it is a theorem of mathematics

There are more things in heaven and earth than are described by the Gaussian distribution!

Another Day at the ArXiv..

Posted in Cosmic Anomalies, The Universe and Stuff with tags , , , , , , , on October 8, 2009 by telescoper

Every now and again I remember that this is supposed to be some sort of science blog. This happened again this morning after three hours of meetings with my undergraduate project students. Dealing with questions about simulating the cosmic microwave background, measuring the bending of light during an eclipse, and how to do QCD calculations on a lattice reminded me that I’m supposed to know something about stuff like that.

Anyway, looking for something to post about while I eat my lunchtime sandwich, I turned to the estimable arXiv and turned to the section marked astro-ph, and to the new submissions category, for inspiration.

I’m one of the old-fashioned types who still gets an email every day of the new submissions. In the old days there were only a few, but today’s new submissions were 77 in number, only about half-a-dozen of which seemed directly relevant to things I’m interested in. It’s always a bit of a struggle keeping up and I often miss important things. There’s no way I can read as widely around my own field as I would like to, or as I used to in the past, but that’s the information revolution for you…

Anyway, the thing that leapt out at me first was an interesting paper by Dikarev et al (accepted for publication in the Astrophysical Journal) that speculates about the possibility that dust grains in the solar system might be producing emission that messes up measurements of the cosmic microwave background, thus possibly causing the curious cosmic anomalies seen by WMAP I’ve blogged about on more than one previous occasion.

Their abstract reads:

Analyses of the cosmic microwave background (CMB) radiation maps made by the Wilkinson Microwave Anisotropy Probe (WMAP) have revealed anomalies not predicted by the standard inflationary cosmology. In particular, the power of the quadrupole moment of the CMB fluctuations is remarkably low, and the quadrupole and octopole moments are aligned mutually and with the geometry of the Solar system. It has been suggested in the literature that microwave sky pollution by an unidentified dust cloud in the vicinity of the Solar system may be the cause for these anomalies. In this paper, we simulate the thermal emission by clouds of spherical homogeneous particles of several materials. Spectral constraints from the WMAP multi-wavelength data and earlier infrared observations on the hypothetical dust cloud are used to determine the dust cloud’s physical characteristics. In order for its emissivity to demonstrate a flat, CMB-like wavelength dependence over the WMAP wavelengths (3 through 14 mm), and to be invisible in the infrared light, its particles must be macroscopic. Silicate spheres from several millimetres in size and carbonaceous particles an order of magnitude smaller will suffice. According to our estimates of the abundance of such particles in the Zodiacal cloud and trans-neptunian belt, yielding the optical depths of the order of 1E-7 for each cloud, the Solar-system dust can well contribute 10 microKelvin (within an order of magnitude) in the microwaves. This is not only intriguingly close to the magnitude of the anomalies (about 30 microKelvin), but also alarmingly above the presently believed magnitude of systematic biases of the WMAP results (below 5 microKelvin) and, to an even greater degree, of the future missions with higher sensitivities, e.g. PLANCK.

I haven’t read the paper in detail yet, but will definitely do so. In the meantime I’d be interested to hear the reaction to this claim from dusty experts!

Of course we know there is dust in the solar system, and were reminded of this in spectacular style earlier this week by the discovery (by the Spitzer telescope) of an enormous new ring around Saturn.

That tenuous link gives me an excuse to include a gratuitous pretty picture:

It may look impressive, but I hope things like that are not messing up the CMB. Has anyone got a vacuum cleaner?

Nobel Betting

Posted in Science Politics, The Universe and Stuff with tags , , , , on October 5, 2009 by telescoper

I’m reminded that the 2009 Nobel Prize for Physics will be announced tomorrow, on Tuesday 6th October. A recent article in the Times Higher suggested that British physicists might be in line for glory (based on a study of citation statistics). However, the Table they produced showed that their predictions haven’t really got a good track record so it might be unwise to bet too much on the outcome! This year’s predictions are at the top, with previous years underneath; the only successful prediction is highlighted in blue:

[Table: Times Higher Nobel Prize predictions, this year and previous years]

The problem, I think, is that it’s difficult to win the Nobel Prize for theoretical work unless it has been confirmed by a definitive experiment, so, much as I admire (Lord) Martin Rees – and would love to see a Nobel Prize going to astrophysics generally – I think I’d have to mark him down as an outsider. It would be absurd to give the prize to string theory, of course, as that makes no contact whatsoever with experiment or observation.

I think it would be particularly great if Sir Michael Berry won a share of the physics prize, but we’ll have to wait and see. The other British runner in the paddock is Sir John Pendry. While it would be excellent for British science to have a Nobel prize, what I think is best about the whole show is that it is one of the rare occasions that puts a spotlight on basic science, so it’s good for all of us (even us non-runners).

I think the panel made a bit of a bizarre decision last year and I hope there won’t be another steward’s enquiry this year to distract us from the chance to celebrate the achievements of the winner(s).

I’d be interested to hear any thoughts on other candidates through the comments box. No doubt there’ll be some reactions after the announcement too!

The Milky Way in a New Light

Posted in The Universe and Stuff with tags , , , , , on October 2, 2009 by telescoper

I note that the Herschel mission now has its own blog, so I no longer have to try to remember to put all the sexy images on here. However, at the end of a worrying week for UK astronomy, I thought it would be a good idea to put up one of the wonderful new infra-red images of the Milky Way just obtained from Herschel. This is the first composite colour picture made in “parallel mode”, i.e. by using the PACS and SPIRE instruments together. Between them the two instruments cover a wavelength range from 70 to 500 microns. The resulting image uses red to represent the cooler long-wavelength emission (seen by SPIRE), while bluer colours show hotter areas. The region of active star formation shown is close to the Galactic plane; detailed images such as this, showing the intricate filamentary structure of the material in this stellar nursery, will help us better understand the complex processes involved in stellar birth.
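For those curious how such a false-colour composite is put together, here is a minimal sketch of the general idea (this is emphatically not the actual Herschel data-reduction pipeline): each waveband is normalised and assigned to a colour channel, with the long-wavelength (cooler) map going to red and the short-wavelength (hotter) map to blue. The arrays here are toy inputs invented for illustration.

```python
import numpy as np

def two_band_composite(long_wl, short_wl):
    """Combine two intensity maps into an RGB false-colour image.

    long_wl  -- 2-D array of long-wavelength (cooler) emission -> red channel
    short_wl -- 2-D array of short-wavelength (hotter) emission -> blue channel
    Green is the mean of the two, so regions bright in both bands look whitish.
    """
    def norm(a):
        # rescale an array to the range [0, 1]
        a = np.asarray(a, dtype=float)
        span = a.max() - a.min()
        return (a - a.min()) / span if span > 0 else np.zeros_like(a)

    r = norm(long_wl)
    b = norm(short_wl)
    g = 0.5 * (r + b)            # blend band for the green channel
    return np.dstack([r, g, b])  # shape (ny, nx, 3), values in [0, 1]

# Toy example: a smooth "cool" gradient plus a single "hot" pixel.
cool = np.linspace(0.0, 1.0, 16).reshape(4, 4)
hot = np.zeros((4, 4))
hot[1, 2] = 1.0
rgb = two_band_composite(cool, hot)
```

The resulting array can be handed straight to an image-display routine such as matplotlib’s `imshow`; a real pipeline would of course also deal with calibration, map-making and astrometric alignment of the two bands before any of this.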

The Evidence

Posted in Biographical, The Universe and Stuff with tags , , , on September 25, 2009 by telescoper

Further to my recent post about the evidence for a low-density Universe, I thought I’d embarrass all concerned with this image, taken in Leiden in 1995.

Various shady characters masquerading as “experts” were asked by the audience of graduate students at a summer school to give their favoured values for the cosmological parameters (from top to bottom: the Hubble constant, density parameter, cosmological constant, curvature parameter and age of the Universe).

From left to right we have Alain Blanchard (AB), Bernard Jones (BJ, standing), John Peacock (JP), me (yes, with a beard and a pony tail – the shame of it), Vincent Icke (VI), Rien van de Weygaert (RW) and Peter Katgert (PK, standing). You can see on the blackboard that the only one to get anywhere close to correctly predicting the parameters of what would become the standard cosmological model was, in fact, Rien van de Weygaert.