Archive for the The Universe and Stuff Category

Eidolons

Posted in Poetry, The Universe and Stuff with tags , , on February 17, 2010 by telescoper

Off early this morning, as I have to travel to the frozen North to give a seminar in a foreign land. Time, therefore, to pad this blog thing out with another poem. I haven’t posted much by Walt Whitman so now seems like a good time to correct the omission. This is called Eidolons, and it’s taken from Whitman’s famous and, at the time of its publication, controversial, collection of poems Leaves of Grass.

The word itself is from the Greek ειδωλον, meaning an image, spectre or phantom and, according to the Oxford English Dictionary (which Whitman would of course not have been using), it can have the additional meanings in English of a “mental image” or an “insubstantial appearance”, a “false image or fallacy”. It  also has the meaning of “an image of an idealised person or thing”, and is thus the origin of the word Idol.

Eidolons is written in Whitman’s characteristic free verse style, with a broad sweep and strong cadences which really should be read out loud rather than silently on the page.

I’ve heard it said that this poem is anti-scientific. I suppose it is, in some respects, but only if you think that science is capable of telling us everything there is to know about the Universe. I don’t think of science like that, so I don’t see this poem as anti-scientific. It celebrates a world beyond that which we perceive directly and beyond that which our minds comprehend. Our representations of true reality are eidolons because they are incomplete and imperfect and not, I think, because they are mere fallacies. Whitman is not saying science is wrong, just that it only gives us part of the picture.

Anyway, that’s what I think. Read it for yourself and see what you think. But whether or not it is anti-science, it is definitely about science. The references to professors, stars, spectroscopes and the like are all clear. He even seems to be having a pre-emptive dig at the multiverse theory!

I met a seer,
Passing the hues and objects of the world,
The fields of art and learning, pleasure, sense,
To glean eidolons.

Put in thy chants said he,
No more the puzzling hour nor day, nor segments, parts, put in,
Put first before the rest as light for all and entrance-song of all,
That of eidolons.

Ever the dim beginning,
Ever the growth, the rounding of the circle,
Ever the summit and the merge at last, (to surely start again,)
Eidolons! eidolons!

Ever the mutable,
Ever materials, changing, crumbling, re-cohering,
Ever the ateliers, the factories divine,
Issuing eidolons.

Lo, I or you,
Or woman, man, or state, known or unknown,
We seeming solid wealth, strength, beauty build,
But really build eidolons.

The ostent evanescent,
The substance of an artist’s mood or savan’s studies long,
Or warrior’s, martyr’s, hero’s toils,
To fashion his eidolon.

Of every human life,
(The units gather’d, posted, not a thought, emotion, deed, left out,)
The whole or large or small summ’d, added up,
In its eidolon.

The old, old urge,
Based on the ancient pinnacles, lo, newer, higher pinnacles,
From science and the modern still impell’d,
The old, old urge, eidolons.

The present now and here,
America’s busy, teeming, intricate whirl,
Of aggregate and segregate for only thence releasing,
To-day’s eidolons.

These with the past,
Of vanish’d lands, of all the reigns of kings across the sea,
Old conquerors, old campaigns, old sailors’ voyages,
Joining eidolons.

Densities, growth, facades,
Strata of mountains, soils, rocks, giant trees,
Far-born, far-dying, living long, to leave,
Eidolons everlasting.

Exalte, rapt, ecstatic,
The visible but their womb of birth,
Of orbic tendencies to shape and shape and shape,
The mighty earth-eidolon.

All space, all time,
(The stars, the terrible perturbations of the suns,
Swelling, collapsing, ending, serving their longer, shorter use,)
Fill’d with eidolons only.

The noiseless myriads,
The infinite oceans where the rivers empty,
The separate countless free identities, like eyesight,
The true realities, eidolons.

Not this the world,
Nor these the universes, they the universes,
Purport and end, ever the permanent life of life,
Eidolons, eidolons.

Beyond thy lectures learn’d professor,
Beyond thy telescope or spectroscope observer keen, beyond all mathematics,
Beyond the doctor’s surgery, anatomy, beyond the chemist with his chemistry,
The entities of entities, eidolons.

Unfix’d yet fix’d,
Ever shall be, ever have been and are,
Sweeping the present to the infinite future,
Eidolons, eidolons, eidolons.

The prophet and the bard,
Shall yet maintain themselves, in higher stages yet,
Shall mediate to the Modern, to Democracy, interpret yet to them,
God and eidolons.

And thee my soul,
Joys, ceaseless exercises, exaltations,
Thy yearning amply fed at last, prepared to meet,
Thy mates, eidolons.

Thy body permanent,
The body lurking there within thy body,
The only purport of the form thou art, the real I myself,
An image, an eidolon.

Thy very songs not in thy songs,
No special strains to sing, none for itself,
But from the whole resulting, rising at last and floating,
A round full-orb’d eidolon.

Killing Vectors

Posted in The Universe and Stuff with tags , , , on February 16, 2010 by telescoper

I’ve been feeling a rant coming for some time now. Since I started teaching again three weeks ago, actually. The target of my vitriol this time is the teaching of Euclidean vectors. Not vectors themselves, of course. I like vectors. They’re great. The trouble is the way we’re forced to write them these days when we use them in introductory level physics classes.

You see, when I was a lad, I was taught to write a geometric vector in the following fashion:

\underline{r} =\left(\begin{array}{c} x \\ y \\ z \end{array} \right).

This is a simple column vector, where x,y,z are the components in a three-dimensional cartesian coordinate system. Other kinds of vector, such as those representing states in quantum mechanics, or anywhere else where linear algebra is used, can easily be represented in a similar fashion.

This notation is great because it’s very easy to calculate the scalar (dot) and vector (cross) products of two such objects by writing them in column form next to each other and performing a simple bit of manipulation. For example, the scalar product of the two vectors

\underline{u}=\left(\begin{array}{c} 1 \\ 1 \\ 1 \end{array} \right) and \underline{v}=\left(\begin{array}{c} 1\\ 1 \\ -2 \end{array} \right)

can easily be found by multiplying the corresponding elements of each together and totting them up:

\underline{u}\cdot \underline{v} = (1 \times 1) + (1\times 1) + (1\times -2) =0,

showing immediately that these two vectors are orthogonal. In normalised form, these two particular vectors  appear in other contexts in physics, where they have a more abstract interpretation than simple geometry, such as in the representation of the gluon in particle physics.

Moreover, writing vectors like this makes it a lot easier to transform them via the action of a matrix, by multiplying rows in the usual fashion, e.g.

\left(\begin{array}{ccc} \cos \theta & \sin\theta & 0 \\ -\sin\theta & \cos \theta & 0 \\ 0 & 0 & 1\end{array} \right) \left(\begin{array}{c} x \\ y \\ z \end{array} \right) = \left(\begin{array}{c} x\cos \theta + y\sin\theta \\ -x \sin \theta + y\cos \theta \\ z \end{array} \right)

which corresponds to a rotation of the vector in the x-y plane. Transposing a column vector into a row vector is easy too.
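
Incidentally, the column picture maps directly onto the way numerical libraries handle vectors, which is another point in its favour. Here is a minimal sketch in Python with numpy (my own illustration, not part of any course material) reproducing the dot product and the rotation above:

import numpy as np

u = np.array([1.0, 1.0, 1.0])
v = np.array([1.0, 1.0, -2.0])

# scalar (dot) product: multiply corresponding components and add them up
print(np.dot(u, v))                     # 0.0, so u and v are orthogonal

# rotation through an angle theta in the x-y plane, acting on a column vector
theta = np.pi / 6
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
r = np.array([1.0, 2.0, 3.0])
print(R @ r)                            # the rotated vector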

Well, that’s how I was taught to do it.

However, somebody, sometime, decided that, in Britain at least, this concise and computationally helpful notation had to be jettisoned and students instead must be forced to write

\underline{r} = x \underline{\hat{i}} + y \underline{\hat{j}} + z \underline{\hat{k}}

Some of you may even be used to doing it that way yourself. Why is this awful? For a start, it’s incredibly clumsy. It is less intuitive, doesn’t lend itself to easy operations on the vectors like I described above, doesn’t translate easily into the more general case of a matrix, and is generally just …well… awful.

Worse still, for the purpose of teaching inexperienced students physics, it offers the possibility of horrible notational confusion. In particular, the unit vector \underline{\hat{i}} is too easily confused with i, the square root of minus one. Introduce a plane wave with a wavevector \underline{k} and it gets even worse, especially when you want to write \exp(i\underline{k}\cdot\underline{x})!

No, give me the row and column notation any day.

What I would really like to know is who decided that our schools had to teach the horrible notation, rather than the nice one, and why? I think everyone who teaches physics knows that a clear and user-friendly notation is an enormous help and a bad one is an enormous hindrance. It doesn’t surprise me that some students struggle with even simple mathematics when it’s presented in such a silly way. On those grounds, I refuse to play ball, and always use the better notation.

Call me old-fashioned.

Colour in Fourier Space

Posted in The Universe and Stuff with tags , , , , , on February 9, 2010 by telescoper

As I threatened (I mean, promised) after Anton’s interesting essay on the perception of colour a couple of days ago, I thought I’d write a quick item about something vaguely relevant that relates to some of my own research. In fact, this ended up as a little paper in Nature written by myself and Lung-Yih Chiang, a former student of mine who’s now based in his homeland of Taiwan.

This is going to be a bit more technical than my usual stuff, but it also relates to a post I did some time ago concerning the cosmic microwave background and to the general idea of the cosmic web, which has also featured in a previous item. You may find it useful to read these contributions first if you’re not au fait with cosmological jargon.

Or you may want to ignore it altogether and come back when I’ve found another look-alike

The large-scale structure of the Universe – the vast chains of galaxies that spread out over hundreds of millions of light-years and interconnect in a complex network (called the cosmic web) – is thought to have its origin in small fluctuations generated in the early universe by quantum mechanical effects during a bout of cosmic inflation.

These fluctuations in the density of an otherwise homogeneous universe are usually expressed in dimensionless form via the density contrast, defined as \delta({\bf x})=(\rho({\bf x})-\bar{\rho})/\bar{\rho}, where \bar{\rho} is the mean density. Because it’s what physicists always do when they can’t think of anything better, we take the Fourier transform of this and write it as \tilde{\delta}, which is a complex function of the wavevector {\bf k}, and can therefore be written

\tilde{\delta}({\bf k})=A({\bf k}) \exp [i\Phi({\bf k})],

where A is the amplitude and \Phi is the phase belonging to the wavevector {\bf k}; the phase is an angle between zero and 2\pi radians.

This is a particularly useful thing to do because the simplest versions of inflation predict that the phases of each of the Fourier modes should be randomly distributed. Each is independent of the others and is essentially a random angle designating any point on the unit circle. What this really means is that there is no information content in their distribution, so that the harmonic components are in a state of maximum statistical disorder or entropy. This property also guarantees that fluctuations from place to place have a Gaussian distribution, because the density contrast at any point is formed from a superposition of a large number of independent plane-wave modes  to which the central limit theorem applies.
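
If you want to see this in action, here is a rough numpy sketch (my own illustration, with an arbitrary power-law choice for the amplitudes) that builds a two-dimensional field from uniformly random phases; because the map is a superposition of many independent modes, the central limit theorem makes it Gaussian:

import numpy as np

n = 256
kx = np.fft.fftfreq(n).reshape(-1, 1)
ky = np.fft.fftfreq(n).reshape(1, -1)
k = np.hypot(kx, ky)
k[0, 0] = 1.0                                   # dodge the k = 0 mode

amplitude = k ** -1.5                           # arbitrary power-law amplitude, purely for illustration
phases = np.random.uniform(0.0, 2.0 * np.pi, (n, n))   # independent random phases on [0, 2*pi)

delta_k = amplitude * np.exp(1j * phases)       # A(k) exp[i Phi(k)]
delta_x = np.fft.ifft2(delta_k).real            # crude way of getting a real, Gaussian field

# the input phases are uniform, so this histogram comes out flat (to within noise)
print(np.histogram(phases, bins=8, range=(0.0, 2.0 * np.pi))[0])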

However, this just describes the initial configuration of the density contrast as laid down very early in the Big Bang. As the Universe expands, gravity acts on these fluctuations and alters their properties. Regions with above-average initial density (\delta >0) attract material from their surroundings and get denser still. They then attract more material, and get denser. This is an unstable process that eventually ends up producing enormous concentrations of matter (\delta>>1) in some locations and huge empty voids everywhere else.

This process of gravitational instability has been studied extensively in a variety of astrophysical settings. There are basically two regimes: the linear regime, covering the early stages when \delta << 1, and the non-linear regime, when large contrasts begin to form. The former is pretty well understood; the latter isn’t. Although many approximate analytical methods have been invented which capture certain aspects of the non-linear behaviour, generally speaking we have to run N-body simulations that calculate everything numerically by brute force to get anywhere.

The difference between linear and non-linear regimes is directly reflected in the Fourier-space behaviour. In the linear regime, each Fourier mode evolves independently of the others so the initial statistical form is preserved. In the non-linear regime, however, modes couple together and the initial Gaussian distribution begins to distort.

About a decade ago, Lung-Yih and I started to think about whether one might start to understand the non-linear regime a bit better by looking at the phases of the Fourier modes, an aspect of the behaviour that had been largely neglected until then. Our point was that mode-coupling effects must surely generate phase correlations that were absent in the initial random-phase configuration.

In order to explore the phase distribution we hit upon the idea of representing the phase of each Fourier mode using a colour model. Anton’s essay discussed how the RGB (red-green-blue) parametrization of colour is used on computer screens, as well as the CMY (cyan-magenta-yellow) system preferred for high-quality printing.

However, there are other systems that use parameters different to those representing basic tones in these schemes. In particular, there are colour models that involve a parameter called the hue, which represents the position of a particular colour on the colour wheel shown left. In terms of the usual RGB framework you can see that red has a hue of zero, green is 120 degrees, and blue is 240. The complementary colours cyan, magenta and yellow lie 180 degrees opposite their RGB counterparts.

This representation is handy because it can be employed in a scheme that uses colour to represent Fourier phase information. Our idea was simple. The phases of the initial conditions should be random, so in this representation the Fourier transform should just look like a random jumble of colours with equal amounts of, say, red green and blue. As non-linear mode coupling takes hold of the distribution, however, a pattern should emerge in the phases in a manner which is characteristic of gravitational instability.
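
As a rough sketch of the colour coding (my own reconstruction; the rendering used in the actual paper may well differ in detail), each phase can be mapped to a hue and then converted to RGB for display:

import numpy as np
import colorsys

def phase_to_rgb(phase):
    # map a phase in [0, 2*pi) to a fraction of the colour wheel, then to RGB
    hue = (phase % (2.0 * np.pi)) / (2.0 * np.pi)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)   # full saturation and brightness

print(phase_to_rgb(0.0))                  # red
print(phase_to_rgb(2.0 * np.pi / 3.0))    # green
print(phase_to_rgb(4.0 * np.pi / 3.0))    # blue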

I won’t go too much further into the details here, but I will show a picture that proves that it works!

What you see here are four columns. The leftmost shows (from top to bottom) the evolution of a two-dimensional simulation of gravitational clustering. You can see the structure develops hierarchically, with an increasing characteristic scale of structure as time goes on.

The second column shows a time sequence of (part of) the Fourier transform of the distribution seen in the first; for the aficionados I should say that this is only one quadrant of the transform and that the rest is omitted for reasons of symmetry. Amplitude information is omitted here and the phase at each position is represented by an appropriate hue. To represent it on this screen, however, we had to convert back to the RGB system.

The pattern is hard to see on this low-resolution plot, but two facts are noticeable. One is that a definite texture emerges, a bit like Harris Tweed, which gets stronger as the clustering develops. The other is that the relative amounts of red, green and blue do not change down the column.

The reason for the second property is that although clustering develops and the distribution of density fluctuations becomes non-Gaussian, the distribution of phases remains uniform in the sense that binning the phases of the entire Fourier transform would give a flat histogram. This is a consequence of the fact that the statistical properties of the fluctuations remain invariant under spatial translations even when they are non-linear.

Although the one-point distribution of phases stays uniform even into the strongly non-linear regime, the phases do start to learn about each other, i.e. phase correlations emerge. Columns 3 and 4 illustrate this in the simplest possible way; instead of plotting the phases of each wavemode we plot the differences between the phases of neighbouring modes in the x and y directions respectively.

If the phases are random then the phase differences are also random. In the initial state, therefore, columns 3 and 4 look just like column 2. However, as time goes on you should be able to see the emergence of a preferred colour in both columns, showing that the distribution of phase differences is no longer random.
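
The phase-difference maps in columns 3 and 4 are easy to mimic. A minimal sketch (mine, using a featureless random field as a stand-in for a real simulation) would be:

import numpy as np

delta_x = np.random.standard_normal((256, 256))   # stand-in field; use a clustered one in practice
phi = np.angle(np.fft.fft2(delta_x))              # phases of the Fourier transform

# differences between the phases of neighbouring modes in the x and y directions,
# wrapped back onto [0, 2*pi)
dphi_x = np.mod(np.diff(phi, axis=0), 2.0 * np.pi)
dphi_y = np.mod(np.diff(phi, axis=1), 2.0 * np.pi)

# for random phases these histograms are flat; mode coupling shows up as a
# preferred value of the phase difference
print(np.histogram(dphi_x, bins=8, range=(0.0, 2.0 * np.pi))[0])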

The hard work is to describe what’s going on mathematically. I’ll spare you the details of that! But I hope I’ve at least made the point that this is a useful way of demonstrating that phase correlations exist and of visualizing some of their properties.

It’s also – I think – quite a lot of fun!

P.S. If you’re interested in the original paper, you will find it in Nature, Vol. 406 (27 July 2000), pp. 376-8.

(Guest Post) What is Colour?

Posted in Art, The Universe and Stuff with tags , , , , , on February 7, 2010 by telescoper

As often happens on this blog, the comments following an item a few days ago went off in unexpected directions, one of which related to optics and vision. This led to my old friend, and regular commenter on this blog, Anthony Garrett (“Anton”), sending me an essay on the subject of colour perception and some very fine examples of abstract art. There thus appeared a perfect opportunity for another Guest Post, so for the rest of this item I’m handing over to Anton…

-0-

Some years ago I was privileged to get to know, toward the end of her life, a retired teacher from Durham called Olive Chedburn. She made wonderful greeting cards which she sent to her friends, using a technique known as encaustic art. This employs heated beeswax with coloured pigment added, and a hot iron; you can read more about it at Wikipedia.

Here are the three pieces that she sent to me:

Although I am in general not a fan of abstract art, I think these are lovely. One friend said that they resembled underwater coral scenes. To me they look more like the inside of caves or chasms, perhaps with a waterfall. One of their beauties is that they definitely look like something – but you can never quite catch what.

Olive wrote a meditation on light and colour, in nature and in the Christian Bible, which I enjoyed reading very much. The main thing she left out was the science of light and colour, of which she had no knowledge. I wrote and sent her a complementary essay about this. Peter clearly likes her art and my essay, because he kindly offered to reproduce both on his blog, as you see. Olive died two years ago and her art now stands as her memorial. I hope you enjoy it as much as I did.

My essay now follows; if you want to look into the subject in greater depth then I recommend this website, which was designed to inform artists.

Colour perception is often said to be subjective. It is less clear what that means, however. The relevant scientific notion is wavelength. Light is a wave – although, remarkably, no physical medium oscillates (unlike sound waves in air, for instance); in the language of a century ago there is no ‘aether’.

Strictly speaking it would be better to talk about the frequency of light waves, because the wavelength changes with the density of the medium through which the light passes, but the frequency is unchanged. (The product of the wavelength and the frequency is the speed of light, which is a staggering 300,000 kilometers per second in empty space.) But the change in wavelength of light passing from a vacuum into air is so small that it can be ignored for present purposes. The change in wavelength (and in wave speed) is much greater when light passes into glass, or into the transparent fluids inside the eye (a 25% reduction in water), since these media are much denser than air.
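
As a worked example (using the red wavelength of about 0.7 micrometres quoted further down), the corresponding frequency in empty space is

\nu = \frac{c}{\lambda} \approx \frac{3\times 10^{8}\ \mathrm{m\,s^{-1}}}{0.7\times 10^{-6}\ \mathrm{m}} \approx 4.3\times 10^{14}\ \mathrm{Hz},

and it is this enormous number that stays the same when the light enters glass or water, even though the wavelength shrinks.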

Light that consists of a single wavelength is called monochromatic light. Monochromatic light is not divided (further) by a prism, or by anything else that is done to it – a fact discovered by Isaac Newton in the 17th century. (Newton also reassembled the various colours back into white light.) One may superimpose differing amounts (intensities) of light of various wavelengths and look at the result. ‘White light’ is a superposition having roughly the same intensity in each colour band, as we confirm by putting it through a prism. (A prism splits light, because differing wavelengths of light entering the prism are shortened by differing amounts. The same effect creates rainbows as light passes through water droplets in the atmosphere.) In analysing colour, physics deals only with the notion of how much light of each wavelength reaches the eye – the ‘spectrum’ (formally, the spectral density function) of the light. The distribution of the light across the retina – the screen at the back of the eye – also counts; a single object may appear to be coloured somewhat differently when viewed against differing backgrounds. Light has further characteristics (such as coherence, which is significant in lasers), but they make no difference to the perception of colour. A property of light known as its polarisation may change upon reflection from – or transmission through – a medium, but polarisation of light is not itself detected by the eye. (This raises the question: Are we interested in the object we are looking upon, or the light entering our eye?)

Wavelength is precisely defined, but colours – such as ‘blue’ – relate to a (fairly narrow) band of wavelengths, such that any monochromatic beam within that band will be perceived as blue. Moreover, if I add a low intensity of white light into blue, the result will still be perceived as blue. And if, in a spectrum that is generally agreed to be white, I make a small change in the amount of one particular wavelength, the result will still generally be agreed to be white. Only black is unambiguous: it is the absence of any light, of any wavelength. (Even then, it is the perceived absence, for light that is below the sensitivity threshold of the eye does not count; we shall consider perception below.)

We perceive some objects because they emit light into our eyes, such as a LED (light-emitting diode). Light of a particular frequency/wavelength/colour is emitted when a (negatively charged) electron within an atom falls from one orbit around the positively charged atomic nucleus to another orbit around it; quantum theory tells us that only certain orbits are possible. (The difference in energy between the two orbits goes into the light that is emitted when the electron shifts orbit, and is proportional to the frequency of the light.) We see non-emitting objects because they reflect some of the light that falls on them, into our eyes. The colour that we say such an object is depends on the light that passes from the object to our eyes. This depends in turn on two factors: the combination of wavelengths falling on it; and how much of each particular wavelength the object reflects. (All light that is not reflected is absorbed, warming the object in the same way as sunbathing.) Intrinsic to the object is not its ‘colour’ but the proportion of each wavelength hitting it that it reflects. ‘Red paint’ means paint containing pigment that reflects only red light and absorbs all other colours (likewise for blue paint, etc); so that if ‘red paint’ is illuminated by a uniform mixture of light colours (i.e., white light) then only the red bounces back off it, and it looks red. But if the same object is illuminated by blue light, it absorbs the blue light so that (virtually) nothing comes off by way of reflection, and the object is perceived as black. We say that objects ‘are’ a particular colour because we generally view them in daylight or artificial white light, which contains all colours. ‘White paint’ is paint that reflects all colours and absorbs none. It looks whatever colour is shone at it – red in red light, blue in blue light, white in white light, and so on. Black paint absorbs all colours, and (uniquely) looks the same in any light.

A ‘red filter’ is something designed to let only red wavelengths through (and similarly for other filters). Something that lets all wavelengths through – the analogue of ‘white paint’ – is called transparent. (Air is virtually transparent, although it scatters blue light slightly more strongly than other wavelengths – that is why the sky, which is lit by the many wavelengths emitted by the sun, looks blue.) Something that lets no light through – the analogue of black paint – is called a barrier. On its far side from the light source it looks black.

Also important is the texture of a surface. A perfectly reflecting material is colloquially called a white surface if it is rough enough to disperse incoming light in all directions, but if it is smooth on the scale of the incoming wavelengths then it is called a mirror. Texture is also responsible for the difference between matt and gloss paint. As for the scales involved, wavelengths of light visible to humans vary from red, which is around wavelength 0.7 micrometers (a micrometer is one thousandth of a millimetre) to blue/violet, which is about half that wavelength. In contrast, radio waves, which are of the same family and speed as light, have wavelengths of hundreds of metres.

Biological science can translate the physical specification of what lands on the retina into a specific pattern of nerve impulses passing from the eye to the visual cortex. That can in turn be correlated with the person saying “it’s green” or “it’s red” (or whatever). The names of colours are learned by tradition. As a child, each of us shared with an adult the experience of perceiving light of a particular wavelength; the adult named the colour and we learned the name. If children were not taught the names of colours then a consensus would emerge among them of what to call the colours, based on the similarity of their experiences. This consensus arises in turn from the common features of their perceptive systems (eye plus visual cortex).

Every colour to which humans give a name corresponds to a characteristic shape of the spectrum of wavelengths entering the eye. Lodged in the human retina are different types of colour receptor cells, known as cones. Each type of cone contains a different light-sensitive pigment, which absorbs and reacts most strongly to light of a particular wavelength. If you fire monochromatic light at a particular cone cell and then gradually decrease the wavelength (starting from red), the cell will transmit an increasingly strong signal to the brain until its own wavelength of peak sensitivity is reached; after that the signal will fall away on the other side of the peak. Humans have three working types of cone cell, having distinct wavelengths of peak sensitivity. (The three sensitivity curves overlap to some extent.) This is why we can reasonably accurately simulate all colours that humans perceive by mixing just three colours, known as the primary colours.

People who are said to be colour-blind may have only two types of working cone, rather than three. They perceive the world differently, although they learn this only by observing that their reactions to certain wavelengths of light differ from the reactions of the majority. A man who was not colour-blind and whose cones of one particular type were suddenly switched off would see the world tinted, but a colour-blind man whose retinal cells had identical firing responses would say that things looked normal – because his brain would have trained itself from birth to regard this as the norm. Some species of animals have sensitivity spectra very different from the normal human one. Some animals see in black-and-white only (like humans at low light levels – see below); others have cone combinations with a less or a more uniform response than humans to light that is equally intense across the visual spectrum.

The mixing of primary colours of light to generate any colour known to human experience is a conceptually different problem from mixing paints to do the same. When you mix (‘add’) together light beams of the primary colours (Red, Green, Blue, roughly corresponding to the responses of the differing pigments in the three types of cone cells), you get white light. (Colour monitors and televisions have a multitude of ‘RGB’ dots.) These three are known as the ‘additive primary colours’. If you mix pigments of the three primary colours then the result is black paint, since each primary reflects only one colour, which the other primary pigments in the mixture suppress. Colour printers in fact mix cyan (which is blueish), yellow and magenta (pink-purple) in order to create all the colours known to man when the printer output is viewed in white light. These are the ‘subtractive’ primary colours, so named because if we subtract one of the additive primary colours from white light, leaving a mixture of the other two, we obtain the three subtractive primary colours. Whereas the mixing of light to obtain a desired colour is systematic, the mixing of pigment to do likewise is based on a library of knowledge gained by trial and error. Similarly, prediction of the colour of light that passes through consecutive glass jars of coloured translucent liquid (i.e., filters) is systematic, but the result of mixing the fluids is not.
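
The distinction can be made concrete with a toy numerical sketch (mine, using a crude three-band description of the spectrum): adding light beams means summing intensities, whereas mixing pigments means, roughly, multiplying reflectances band by band.

import numpy as np

# crude three-band description: [red, green, blue] intensities or reflectances
red, green, blue = np.eye(3)

# additive mixing of light beams: intensities add, giving white
print(red + green + blue)        # [1. 1. 1.]

# subtractive mixing of pigments: each pigment reflects only its own band and
# suppresses the others, so the mixture reflects (roughly) the product - black
print(red * green * blue)        # [0. 0. 0.]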

Photography is conceptually more complicated than painting. What you see depends on further factors: the light that originally hit the photosensitive recorder; the response of the photosensitive recorder; the printing of the photograph (which may compensate for deficiencies in the response); and the light that the photograph is viewed in. Furthermore, negative film followed by printing and viewing; slide film viewing; digital photography viewed onscreen; and viewing a printout of a digital photograph each provide distinct re-creations at the eye of the light coming into the viewfinder.

Human perception of colour is actually more complex than I have stated. There are other cells in the retina called rods. These are more sensitive to light than cones but do not distinguish between colours. They come into their own at low levels of illumination; as a result, human vision under dimly lit conditions is essentially black-and-white. When the light intensity increases, beginning from darkness, the cones ‘kick in’ roughly when the rods become ‘saturated’ and send out no stronger signal as the brightness increases further. The brain also appears to take into account differences between the signals coming from the three types of cone, and differences between these and the rods.

A century after Newton, Goethe wrote on colour in an apparently opposing (and highly critical) way. Although what Newton had said was correct, hindsight makes it clear that Goethe was more concerned with the perception of colour than with the physics of light. We glimpse here two different philosophies: the ‘modern’ view espoused by the Enlightenment (no pun is intended on the name) that a world exists ‘out there’ to be explained (Newton), and the ‘post-modern’ view that our sensory impressions are all we have, and are therefore the most fundamental (Goethe). Goethe took the view that colour arises from the interplay between light and dark. Nowadays we have learned that humans perceive colours when they look at a spinning disc with a particular black-and-white pattern printed on it, for instance – presenting a challenge to theories of colour perception. Although Goethe’s explanations have been superseded, he was an acute observer of colour phenomena more complex than those analysed by Newton. There is still plenty to learn about the perception of colour.

The World

Posted in Poetry, The Universe and Stuff with tags , , on January 28, 2010 by telescoper

The  poet Henry Vaughan was born in Trenewydd (Newton), near Brecon, in Wales, in 1622 and lived most of his life not far from there in the small village of Llansantffraed, where he also practised as a physician. He died in 1695. His twin brother Thomas Vaughan was a noted philosopher (and alchemist), so theirs was clearly an interesting family! Henry Vaughan followed in the footsteps of another famous Welsh metaphysical poet, George Herbert, although literary experts seem to argue about their relative merits, as literary experts are wont to do…

I’ve recently developed a bit of a thing for English (and Welsh) metaphysical poets and have included a few examples on here, partly because they are totally new to me and might therefore be new to people reading this blog, and partly because they often deal with grand themes about the Universe which gives me an excuse to include them on what I sometimes pretend is a science blog.

Like many of his ilk (including Thomas Traherne, who I’ve blogged about before) Henry Vaughan wasn’t particularly celebrated in his lifetime but he was increasingly appreciated after his death;  William Wordsworth acknowledged him as a major influence, for example. Recurring themes in Vaughan’s poems – like those of Wordsworth – are the loss of childhood innocence and a love for Nature. I’ve picked one of his most famous works as an example. It doesn’t have as strong an astronomical connection as some others, but the opening lines are so beautiful I hope you won’t mind!

The World

I saw Eternity the other night
Like a great Ring of pure and endless light
All calm as it was bright;
And round beneath it, Time, in hours, days, years,
Driven by the spheres,
Like a vast shadow moved, in which the world
And all her train were hurled.
The doting Lover in his quaintest strain
Did there complain;
Near him, his lute, his fancy, and his flights,
Wit’s sour delights;
With gloves and knots, the silly snares of pleasure;
Yet his dear treasure
All scattered lay, while he his eyes did pour
Upon a flower.

The darksome Statesman hung with weights and woe,
Like a thick midnight fog, moved there so slow
He did nor stay nor go;
Condemning thoughts, like sad eclipses, scowl
Upon his soul,
And clouds of crying witnesses without
Pursued him with one shout.
Yet digged the mole, and, lest his ways be found,
Worked under ground,
Where he did clutch his prey; but One did see
That policy.
Churches and altars fed him, perjuries
Were gnats and flies;
It rained about him blood and tears, but he
Drank them as free.

The fearful Miser on a heap of rust
Sat pining all his life there, did scarce trust
His own hands with the dust;
Yet would not place one piece above, but lives
In fear of thieves.
Thousands there were as frantic as himself,
And hugged each one his pelf.
The downright Epicure placed heaven in sense
And scorned pretence;
While others, slipped into a wide excess,
Said little less;
The weaker sort, slight, trivial wares enslave,
Who think them brave;
And poor despisèd Truth sat counting by
Their victory.

Yet some, who all this while did weep and sing,
And sing and weep, soared up into the Ring;
But most would use no wing.
‘Oh, fools,’ said I, ‘thus to prefer dark night
Before true light,
To live in grots and caves, and hate the day
Because it shows the way,
The way which from this dead and dark abode
Leaps up to God,
A way where you might tread the sun, and be
More bright than he.’
But as I did their madness so discuss,
One whispered thus,
This Ring the Bridegroom did for none provide
But for his Bride.

The Seven Year Itch

Posted in Bad Statistics, Cosmic Anomalies, The Universe and Stuff with tags , , , on January 27, 2010 by telescoper

I was just thinking last night that it’s been a while since I posted anything in the file marked cosmic anomalies, and this morning I woke up to find a blizzard of papers on the arXiv from the Wilkinson Microwave Anisotropy Probe (WMAP) team. These relate to an analysis of the latest data accumulated now over seven years of operation; a full list of the papers is given here.

I haven’t had time to read all of them yet, but I thought it was worth drawing attention to the particular one that relates to the issue of cosmic anomalies. I’ve taken the liberty of including the abstract here:

A simple six-parameter LCDM model provides a successful fit to WMAP data, both when the data are analyzed alone and in combination with other cosmological data. Even so, it is appropriate to search for any hints of deviations from the now standard model of cosmology, which includes inflation, dark energy, dark matter, baryons, and neutrinos. The cosmological community has subjected the WMAP data to extensive and varied analyses. While there is widespread agreement as to the overall success of the six-parameter LCDM model, various “anomalies” have been reported relative to that model. In this paper we examine potential anomalies and present analyses and assessments of their significance. In most cases we find that claimed anomalies depend on posterior selection of some aspect or subset of the data. Compared with sky simulations based on the best fit model, one can select for low probability features of the WMAP data. Low probability features are expected, but it is not usually straightforward to determine whether any particular low probability feature is the result of the a posteriori selection or of non-standard cosmology. We examine in detail the properties of the power spectrum with respect to the LCDM model. We examine several potential or previously claimed anomalies in the sky maps and power spectra, including cold spots, low quadrupole power, quadropole-octupole alignment, hemispherical or dipole power asymmetry, and quadrupole power asymmetry. We conclude that there is no compelling evidence for deviations from the LCDM model, which is generally an acceptable statistical fit to WMAP and other cosmological data.

Since I’m one of those annoying people who have been sniffing around the WMAP data for signs of departures from the standard model, I thought I’d comment on this issue.

As the abstract says, the  LCDM model does indeed provide a good fit to the data, and the fact that it does so with only 6 free parameters is particularly impressive. On the other hand, this modelling process involves the compression of an enormous amount of data into just six numbers. If we always filter everything through the standard model analysis pipeline then it is possible that some vital information about departures from this framework might be lost. My point has always been that every now and again it is worth looking in the wastebasket to see if there’s any evidence that something interesting might have been discarded.

Various potential anomalies – mentioned in the above abstract – have been identified in this way, but usually there has turned out to be less to them than meets the eye. There are two reasons not to get too carried away.

The first reason is that no experiment – not even one as brilliant as WMAP – is entirely free from systematic artefacts. Before we get too excited and start abandoning our standard model for more exotic cosmologies, we need to be absolutely sure that we’re not just seeing residual foregrounds, instrument errors, beam asymmetries or some other effect that isn’t anything to do with cosmology. Because it has performed so well, WMAP has been able to do much more science than was originally envisaged, but every experiment is ultimately limited by its own systematics and WMAP is no different. There is some (circumstantial) evidence that some of the reported anomalies may be at least partly accounted for by  glitches of this sort.

The second point relates to basic statistical theory. Generally speaking, an anomaly A (some property of the data) is flagged as such because it is deemed to be improbable given a model M (in this case the LCDM). In other words the conditional probability P(A|M) is a small number. As I’ve repeatedly ranted about in my bad statistics posts, this does not necessarily mean that P(M|A) – the probability of the model being right – is small. If you look at 1000 different properties of the data, you have a good chance of finding something that happens with a probability of 1 in a thousand. This is what the abstract means by a posteriori reasoning: it’s not the same as talking out of your posterior, but is sometimes close to it.
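
To put a rough number on that last point (assuming, purely for illustration, that the 1000 properties are independent), the chance of at least one of them throwing up a 1-in-1000 fluke is

1 - (1 - 10^{-3})^{1000} \approx 1 - e^{-1} \approx 0.63,

so finding one such “anomaly” is more likely than not, even if the model is exactly right.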

In order to decide how seriously to take an anomaly, you need to work out P(M|A), the probability of the model given the anomaly, which requires that you not only take into account all the other properties of the data that are explained by the model (i.e. those that aren’t anomalous), but also specify an alternative model that explains the anomaly better than the standard model. If you can do this without introducing too many free parameters, then the anomaly may be taken as compelling evidence for the alternative model. No such model exists – at least for the time being – so the message of the paper is rightly skeptical.

So, to summarize, I think what the WMAP team say is basically sensible, although I maintain that rummaging around in the trash is a good thing to do. Models are there to be tested, and surely the best way to test them is to focus on things that look odd rather than simply congratulating oneself about the things that fit? It is extremely impressive that such intense scrutiny over the last seven years has revealed so few oddities, but that just means that we should look even harder…

Before too long, data from Planck will provide an even sterner test of the standard framework. We really do need an independent experiment to see whether there is something out there that WMAP might have missed. But we’ll have to wait a few years for that.

So far it’s WMAP 7 Planck 0, but there’s plenty of time for an upset. Unless they close us all down.

Astronomy (and Particle Physics) Look-alikes, No. 10

Posted in Astronomy Lookalikes, Opera, The Universe and Stuff with tags , , , , on January 23, 2010 by telescoper

I was struck by the similarity between the design of the ATLAS detector, at the Large Hadron Collider at CERN, and that of a recent production of Les Troyens by Hector Berlioz in Valencia, Spain. How’s that for cultural impact?

Pity it had to be this Opera though. I hate it. Somebody should do a similar thing with the Magic Flute, which is actually all about particle physics

Herschel News

Posted in The Universe and Stuff with tags , , , , on January 17, 2010 by telescoper

I’ve been a bit slow to mention recent news about the European Space Agency‘s Herschel mission so this is by way of a quick update.

The first thing is to remind you that there was a big meeting of Herschel scientists in Madrid just before Christmas, which was attended by quite a number of Cardiff astronomers. It also happened to coincide with  less happy events. The purpose of this meeting was to share the preliminary results from the Science Demonstration Phase of Herschel’s operations. I did a quick post about some of the results, but didn’t have time to cover everything, which I still don’t. However, the complete set of presentations is now available online and I’d encourage you to sample some of the amazing results. Matt Griffin gave a nice overview of the key results at the RAS Ordinary Meeting just over a week ago.

You may recall that the Herschel telescope is fitted with three instruments:

  • The Photodetector Array Camera and Spectrometer (PACS)
  • The Spectral and Photometric Imaging REceiver (SPIRE)
  • The Heterodyne Instrument for the Far Infrared (HIFI)

The last of these instruments is basically a high-resolution spectrometer which, among other things, will be great for detecting spectral lines from molecules, including good old H2O. In fact, here’s a nice example of a water line seen in a comet.

The problem is that HIFI has actually been switched off for quite a while – 160 days in fact – after a fault developed in its power supply. There is a backup power-supply, of course, but the engineers didn’t want to switch it over until they had figured out what had gone wrong, which took quite a while.  However, last Thursday, the HIFI instrument was switched back on and is now working fine. The full story can be found here. It was also covered quite a bit in the general media, including  the BBC.

While HIFI was offline, the calibration and verification of PACS and SPIRE went ahead at a good speed, and now HIFI will have to catch up, which has meant a bit of juggling around with schedules, but, other than that, it’s all systems go…

Finally, I’ll just point out in case you didn’t know or have forgotten, that the Herschel Mission has its own wordpress blog, which is regularly updated  and is well worth checking out.

A Little Bit of Quantum

Posted in The Universe and Stuff with tags , , , , , , , , , , , on January 16, 2010 by telescoper

I’m trying to avoid getting too depressed by writing about the ongoing funding crisis for physics in the United Kingdom, so by way of a distraction I thought I’d post something about physics itself rather than the way it is being torn apart by short-sighted bureaucrats. A number of Cardiff physics students are currently looking forward (?) to their Quantum Mechanics examinations next week, so I thought I’d try to remind them of what a fascinating subject it really is…

The development of the kinetic theory of gases in the latter part of the 19th Century represented the culmination of a mechanistic approach to Natural Philosophy that had begun with Isaac Newton two centuries earlier. So successful had this programme been by the turn of the 20th century that it was a fairly common view among scientists of the time that there was virtually nothing important left to be “discovered” in the realm of natural philosophy. All that remained were a few bits and pieces to be tidied up, but nothing could possibly shake the foundations of Newtonian mechanics.

But shake they certainly did. In 1905 the young Albert Einstein – surely the greatest physicist of the 20th century, if not of all time – single-handedly overthrew the underlying basis of Newton’s world with the introduction of his special theory of relativity. Although it took some time before this theory was tested experimentally and gained widespread acceptance, it blew an enormous hole in the mechanistic conception of the Universe by drastically changing the conceptual underpinning of Newtonian physics. Out were the “commonsense” notions of absolute space and absolute time, and in was a more complex “space-time” whose measurable aspects depended on the frame of reference of the observer.

Relativity, however, was only half the story. Another, perhaps even more radical shake-up was also in train at the same time. Although Einstein played an important role in this advance too, it led to a theory he was never comfortable with: quantum mechanics. A hundred years on, the full implications of this view of nature are still far from understood, so maybe Einstein was correct to be uneasy.

The birth of quantum mechanics partly arose from the developments of kinetic theory and statistical mechanics that I discussed briefly in a previous post. Inspired by such luminaries as James Clerk Maxwell and Ludwig Boltzmann, physicists had inexorably increased the range of phenomena that could be brought within the descriptive framework furnished by Newtonian mechanics and the new modes of statistical analysis that they had founded. Maxwell had also been responsible for another major development in theoretical physics: the unification of electricity and magnetism into a single system known as electromagnetism. Out of this mathematical tour de force came the realisation that light was a form of electromagnetic wave, an oscillation of electric and magnetic fields through apparently empty space.  Optical light forms just part of the possible spectrum of electromagnetic radiation, which ranges from very long wavelength radio waves at one end to extremely short wave gamma rays at the other.

With Maxwell’s theory in hand, it became possible to think about how atoms and molecules might exchange energy and reach equilibrium states not just with each other, but with light. Everyday experience tells us that hot things tend to give off radiation, and a number of experiments – by Wilhelm Wien and others – had shown that there were well-defined rules determining what type of radiation (i.e. what wavelength) and how much of it was given off by a body held at a certain temperature. In a nutshell, hotter bodies give off more radiation (proportional to the fourth power of their temperature), and the peak wavelength is shorter for hotter bodies. At room temperature, bodies give off infra-red radiation; stars have surface temperatures measured in thousands of degrees, so they give off predominantly optical and ultraviolet light. Our Universe is suffused with microwave radiation corresponding to just a few degrees above absolute zero.

The name given to a body in thermal equilibrium with a bath of radiation is a “black body”, not because it is black – the Sun is quite a good example of a black body and it is not black at all – but because it is simultaneously a perfect absorber and perfect emitter of radiation. In other words, it is a body which is in perfect thermal contact with the light it emits. Surely it would be straightforward to apply classical Maxwell-style statistical reasoning to a black body at some temperature?

It did indeed turn out to be straightforward, but the result was a catastrophe. One can see the nature of the disaster very straightforwardly by taking a simple idea from classical kinetic theory. In many circumstances there is a “rule of thumb” that applies to systems in thermal equilibrium. Roughly speaking, the idea is that energy becomes divided equally between every possible “degree of freedom” the system possesses. For example, if a box of gas consists of particles that can move in three dimensions then, on average, each component of the velocity of a particle will carry the same amount of kinetic energy. Molecules are able to rotate and vibrate as well as move about inside the box, and the equipartition rule can apply to these modes too.

Maxwell had shown that light was essentially a kind of vibration, so it appeared obvious that what one had to do was to assign the same amount of energy to each possible vibrational degree of freedom of the ambient electromagnetic field. Lord Rayleigh and Sir James Jeans did this calculation and found that the amount of energy radiated by a black body as a function of wavelength should vary in proportion to the temperature T and inversely as the fourth power of the wavelength λ, as shown in the diagram for an example temperature of 5000K.

Even without doing any detailed experiments it is clear that this result just has to be nonsense. The Rayleigh-Jeans law predicts that even very cold bodies should produce infinite amounts of radiation at infinitely short wavelengths, i.e. in the ultraviolet. It also predicts that the total amount of radiation – the area under the curve in the above figure – is infinite. Even a very cold body should emit infinitely intense electromagnetic radiation. Infinity is bad.

Experiments show that the Rayleigh-Jeans law does work at very long wavelengths but in reality the radiation reaches a maximum (at a wavelength that depends on the temperature) and then declines at short wavelengths, as shown also in the above Figure. Clearly something is very badly wrong with the reasoning here, although it works so well for atoms and molecules.

It wouldn’t be accurate to say that physicists all stopped in their tracks because of this difficulty. It is amazing the extent to which people are able to carry on despite the presence of obvious flaws in their theory. It takes a great mind to realise when everyone else is on the wrong track, and a considerable time for revolutionary changes to become accepted. In the meantime, the run-of-the-mill scientist tends to carry on regardless.

The resolution of this particular fundamental conundrum is credited to Karl Ernst Ludwig “Max” Planck, who was born in 1858. He was the son of a law professor, and himself went to university at Berlin and Munich, receiving his doctorate in 1880. He became professor at Kiel in 1885, and moved to Berlin in 1888. In 1930 he became president of the Kaiser Wilhelm Institute, but resigned in 1937 in protest at the behaviour of the Nazis towards Jewish scientists. His life was blighted by family tragedies: his second son died in the First World War; both daughters died in childbirth; and his first son was executed in 1944 for his part in a plot to assassinate Adolf Hitler. After the Second World War the institute was named the Max Planck Institute, and Planck was reappointed director. He died in 1947, by then such a famous scientist that his likeness appeared on the two-Deutschmark coin issued in 1958.

Planck had taken some ideas from Boltzmann’s work but applied them in a radically new way. The essence of his reasoning was that the ultraviolet catastrophe basically arises because Maxwell’s electromagnetic field is a continuous thing and, as such, appears to have an infinite variety of ways in which it can absorb energy. When you are allowed to store energy in whatever way you like in all these modes, and add them all together you get an infinite power output. But what if there was some fundamental limitation in the way that an atom could exchange energy with the radiation field? If such a transfer can only occur in discrete lumps or quanta – rather like “atoms” of radiation – then one could eliminate the ultraviolet catastrophe at a stroke. Planck’s genius was to realize this, and the formula he proposed contains a constant that still bears his name. The energy of a light quantum E is related to its frequency ν via E=hν, where h is Planck’s constant, one of the fundamental constants that occur throughout theoretical physics.

Boltzmann had shown that if a system possesses discrete energy states, labelled by j and with energies E_j, then at a given temperature the relative occupation of those states is determined by a “Boltzmann factor” of the form:

n_{j} \propto \exp\left(-\frac{E_{j}}{k_BT}\right),

so that a higher-energy state is exponentially less probable than a lower-energy one if the energy difference is much larger than the typical thermal energy k_B T; the quantity k_B is Boltzmann’s constant, another fundamental constant. On the other hand, if the states are very close in energy compared to the thermal level then they will be roughly equally populated, in accordance with the “equipartition” idea I mentioned above.

The trouble with the classical treatment of an electromagnetic field is that it makes it too easy for the field to store infinite energy in short wavelength oscillations: it can put a little bit of energy in each of a lot of modes in an unlimited way. Planck realised that his idea would mean ultra-violet radiation could only be emitted in very energetic quanta, rather than in lots of little bits. Building on Boltzmann’s reasoning, he deduced that the probability of exciting a quantum of very high energy is exponentially suppressed. This in turn leads to an exponential cut-off in the black-body curve at short wavelengths. Triumphantly, he was able to calculate the exact form of the black-body curve expected in his theory: it matches the Rayleigh-Jeans form at long wavelengths, but turns over and decreases at short wavelengths just as the measurements require. The theoretical Planck curve matches measurements perfectly over the entire range of wavelengths that experiments have been able to probe.
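
A quick way to see the contrast between the two curves is simply to plot them. Here is a minimal Python sketch (my own, using the standard textbook forms of the two laws) for a temperature of 5000 K:

import numpy as np
import matplotlib.pyplot as plt

h  = 6.626e-34      # Planck's constant (J s)
c  = 2.998e8        # speed of light (m/s)
kB = 1.381e-23      # Boltzmann's constant (J/K)
T  = 5000.0         # temperature (K)

wavelength = np.linspace(0.1e-6, 3.0e-6, 500)   # 0.1 to 3 micrometres

# Rayleigh-Jeans law: proportional to T and to 1/wavelength^4, so it blows up
# at short wavelengths
B_rj = 2.0 * c * kB * T / wavelength**4

# Planck's law: agrees with Rayleigh-Jeans at long wavelengths but is cut off
# exponentially in the ultraviolet
B_planck = (2.0 * h * c**2 / wavelength**5) / (np.exp(h * c / (wavelength * kB * T)) - 1.0)

plt.plot(wavelength * 1e6, B_rj, label="Rayleigh-Jeans")
plt.plot(wavelength * 1e6, B_planck, label="Planck")
plt.xlabel("wavelength (micrometres)")
plt.ylabel("spectral radiance")
plt.legend()
plt.show()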

Curiously perhaps, Planck stopped short of the modern interpretation of this: that light (and other electromagnetic radiation) is composed of particles which we now call photons. He was still wedded to Maxwell’s description of light as a wave phenomenon, so he preferred to think of the exchange of energy as being quantised rather than the radiation itself. Einstein’s work on the photoelectric effect in 1905 further vindicated Planck, but also demonstrated that light travelled in packets. After Planck’s work, and the development of the quantum theory of the atom pioneered by Niels Bohr, quantum theory really began to take hold of the physics community and eventually it became acceptable to conceive of not just photons but all matter as being part particle and part wave. Photons are examples of a kind of particle known as a boson, and the atomic constituents such as electrons and protons are fermions. (This classification arises from their spin: bosons have spin which is an integer multiple of Planck’s constant, whereas fermions have half-integral spin.)

You might have expected that the radical step made by Planck would immediately have led to a drastic overhaul of the system of thermodynamics put in place in the preceding half-a-century, but you would be wrong. In many ways the realization that discrete energy levels were involved in the microscopic description of matter if anything made thermodynamics easier to understand and apply. Statistical reasoning is usually most difficult when the space of possibilities is complicated. In quantum theory one always deals fundamentally with a discrete space of possible outcomes. Counting discrete things is not always easy, but it’s usually easier than counting continuous things. Even when they’re infinite.

Much of modern physics research lies in the arena of condensed matter physics, which deals with the properties of solids and gases, often at the very low temperatures where quantum effects become important. The statistical thermodynamics of these systems is based on a very slight modification of Boltzmann’s result:

n_{j} \propto \left[\exp\left(\frac{E_{j}}{k_BT}\right)\pm 1\right]^{-1},

which gives the equilibrium occupation of states at an energy level E_j; the difference between bosons and fermions manifests itself as the sign in the denominator. Fermions take the upper “plus” sign, and the resulting statistical framework is based on the so-called Fermi-Dirac distribution; bosons have the minus sign and obey Bose-Einstein statistics. This modification of the classical theory of Maxwell and Boltzmann is simple, but leads to a range of fascinating phenomena, from neutron stars to superconductivity.
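
For the curious, here is a tiny numerical comparison (mine, with the energy measured in units of k_B T and any chemical potential set to zero for simplicity) of the two occupation formulae against the classical Boltzmann factor:

import numpy as np

x = np.linspace(0.5, 5.0, 10)             # E_j / (k_B T); avoid x = 0, where the Bose factor diverges

boltzmann     = np.exp(-x)
fermi_dirac   = 1.0 / (np.exp(x) + 1.0)   # plus sign: occupation never exceeds one
bose_einstein = 1.0 / (np.exp(x) - 1.0)   # minus sign: occupation can be large at low energy

for row in zip(x, boltzmann, fermi_dirac, bose_einstein):
    print("%4.1f   %8.4f   %8.4f   %8.4f" % row)

All three agree when E_j is much larger than k_B T, which is why the classical theory works so well at high energies or low densities.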

Moreover, the nature of the ultraviolet catastrophe for black-body radiation at the start of the 20th Century perhaps also holds lessons for modern physics. One of the fundamental problems we have in theoretical cosmology is how to calculate the energy density of the vacuum using quantum field theory. This is a more complicated thing to do than working out the energy in an electromagnetic field, but the net result is a catastrophe of the same sort. All straightforward ways of computing this quantity produce a divergent answer unless a high-energy cut-off is introduced. Although cosmological observations of the accelerating universe suggest that vacuum energy is there, its actual energy density is way too small for any plausible cutoff.

So there we are. A hundred years on, we have another nasty infinity. It’s a fundamental problem, but its answer will probably open up a new way of understanding the Universe.


Log Space

Posted in The Universe and Stuff with tags , , , on January 13, 2010 by telescoper

This is probably going to test the graphical limits of this blog to breaking point, but I thought it would be fun to put here nevertheless. This picture is a map showing the cosmos on a logarithmic scale, all the way out from the Earth’s centre to the edge of the observed Universe with the cosmological bit at the top (naturally). 

I wouldn’t mind a pound for every time this has found itself on someone’s office wall over the years!

It was made about five years ago by a group of astronomers at Princeton and if you follow the link you can find more explanation of how it was put together, as well as various versions of the plot in different formats and resolutions, so please follow it if you can’t see the picture very well here.