Archive for Cosmology

A Little Bit of Quantum

Posted in The Universe and Stuff on January 16, 2010 by telescoper

I’m trying to avoid getting too depressed by writing about the ongoing funding crisis for physics in the United Kingdom, so by way of a distraction I thought I’d post something about physics itself rather than the way it is being torn apart by short-sighted bureaucrats. A number of Cardiff physics students are currently looking forward (?) to their Quantum Mechanics examinations next week, so I thought I’d try to remind them of what a fascinating subject it really is…

The development of the kinetic theory of gases in the latter part of the 19th Century represented the culmination of a mechanistic approach to Natural Philosophy that had begun with Isaac Newton two centuries earlier. So successful had this programme been by the turn of the 20th century that it was a fairly common view among scientists of the time that there was virtually nothing important left to be “discovered” in the realm of natural philosophy. All that remained were a few bits and pieces to be tidied up, but nothing could possibly shake the foundations of Newtonian mechanics.

But shake they certainly did. In 1905 the young Albert Einstein – surely the greatest physicist of the 20th century, if not of all time – single-handedly overthrew the underlying basis of Newton’s world with the introduction of his special theory of relativity. Although it took some time before this theory was tested experimentally and gained widespread acceptance, it blew an enormous hole in the mechanistic conception of the Universe by drastically changing the conceptual underpinning of Newtonian physics. Out were the “commonsense” notions of absolute space and absolute time, and in was a more complex “space-time” whose measurable aspects depended on the frame of reference of the observer.

Relativity, however, was only half the story. Another, perhaps even more radical shake-up was also in train at the same time. Although Einstein played an important role in this advance too, it led to a theory he was never comfortable with: quantum mechanics. A hundred years on, the full implications of this view of nature are still far from understood, so maybe Einstein was correct to be uneasy.

The birth of quantum mechanics partly arose from the developments of kinetic theory and statistical mechanics that I discussed briefly in a previous post. Inspired by such luminaries as James Clerk Maxwell and Ludwig Boltzmann, physicists had inexorably increased the range of phenomena that could be brought within the descriptive framework furnished by Newtonian mechanics and the new modes of statistical analysis that they had founded. Maxwell had also been responsible for another major development in theoretical physics: the unification of electricity and magnetism into a single system known as electromagnetism. Out of this mathematical tour de force came the realisation that light was a form of electromagnetic wave, an oscillation of electric and magnetic fields through apparently empty space.  Optical light forms just part of the possible spectrum of electromagnetic radiation, which ranges from very long wavelength radio waves at one end to extremely short wave gamma rays at the other.

With Maxwell’s theory in hand, it became possible to think about how atoms and molecules might exchange energy and reach equilibrium states not just with each other, but with light. Everyday experience shows that hot things tend to give off radiation, and a number of experiments – by Wilhelm Wien and others – had established well-defined rules that determine what type of radiation (i.e. what wavelength) and how much of it is given off by a body held at a certain temperature. In a nutshell, hotter bodies give off more radiation (in proportion to the fourth power of their temperature), and the peak wavelength is shorter for hotter bodies. At room temperature, bodies give off infra-red radiation; stars have surface temperatures measured in thousands of degrees, so they give off predominantly optical and ultraviolet light. Our Universe is suffused with microwave radiation corresponding to a temperature just a few degrees above absolute zero.
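
If you want to check those numbers for yourself, here is a little Python sketch I’ve added (purely illustrative, using the standard values of the Wien and Stefan-Boltzmann constants):

```python
# A minimal sketch: Wien's displacement law (peak wavelength = b/T) and the
# Stefan-Boltzmann law (power per unit area = sigma * T^4).
WIEN_B = 2.8977719e-3    # Wien displacement constant, m K
SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

bodies = [("room-temperature object", 300.0),
          ("solar surface", 5800.0),
          ("cosmic microwave background", 2.725)]

for name, T in bodies:
    peak_nm = 1e9 * WIEN_B / T     # peak wavelength in nanometres
    power = SIGMA * T**4           # radiated power per square metre
    print(f"{name} ({T} K): peak at {peak_nm:.0f} nm, {power:.3g} W/m^2")
```

The peaks land in the infra-red, the optical and the microwave respectively, just as described above.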

The name given to a body in thermal equilibrium with a bath of radiation is a “black body”, not because it is black – the Sun is quite a good example of a black body and it is not black at all – but because it is simultaneously a perfect absorber and perfect emitter of radiation. In other words, it is a body which is in perfect thermal contact with the light it emits. Surely it would be straightforward to apply classical Maxwell-style statistical reasoning to a black body at some temperature?

It did indeed turn out to be straightforward, but the result was a catastrophe. One can see the nature of the disaster very straightforwardly by taking a simple idea from classical kinetic theory. In many circumstances there is a “rule of thumb” that applies to systems in thermal equilibrium. Roughly speaking, the idea is that energy becomes divided equally between every possible “degree of freedom” the system possesses. For example, if a box of gas consists of particles that can move in three dimensions then, on average, each component of the velocity of a particle will carry the same amount of kinetic energy. Molecules are able to rotate and vibrate as well as move about inside the box, and the equipartition rule can apply to these modes too.

Maxwell had shown that light was essentially a kind of vibration, so it appeared obvious that what one had to do was to assign the same amount of energy to each possible vibrational degree of freedom of the ambient electromagnetic field. Lord Rayleigh and Sir James Jeans did this calculation and found that the amount of energy radiated by a black body as a function of wavelength should vary in proportion to the temperature T and inversely as the fourth power of the wavelength λ, as shown in the diagram for an example temperature of 5000K:

Even without doing any detailed experiments it is clear that this result just has to be nonsense. The Rayleigh-Jeans law predicts that even very cold bodies should produce infinite amounts of radiation at infinitely short wavelengths, i.e. in the ultraviolet. It also predicts that the total amount of radiation – the area under the curve in the above figure – is infinite. Even a very cold body should emit infinitely intense electromagnetic radiation. Infinity is bad.

Experiments show that the Rayleigh-Jeans law does work at very long wavelengths but in reality the radiation reaches a maximum (at a wavelength that depends on the temperature) and then declines at short wavelengths, as shown also in the above Figure. Clearly something is very badly wrong with the reasoning here, although it works so well for atoms and molecules.

It wouldn’t be accurate to say that physicists all stopped in their tracks because of this difficulty. It is amazing the extent to which people are able to carry on despite the presence of obvious flaws in their theory. It takes a great mind to realise when everyone else is on the wrong track, and a considerable time for revolutionary changes to become accepted. In the meantime, the run-of-the-mill scientist tends to carry on regardless.

The resolution of this particular fundamental conundrum is accredited to Karl Ernst Ludwig “Max” Planck, who was born in 1858. He was the son of a law professor, and himself went to university at Berlin and Munich, receiving his doctorate in 1880. He became a professor at Kiel in 1885, and moved to Berlin in 1888. In 1930 he became president of the Kaiser Wilhelm Society, but resigned in 1937 in protest at the behaviour of the Nazis towards Jewish scientists. His life was blighted by family tragedies: his eldest son died in the First World War; both daughters died in childbirth; and his younger son Erwin was executed in 1945 for his part in the plot to assassinate Adolf Hitler. After the Second World War Planck was reappointed president of the society, which was soon renamed the Max Planck Society in his honour. He died in 1947, by then so famous a scientist that his likeness appeared on the two Deutschmark coin issued in 1958.

Planck had taken some ideas from Boltzmann’s work but applied them in a radically new way. The essence of his reasoning was that the ultraviolet catastrophe basically arises because Maxwell’s electromagnetic field is a continuous thing and, as such, appears to have an infinite variety of ways in which it can absorb energy. When you are allowed to store energy in whatever way you like in all these modes, and add them all together you get an infinite power output. But what if there was some fundamental limitation in the way that an atom could exchange energy with the radiation field? If such a transfer can only occur in discrete lumps or quanta – rather like “atoms” of radiation – then one could eliminate the ultraviolet catastrophe at a stroke. Planck’s genius was to realize this, and the formula he proposed contains a constant that still bears his name. The energy of a light quantum E is related to its frequency ν via E=hν, where h is Planck’s constant, one of the fundamental constants that occur throughout theoretical physics.

Boltzmann had shown that if a system possesses discrete energy states, labelled by j and with energies Ej, then at a given temperature the relative occupation of any two states is determined by a “Boltzmann factor” of the form:

n_{j} \propto \exp\left(-\frac{E_{j}}{k_BT}\right),

so that the higher energy state is exponentially less probable than the lower energy state if the energy difference is much larger than the typical thermal energy kBT; the quantity kB is Boltzmann’s constant, another fundamental constant. On the other hand, if the states are very close in energy compared to the thermal level then they will be roughly equally populated in accordance with the “equipartition” idea I mentioned above.
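
To make the two regimes concrete, here is a tiny illustration I’ve added (energies measured in units of the thermal energy kBT, with made-up values for the gaps):

```python
import math

# Relative occupation n_upper/n_lower of two levels separated by delta_E,
# with the energy gap expressed in units of the thermal energy kT.
def boltzmann_factor(delta_E_over_kT):
    return math.exp(-delta_E_over_kT)

print(boltzmann_factor(10.0))    # gap >> kT: upper level suppressed (~4.5e-5)
print(boltzmann_factor(0.01))    # gap << kT: nearly equal populations (~0.99)
```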

The trouble with the classical treatment of an electromagnetic field is that it makes it too easy for the field to store infinite energy in short wavelength oscillations: it can put a little bit of energy in each of a lot of modes in an unlimited way. Planck realised that his idea would mean ultra-violet radiation could only be emitted in very energetic quanta, rather than in lots of little bits. Building on Boltzmann’s reasoning, he deduced that the probability of exciting a quantum of very high energy is exponentially suppressed. This in turn leads to an exponential cut-off in the black-body curve at short wavelengths. Triumphantly, he was able to calculate the exact form of the black-body curve expected in his theory: it matches the Rayleigh-Jeans form at long wavelengths, but turns over and decreases at short wavelengths just as the measurements require. The theoretical Planck curve matches measurements perfectly over the entire range of wavelengths that experiments have been able to probe.
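
You can see both limits numerically in a short sketch (my own illustration rather than Planck’s calculation, using standard values of the constants and the 5000K of the figure):

```python
import math

h = 6.62607015e-34    # Planck's constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann's constant, J/K

def planck(lam, T):
    """Planck's black-body law B_lambda(T); expm1(x) is exp(x)-1."""
    x = h * c / (lam * kB * T)
    return (2.0 * h * c**2 / lam**5) / math.expm1(x)

def rayleigh_jeans(lam, T):
    """The classical law: proportional to T, inverse fourth power of wavelength."""
    return 2.0 * c * kB * T / lam**4

T = 5000.0
for lam in (100e-6, 10e-6, 1e-6, 0.2e-6):   # 100 microns down to 200 nm
    ratio = rayleigh_jeans(lam, T) / planck(lam, T)
    print(f"lambda = {1e6*lam:6.1f} micron: Rayleigh-Jeans/Planck = {ratio:10.3g}")
# At long wavelengths the ratio tends to one; at short wavelengths the
# classical prediction overshoots catastrophically.
```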

Curiously perhaps, Planck stopped short of the modern interpretation of this: that light (and other electromagnetic radiation) is composed of particles which we now call photons. He was still wedded to Maxwell’s description of light as a wave phenomenon, so he preferred to think of the exchange of energy as being quantised rather than the radiation itself. Einstein’s work on the photoelectric effect in 1905 further vindicated Planck, but also demonstrated that light travelled in packets. After Planck’s work, and the development of the quantum theory of the atom pioneered by Niels Bohr, quantum theory really began to take hold of the physics community and eventually it became acceptable to conceive of not just photons but all matter as being part particle and part wave. Photons are examples of a kind of particle known as a boson, and the atomic constituents such as electrons and protons are fermions. (This classification arises from their spin: bosons have integer spin, in units of the reduced Planck constant ħ, whereas fermions have half-integral spin.)

You might have expected that the radical step made by Planck would immediately have led to a drastic overhaul of the system of thermodynamics put in place in the preceding half-a-century, but you would be wrong. In many ways the realization that discrete energy levels were involved in the microscopic description of matter if anything made thermodynamics easier to understand and apply. Statistical reasoning is usually most difficult when the space of possibilities is complicated. In quantum theory one always deals fundamentally with a discrete space of possible outcomes. Counting discrete things is not always easy, but it’s usually easier than counting continuous things. Even when they’re infinite.

Much of modern physics research lies in the arena of condensed matter physics, which deals with the properties of solids and liquids, often at the very low temperatures where quantum effects become important. The statistical thermodynamics of these systems is based on a very slight modification of Boltzmann’s result:

n_{j} \propto \left[\exp\left(\frac{E_{j}}{k_BT}\right)\pm 1\right]^{-1},

which gives the equilibrium occupation of states at an energy level Ej; the difference between bosons and fermions manifests itself as the sign in the denominator. Fermions take the upper “plus” sign, and the resulting statistical framework is based on the so-called Fermi-Dirac distribution; bosons have the minus sign and obey Bose-Einstein statistics. This modification of the classical theory of Maxwell and Boltzmann is simple, but leads to a range of fascinating phenomena, from neutron stars to superconductivity.
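
In code the modification really is as slight as it looks. Here is a sketch (with the chemical potential set to zero, a simplification I’ve made for illustration):

```python
import math

def occupation(E_over_kT, kind):
    """Mean equilibrium occupation of a state: +1 gives Fermi-Dirac,
    -1 gives Bose-Einstein (chemical potential taken as zero here)."""
    sign = {"fermion": +1.0, "boson": -1.0}[kind]
    return 1.0 / (math.exp(E_over_kT) + sign)

# Far above the thermal energy both reduce to the Boltzmann factor...
print(occupation(10.0, "fermion"), occupation(10.0, "boson"))   # both ~4.5e-5
# ...but at low energies they differ sharply: fermion occupation can never
# exceed one, while boson occupation grows without limit.
print(occupation(0.1, "fermion"), occupation(0.1, "boson"))     # ~0.48 vs ~9.5
```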

Moreover, the ultraviolet catastrophe that afflicted black-body radiation at the start of the 20th Century perhaps also holds lessons for modern physics. One of the fundamental problems we have in theoretical cosmology is how to calculate the energy density of the vacuum using quantum field theory. This is a more complicated thing to do than working out the energy in an electromagnetic field, but the net result is a catastrophe of the same sort. All straightforward ways of computing this quantity produce a divergent answer unless a high-energy cut-off is introduced. Although cosmological observations of the accelerating universe suggest that vacuum energy is there, its actual energy density is way too small for any plausible cutoff.

So there we are. A hundred years on, we have another nasty infinity. It’s a fundamental problem, but its answer will probably open up a new way of understanding the Universe.



Log Space

Posted in The Universe and Stuff on January 13, 2010 by telescoper

This is probably going to test the graphical limits of this blog to breaking point, but I thought it would be fun to put here nevertheless. This picture is a map showing the cosmos on a logarithmic scale, all the way out from the Earth’s centre to the edge of the observed Universe with the cosmological bit at the top (naturally). 

I wouldn’t mind a pound for every time this has found itself on someone’s office wall over the years!

It was made about five years ago by a group of astronomers at Princeton and if you follow the link you can find more explanation of how it was put together, as well as various versions of the plot in different formats and resolutions, so please follow it if you can’t see the picture very well here.

The Hubble Ultra Deep Field in Three Dimensions

Posted in The Universe and Stuff on January 4, 2010 by telescoper

I came across this video about the Hubble Ultra Deep Field (which I have blogged about before) and thought you might enjoy it. I think it’s fairly self-explanatory too!

A Compression of Distances

Posted in Biographical, Poetry, The Universe and Stuff on December 28, 2009 by telescoper

I’m back in Cardiff after a few days of yuletide indulgence in my home town of Newcastle upon Tyne in the North East of England. And very nice it was too, although my mass has increased as a consequence. We didn’t do much except eat and drink, although we did manage a scenic drive on Boxing Day through the beautiful Northumberland countryside, even more beautiful than usual because of the covering of snow that fell heavily before Christmas and never got round to melting.

Last year I did the round trip from Cardiff to Newcastle by train, which is quite a lengthy ordeal, but this year the powers that be have decided to close the main railway line from South Wales into England (via Bristol) because of engineering work. Route B, via Cheltenham and Birmingham, was also closed, so the only way to do the journey by train would have been via Manchester, a trip of around 8 hours each way. It wasn’t a very difficult decision therefore to abandon the railways this year and fly, which turned out to be remarkably painless. Although we landed in snow at Newcastle the planes both ways were on time and, with a flying time of less than an hour, I had much more time for sloth and gluttony.

Just before I left for my short break a book sent from Cinnamon Press popped through my letterbox. I occasionally post bits of poetry on here, and if there’s any doubt about copyright I always check with the publisher before putting them online. I had a nice exchange of emails with this particular publisher as a result of which they sent me a collection of poems they thought I might like to feature. This one is called A Compression of Distances and it’s by a poet quite new to me, Daphne Gloag.

Poetry books are ideal for reading on short trips on train or plane. They’re usually slim so they are easy to carry and you can read them one poem at a time in between pesky interruptions, such as take-off and landing. I didn’t have time to read this one before leaving so I put it in my pocket and took it with me. Given the changed mode of travel this year, the title seemed quite appropriate for this journey!

Anyway, it’s a very interesting collection altogether, but there are a few poems at the end, taken from a much longer collection called Beginnings, which seem to me to be the most appropriate to put on here. I agree wholeheartedly with the comments on the jacket by John Latham:

Her poems are remarkable, especially in the way she has successfully taken complex concepts in modern science – particularly cosmology – and integrated them successfully and seamlessly into poems which speak of the human condition in an effective and moving manner.

I have to say that it is a difficult task to combine modern physics with poetry. Often, attempts to do this either completely trivialise the scientific content or become tiresomely didactic. I think these poems get it just right. What Daphne Gloag does is to juxtapose ideas from contemporary cosmology (inflation, dark matter, etc) with diverse aspects of human experience. The parallels are often very moving as well as ingenious. The poems are also preceded by brief explanations of the physics. Here is one of the best examples.

The children’s charity concert:
matter and antimatter

Particles and antiparticles are interchangeable, but just after the big bang the process whereby they kept annihilating each other ended by producing very slightly more matter than antimatter, making the universe possible.

Arriving at the church for the children’s charity concert
we remembered the words of Richard Feynman:
Created and annihilated,
created and annihilated –
what a waste of time.

He was speaking of those particles and antiparticles
at the beginning of time
annihilated in explosions of light.

In the church the children were playing
for the refugees of Kosovo;
our granddaughter’s long hair shone
like the sheen of her violin.
She did not know
she was a child of that hair’s breadth victory
of particles over antiparticles
in the early universe: annihilation
for all but a few, a final imbalance
just enough for making galaxies and worlds
and at that end of time
those children and the making of their years.

They played Bach and Twinkle twinkle little star,
not knowing what a star is
or the violence of stars,
not knowing they were perfected children
of the violent universe,
not knowing the years piled up on the scrap heaps
of that country they’d raised money for…
the man with his ear sawn off slowly
and fed to a dog like offal, the girl
with her legs torn off, her family machine gunned,
blown into darkness.

So many annihilations of perfected years.
But also those children in their panache of light.

You can order a copy of A Compression of Distances by Daphne Gloag directly from the publisher.

Dark Matter Rumour

Posted in The Universe and Stuff on December 8, 2009 by telescoper

In between a morning session – technically a “half-away-day” – discussing Strategic Issues in the Development of Postgraduate Research at Cardiff University (zzzz..) and tootling off to Bristol this afternoon to give a recapitulation of my public lecture on the Cosmic Web to the South-West Branch of the Institute of Physics, I don’t have time to post much today.

I will, however, take the opportunity to do what the blogosphere does best, which is to spread unfounded (or perhaps partly founded) rumours. If it’s true, this one is a biggy, but I’m not responsible for any loss or damage arising if it turns out to be untrue…

The rumour (which I first heard about here and then, a bit later, there) is that the Cryogenic Dark Matter Search (CDMS) experiment (which is based down a mine in Minnesota, but run from the University of California at Berkeley) is about to announce the direct discovery of dark matter.

I don’t have any inside information, but it is alleged that the collaboration has had a paper accepted by Nature – and they generally only publish really significant results rather than upper limits (unless they are to do with gravitational waves). Nature articles are embargoed until publication, meaning that the collaboration can’t release the results or talk about them until December 18…

..so I guess you will just have to wait!

The Cosmic Web

Posted in The Universe and Stuff on November 23, 2009 by telescoper

When I was writing my recent (typically verbose) post about chaos on a rainy Saturday afternoon, I cut out a bit about astronomy because I thought it was too long even by my standards of prolixity. However, walking home this evening I realised I could actually use it in a new post inspired by a nice email I got after my Herschel lecture in Bath. More of that in a minute, but first the couple of paras I cut from the chaos item…

Astronomy provides a nice example that illustrates how easy it is to make things too complicated to solve. Suppose we have two massive bodies orbiting in otherwise empty space. They could be the Earth and Moon, for example, or a binary star system. Each of the bodies exerts a gravitational force on the other that causes it to move. Newton himself showed that the orbit followed by each of the bodies is an ellipse, and that both bodies orbit around their common centre of mass. The Earth is much more massive than the Moon, so the centre of mass of the Earth-Moon system is rather close to the centre of the Earth. Although the Moon appears to do all the moving, the Earth orbits too. If the two bodies have equal masses, they each orbit the mid-point of the line connecting them, like two dancers doing a waltz.

Now let us add one more body to the dance. It doesn’t seem like too drastic a complication to do this, but the result is a mathematical disaster. In fact there is no general analytical solution for the gravitational three-body problem, apart from a few special cases where some simplifying symmetry helps us out. The same applies to the N-body problem for any N bigger than 2. We cannot solve the equations for systems of gravitating particles except by using numerical techniques and very big computers. We can do this very well these days, however, because computer power is cheap.

Computational cosmologists can “solve” the N-body problem for billions of particles, by starting with an input list of positions and velocities of all the particles. From this list the forces on each of them due to all the other particles can be calculated. Each particle is then moved a little according to Newton’s laws, thus advancing the system by one time-step. Then the forces are all calculated again and the system inches forward in time. At the end of the calculation, the solution obtained is simply a list of the positions and velocities of each of the particles. If you would like to know what would have happened with a slightly different set of initial conditions you need to run the entire calculation again. There is no elegant formula that can be applied for any input: each laborious calculation is specific to its initial conditions.
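
For the curious, the innards of such a code are conceptually very simple. Here is a toy direct-summation version in Python (a sketch of mine, in arbitrary units and with a hundred particles; real cosmological codes use much cleverer tree and mesh algorithms, and billions of particles):

```python
import numpy as np

def nbody_step(pos, vel, masses, dt, G=1.0, soft=0.01):
    """One time-step: sum the gravitational force on each particle from all
    the others, then nudge velocities and positions forward."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        diff = pos - pos[i]                    # vectors to all other particles
        r2 = (diff**2).sum(axis=1) + soft**2   # softened squared separations
        inv_r3 = r2**-1.5
        inv_r3[i] = 0.0                        # a particle exerts no force on itself
        acc[i] = G * (masses[:, None] * diff * inv_r3[:, None]).sum(axis=0)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# The input is a list of positions and velocities; the output is just
# another such list, one time-step later.
rng = np.random.default_rng(1)
pos = rng.uniform(-1.0, 1.0, size=(100, 3))
vel = np.zeros((100, 3))
masses = np.ones(100)
for _ in range(1000):
    pos, vel = nbody_step(pos, vel, masses, dt=1e-3)
```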

Now back to the Herschel lecture I gave, called The Cosmic Web, the name given to the frothy texture of the large-scale structure of the Universe revealed by galaxy surveys such as the 2dFGRS:

One of the points I tried to get across in the lecture was that we can explain the pattern – quite accurately – in the framework of the Big Bang cosmology by a process known as gravitational instability. Small initial irregularities in the density of the Universe tend to get amplified as time goes on. Regions just a bit denser than average tend to pull in material from their surroundings faster, getting denser and denser until they collapse in on themselves, thus forming bound objects.

This Jeans instability is the dominant mechanism behind star formation in molecular clouds, and it leads to the rapid collapse of blobby extended structures to tightly bound clumps. On larger scales relevant to cosmological structure formation we have to take account of the fact that the universe is expanding. This means that gravity has to fight against the expansion in order to form structures, which slows it down. In the case of a static gas cloud the instability grows exponentially with time, whereas in an expanding background it is a slow power-law.
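
The difference between the two growth laws is enormous. As a purely illustrative comparison (the numbers are invented, and I’ve taken the standard matter-dominated result that the density contrast grows roughly as time to the two-thirds power):

```python
import math

t_ratio = 100.0   # let time increase a hundredfold, in units of the growth time

exponential = math.exp(t_ratio)       # static cloud: exponential Jeans growth
power_law = t_ratio ** (2.0 / 3.0)    # expanding, matter-dominated universe:
                                      # density contrast grows roughly as t^(2/3)

print(f"{exponential:.3g}")   # ~2.7e43 -- runaway collapse
print(f"{power_law:.3g}")     # ~21.5   -- gentle amplification
```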

This actually helps us in cosmology because the process of structure formation is not so fast that it destroys all memory of the initial conditions, which is what happens when stars form. When we look at the large-scale structure of the galaxy distribution we are therefore seeing something which contains a memory of where it came from. I’ve blogged before about what started the whole thing off here.

Here’s a (very low-budget) animation of the formation of structure in the expanding universe as computed by an N-body code. The only subtlety in this is that it is in comoving coordinates, which expand with the universe: the box should really be getting bigger but is continually rescaled with the expansion to keep it the same size on the screen.

You can see that filaments form in profusion but these merge and disrupt in such a way that the characteristic size of the pattern evolves with time. This is called hierarchical clustering.

One of the questions I got by email after the talk was basically that if the same gravitational instability produced stars and large-scale structure, why wasn’t the whole universe just made of enormous star-like structures rather than all these strange filaments and things?

Part of the explanation is that the filaments are relatively transient things. The dominant picture is one in which the filaments and clusters become incorporated in larger-scale structures, but really dense concentrations, such as the spiral galaxies, which do indeed look a bit like big solar systems, are relatively slow to form.

When a non-expanding cloud of gas collapses to form a star there is also some transient filamentary structure  but the processes involved go so rapidly that it is all swept away quickly. Out there in the expanding universe we can still see the cobwebs.

Aquae Sulis

Posted in Books, Talks and Reviews, The Universe and Stuff on November 19, 2009 by telescoper

Just time for a quick post this lunchtime, in between a whole day of meetings with students about projects and other things. This afternoon I have to whizz off to the fine city of Bath where this evening I am giving a public lecture jointly organized by the University of Bath and the William Herschel Society (which is based in Bath).

The title of my talk is The Cosmic Web, and a brief outline is as follows.

The lecture will focus on the large scale structure of the Universe and the ideas that physicists are weaving together to explain how it came to be the way it is.

Over the last few decades astronomers have revealed that our cosmos is not only vast in scale – at least 14 billion light years in radius – but also exceedingly complex in texture, with galaxies and clusters of galaxies linked together in chains and sheets that trace out an immense network of structures we call the Cosmic Web.

Cosmologists have developed theoretical explanations for its origin that involve such exotic concepts as ‘dark matter’ and ‘cosmic inflation’, producing a cosmic web of ideas that is in many ways as rich and fascinating as the Universe itself.

The University of Bath website has more details of the talk, and I think they are going to do a podcast too. I’ll actually be doing a recap in a couple of weeks’ time in Bristol at an event for the Institute of Physics, of which more anon.

Bath is only about an hour from Cardiff by train and I’m very much looking forward to this trip as I have never been to the University of Bath before. I remember from my schooldays that the Romans named the place Aquae Sulis (or, as my Latin teacher Mr Keating who couldn’t pronounce his esses would say, Aquae Thulith). The local waters were famous for their healing powers even before the Romans got to England, and the Celtic inhabitants attributed this to a deity they called Sulis. The Romans kept the name, although they decided that Sulis was actually their goddess Minerva in disguise. The Romans were good at appropriating local traditions like that.

The only potential fly in the ointment is the British weather, which has been terrible over the last week or so and further deluges are forecast this afternoon and evening. As I write, though, it’s actually fine and sunny and the weather map suggests the worst of the current band of rain has passed to the north of here. I hope I’m not tempting providence, and that there won’t be too much of the aquae heading in my direction!

The Monkey Complex

Posted in Bad Statistics, The Universe and Stuff on November 15, 2009 by telescoper

There’s an old story that if you leave a set of monkeys hammering on typewriters for a sufficiently long time then they will eventually reproduce the entire text of Shakespeare’s play Hamlet. It comes up in a variety of contexts, but the particular generalisation of this parable in cosmology is to argue that if we live in an enormously big universe (or “multiverse“), in which the laws of nature (as specified by the relevant fundamental constants) vary “sort of randomly” from place to place, then there will be a domain in which they have the right properties for life to evolve. This is one way of explaining away the apparent fine-tuning of the laws of physics: they’re not finely tuned, but we just live in a place where they allowed us to evolve. Although it may seem an easy step from monkeys to the multiverse, it always seemed to me a very shaky one.

For a start, let’s go back to the monkeys. The supposition that, given an infinite time, the monkeys must produce every possible finite sequence is not necessarily true. It depends on how they type. If the monkeys were always to hit two adjoining keys at the same time then they would never produce a script for Hamlet, no matter how long they typed for, as the combinations QW or ZX do not appear anywhere in that play. To guarantee what we need, their typing has to be ergodic, a very specific requirement not possessed by all “random” sequences.

A more fundamental problem is what is meant by randomness in the first place. I’ve actually commented on this before, in a post that still seems to be collecting readers so I thought I’d develop one or two of the ideas a little.

 It is surprisingly easy to generate perfectly deterministic mathematical sequences that behave in the way we usually take to characterize indeterministic processes. As a very simple example, consider the following “iteration” scheme:

 X_{j+1}= 2 X_{j} \mod(1)

If you are not familiar with the notation, the term mod(1) just means “drop the integer part”.  To illustrate how this works, let us start with a (positive) number, say 0.37. To calculate the next value I double it (getting 0.74) and drop the integer part. Well, 0.74 does not have an integer part so that’s fine. This value (0.74) becomes my first iterate. The next one is obtained by putting 0.74 in the formula, i.e. doubling it (1.48) and dropping  the integer part: result 0.48. Next one is 0.96, and so on. You can carry on this process as long as you like, using each output number as the input state for the following step of the iteration.

Now to simplify things a little bit, notice that, because we drop the integer part each time, all iterates must lie in the range between 0 and 1. Suppose I divide this range into two bins, labelled “heads” for X less than ½ and “tails” for X greater than or equal to ½. In my example above the first value of X is 0.37 which is “heads”. Next is 0.74 (tails); then 0.48 (heads), 0.96(heads), and so on.

This sequence now mimics quite accurately the tossing of a fair coin. It produces a pattern of heads and tails with roughly 50% frequency in a long run. It is also difficult to predict the next term in the series given only the classification as “heads” or “tails”.

However, given the seed number which starts off the process, and of course the algorithm, one could reproduce the entire sequence. It is not random, but in some respects looks like it is.
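
Here is the scheme as a few lines of Python (a sketch; note that exact rational arithmetic is needed, because with ordinary floating-point numbers the doubling gradually shifts bits out of the mantissa and the sequence collapses to zero after fifty-odd steps):

```python
from fractions import Fraction

def coin_sequence(seed, n):
    """Iterate X -> 2X mod 1, classifying each value as heads (X < 1/2)
    or tails (X >= 1/2)."""
    x = Fraction(seed)
    outcomes = []
    for _ in range(n):
        outcomes.append("H" if x < Fraction(1, 2) else "T")
        x = (2 * x) % 1
    return "".join(outcomes)

print(coin_sequence("0.37", 20))   # HTHHT... -- deterministic, yet coin-like
```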

One can think of “heads” or “tails” in more general terms, as indicating the “0” or “1” states in the binary representation of a number. This method can therefore be used to generate any sequence of digits. In fact algorithms like this one are used in computers for generating what are called pseudorandom numbers. They are not precisely random because computers can only do arithmetic to a finite number of decimal places. This means that only a finite number of possible sequences can be computed, so some repetition is inevitable, but these limitations are not always important in practice.

The ability to generate random numbers accurately and rapidly in a computer has led to an entirely new way of doing science. Instead of doing real experiments with measuring equipment and the inevitable errors, one can now do numerical experiments with pseudorandom numbers in order to investigate how an experiment might work if we could do it. If we think we know what the result would be, and what kind of noise might arise, we can do a random simulation to discover the likelihood of success with a particular measurement strategy. This is called the “Monte Carlo” approach, and it is extraordinarily powerful. Observational astronomers and particle physicists use it a great deal in order to plan complex observing programmes and convince the powers that be that their proposal is sufficiently feasible to be allocated time on expensive facilities. In the end there is no substitute for real experiments, but in the meantime the Monte Carlo method can help avoid wasting time on flawed projects:

…in real life mistakes are likely to be irrevocable. Computer simulation, however, makes it economically practical to make mistakes on purpose.

(John McLeod and John Osborne, in Natural Automata and Useful Simulations).
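
As a toy example of the idea – nothing to do with telescope proposals, just the classic textbook illustration – one can estimate π by scattering pseudorandom points in a square:

```python
import random

def estimate_pi(n, seed=42):
    """Monte Carlo: the fraction of points in the unit square that fall
    inside the quarter-circle of radius 1 approaches pi/4."""
    rng = random.Random(seed)   # pseudorandom, so the "experiment" is repeatable
    inside = sum(rng.random()**2 + rng.random()**2 < 1.0 for _ in range(n))
    return 4.0 * inside / n

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))    # the estimate tightens as the sample grows
```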

So is there a way to tell whether a set of numbers is really random? Consider the following sequence:

1415926535897932384626433832795028841971

Is this a random string of numbers? There doesn’t seem to be a discernible pattern, and each possible digit seems to occur with roughly the same frequency. It doesn’t look like anyone’s phone number or bank account. Is that enough to make you think it is random?

Actually this is not at all random. If I had started it with a three and a decimal point you might have cottoned on straight away. “3.1415926..” is the first few digits in the decimal representation of π. The full representation goes on forever without repeating. This is a sequence that satisfies most naïve definitions of randomness. It does, however, provide something of a hint as to how we might construct an operational definition, i.e. one that we can apply in practice to a finite set of numbers.

The key idea originates from the Russian mathematician Andrei Kolmogorov, who wrote the first truly rigorous mathematical work on probability theory in 1933. Kolmogorov’s approach was considerably ahead of its time, because it used many concepts that belong to the era of computers. In essence, what he did was to provide a definition of the complexity of an N-digit sequence in terms of the smallest amount of computer memory it would take to store a program capable of generating the sequence. Obviously one can always store the sequence itself, which means that there is always a program that occupies about as many bytes of memory as the sequence itself, but some numbers can be generated by codes much shorter than the numbers themselves. For example the sequence

111111111111111111111111111111111111

can be generated by the instruction to “print 1 35 times”, which can be stored in much less memory than the original string of digits. Such a sequence is therefore said to be algorithmically compressible.

There are many ways of calculating the digits of π numerically also, so although it may look superficially like a random string it is most definitely not random. It is algorithmically compressible.
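
Kolmogorov’s complexity cannot be computed exactly in general, but a general-purpose compressor gives a rough working proxy. A quick sketch using Python’s zlib:

```python
import random
import zlib

ones = "1" * 10_000
digits = "".join(random.choice("0123456789") for _ in range(10_000))

for label, s in (("all ones", ones), ("pseudorandom digits", digits)):
    n = len(zlib.compress(s.encode()))
    print(f"{label}: {len(s)} characters -> {n} bytes compressed")
# The repetitive string squeezes down to a few dozen bytes; the "random"
# digits hardly compress at all -- a crude, practical echo of algorithmic
# compressibility (which is not computable in general).
```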

The complexity of a sequence can be defined to be the length of the shortest program capable of generating it. If no algorithm can be found that compresses the sequence into a program shorter than itself then it is maximally complex and can suitably be defined as random. This is a very elegant description, and has good intuitive appeal.  

I’m not sure how compressible Hamlet is, but it’s certainly not entirely random. At any rate, when I studied it at school, I certainly wished it were a little shorter…

However, this still does not provide us with a way of testing rigorously whether a given finite sequence has been produced “randomly” or not.

If an algorithmic compression can be found then that means we declare the given sequence not to be random. However we can never be sure if the next term in the sequence would fit with what our algorithm would predict. We have to argue, inferentially, that if we have fitted a long sequence with a simple algorithm then it is improbable that the sequence was generated randomly.

On the other hand, if we fail to find a suitable compression that doesn’t mean it is random either. It may just mean we didn’t look hard enough or weren’t clever enough.

Human brains are good at finding patterns. When we can’t see one we usually take the easy way out and declare that none exists. We often model a complicated system as a random process because it is too difficult to predict its behaviour accurately even if we know the relevant laws and have powerful computers at our disposal. That’s a very reasonable thing to do when there is no practical alternative.

It’s quite another matter, however,  to embrace randomness as a first principle to avoid looking for an explanation in the first place. For one thing, it’s lazy, taking the easy way out like that. And for another it’s a bit arrogant. Just because we can’t find an explanation within the framework of our current theories doesn’t mean more intelligent creatures than us won’t do so. We’re only monkeys, after all.

Lev Kofman

Posted in The Universe and Stuff on November 14, 2009 by telescoper

June 17, 1957 – November 12, 2009

I heard yesterday from Andrew Jaffe of the death a few days ago of Lev Kofman, from cancer. Lev was a wonderfully spontaneous and generous character as well as a very fine physicist. I hadn’t known that he was ill, which made the news of his death all the more shocking and the sense of loss even deeper. My thoughts and those of my colleagues who were lucky enough to know Lev are with his family and friends at what must be a difficult time for them.

I first met Lev about twenty years ago and we bumped into each other fairly frequently over the following years. Then I went on sabbatical to Toronto, where Lev was based, and therefore spent quite a bit of time with him talking cosmology, drinking and failing to play football. It’s hard to believe that now, just a few years later, the wonderful light he cast on those around him has actually gone out. He was such a hive of activity all the time that I once joked the Lev should be a unit of energy (like the GeV).

I’m sure there will be very many formal tributes paid to Lev by people who knew him far better than me – there is an item on Cosmic Variance which is worth reading if you didn’t know much about him. For my part, I’ll just say that I liked and admired him enormously and the field of cosmology will be much poorer for his passing.

An email letter was sent out by Lev’s family and friends, which I hope they will not mind me reproducing here, as I think it perfectly conveys the deep affection which Lev inspired in all who had the opportunity to meet and work with him.

We are deeply saddened  to inform you that the fabulous Lev Kofman, husband of Anna, father of Sergei 13 and Maria 15, brother of Svetlana, and our great friend, died in the early morning of November 12 from cancer. Many of you were able to commune with Lev as the situation deteriorated over the past weeks, by visits, phone calls, and emails read to him. We are deeply grateful for that: and it provided some solace for Lev to know the tremendous impact he has had on the lives of so many of you.

He bravely kept the physics going strong throughout his illness, characteristic of Lev. His scientific outpourings and influence  will transcend this passage. As you know, he made fundamental contributions to Lambda cosmology and dark energy, structure in the cosmic web, inflationary theory, its Gaussian and non-Gaussian aspects, and gravitational waves. He initiated and developed the theory of preheating, showing how all matter could arise from a coherent vacuum energy at the end of inflation, his cosmic baby. And much more besides. He was the quintessential leader, for CITA and CIFAR as a whole, and for the vibrant early universe group he established, providing inspirational guidance to a generation of young researchers.

He felt the physics to his very core. Beyond this, it is the indomitable, fun-loving, deeply philosophical spirit, a gourmand of life in all its manifestations, that we will miss so much.

With our best wishes in these sad times,

Anna Chandarina (Kofman)
Svetlana Kofman
Dick Bond
Andrei Linde
Renata Kallosh

And if you never had the chance to see the man in action you can find some videos of lectures he gave at the Perimeter Institute here.

Planck’s Progress

Posted in The Universe and Stuff on November 10, 2009 by telescoper

Only time for a very quick post today, so I thought I’d just pass on some news I got via Chris North about how Planck is doing. As it happens, the satellite has recently reached the point where it has observed about half the sky. It spins on its axis in rather stately fashion (at about one revolution per minute) and, as it moves in its orbit, that sweeps the telescope across the celestial sphere. Each scan is almost a great circle, but these gradually creep around over about a six month period to cover the whole sky.

The nice picture below, in ecliptic coordinates, shows how far it has got. You can also see the Galactic plane, arching across the sky and showing up clearly at the frequencies Planck is sensitive to.

The Planck Consortium had an official meeting last week in Bologna at which they drank lots of wine and ate lots of food, but other than that nobody who was there has told me anything.

It’s all very hush hush don’t you know.