Archive for the The Universe and Stuff Category

The Last Experiment (via In the Dark)

Posted in Science Politics, The Universe and Stuff on September 16, 2010 by telescoper

Today is this blog’s second birthday, so I thought I’d celebrate with a self-auto-reblog-type-of-thing of a post I wrote that memorable day in September 2008 when I started doing all this nonsense, which was just after the LHC had been switched on for the first time…

Actually, I really just wanted to see how this new reblog gizmo works.

I've launched myself into the blogosphere just a bit too late for the feeding frenzy surrounding the switching on of the Large Hadron Collider at CERN last week. Obviously the event itself was a bit of a non-event as it will take years for anything interesting to come out the other end of its multi-billion-dollar tunnel. There are a couple of things worth saying in retrospect, though, now that the dust has settled. The first is about all this non … Read More

via In the Dark

Hot Stuff, Looking Cool..

Posted in The Universe and Stuff with tags , , , , , on September 15, 2010 by telescoper

It’s nice for a change to have an excuse to write something about science rather than science funding, as a press release appeared today concerning the discovery of a new supercluster by Planck in collaboration with the X-ray observatory XMM-Newton.

The physics behind this new discovery concerns what happens to low-energy photons from the cosmic microwave background (CMB) when they are scattered by extremely hot plasma. Basically, incoming microwave photons collide with highly energetic electrons with the result that they gain energy and so are shifted to shorter wavelengths. The generic name given to this process is inverse Compton scattering, and it can happen in a variety of physical contexts. In cosmology, however, there is a particularly important situation where this process has observable consequences, when CMB photons travel through the extremely hot (but extremely tenuous) ionized gas in a cluster of galaxies. In this setting the process is called the Sunyaev-Zel’dovich effect.

The observational consequence is slightly paradoxical, because the microwave background appears to have a lower temperature (at least over a certain range of wavelengths) in the direction of a galaxy cluster (in which the plasma can have a temperature of 10 million degrees or more). This is because fewer photons reach the observer in the microwave part of the spectrum than would if the cluster did not intervene; the missing ones have been kicked up to higher energies and are therefore not seen at their original wavelength, ergo the CMB looks a little cooler along the line of sight to a cluster than in other directions. To put it another way, what has actually happened is that the hot electrons have distorted the spectrum of the photons passing through them.

Here’s an example of the Sunyaev-Zel’dovich effect in action as seen by Planck in seven frequency bands:

At low frequencies (in the Rayleigh-Jeans part of the spectrum) the region where the cluster is looks cooler than average, although at high frequencies the effect is reversed.
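This frequency behaviour can be sketched in a few lines. In the non-relativistic thermal limit the fractional temperature shift is ΔT/T = y·f(x), where y is the Comptonization parameter (which just scales the whole curve, so I leave it out below) and x = hν/kT_CMB; the standard spectral function f(x) changes sign near 217 GHz, which is exactly the cool-below/hot-above pattern seen in the Planck bands:

```python
import math

T_CMB = 2.725           # CMB temperature in kelvin
H_OVER_K = 4.799e-11    # Planck constant over Boltzmann constant, in s*K

def sz_spectral_function(nu_ghz):
    """Thermal SZ spectral shape f(x), where Delta T / T = y * f(x)
    and x = h*nu / (k * T_CMB)."""
    x = H_OVER_K * nu_ghz * 1e9 / T_CMB
    return x * (math.exp(x) + 1.0) / (math.exp(x) - 1.0) - 4.0

# Rayleigh-Jeans regime: f -> -2, so the cluster looks cooler...
print(sz_spectral_function(30.0))    # about -1.95
# ...while above the ~217 GHz crossover the sign flips and it looks hotter:
print(sz_spectral_function(353.0))   # about +2.2
```

The frequencies 30 and 353 GHz are chosen because they bracket the Planck bands; the null at ~217 GHz is what lets one separate the SZ signal from the primordial CMB fluctuations.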

The magnitude of the temperature distortion produced by a cluster depends on the density of electrons in the plasma pervading the cluster n, the temperature of the plasma T, and the overall size of the cluster; in fact, it’s proportional to n×T integrated along the line of sight through the cluster.

What makes this new result so interesting is that it combines very sensitive measurements of the microwave background temperature pattern with sensitive measurements of the X-ray emission over the same region of the sky. Plasma hot enough to produce a Sunyaev-Zel’dovich distortion of the CMB spectrum will also generate X-rays through a process known as thermal bremsstrahlung. The power of the X-ray emission depends on the square of the electron density n² multiplied by the square root of the temperature T.

Since the Sunyaev-Zel’dovich and X-ray measurements depend on different mathematical combinations of the physical properties involved, the amalgamation of these two techniques allows astronomers to probe the internal details of the cluster quite precisely.
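To make the point concrete, here is a toy inversion with made-up numbers in arbitrary units (the function name and the numbers are mine, not the Planck team’s pipeline; I have taken the SZ signal to scale as nTL and the X-ray emission as n²√T·L, the standard bremsstrahlung scaling, with L the path length through the cluster):

```python
# Toy illustration: recovering the electron density n and temperature T
# of cluster gas from the two observables discussed above.
# Assumed scalings: SZ signal = n*T*L and X-ray emission = n**2 * sqrt(T) * L.

def solve_cluster(sz, xray, L):
    # Eliminate n via n = sz / (T * L):
    #   xray = sz**2 / (T**1.5 * L)  =>  T**1.5 = sz**2 / (xray * L)
    T = (sz**2 / (xray * L)) ** (2.0 / 3.0)
    n = sz / (T * L)
    return n, T

# Forward-model with known n and T, then invert to check:
n_true, T_true, L = 2.0, 5.0, 3.0
sz   = n_true * T_true * L            # 30.0
xray = n_true**2 * T_true**0.5 * L    # about 26.8
n, T = solve_cluster(sz, xray, L)
print(n, T)                           # recovers 2.0, 5.0
```

The point is simply that two observables with different dependences on n and T pin both quantities down, which is what the combined Planck/XMM-Newton analysis exploits.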

The example shown here in the top two panels is of a familiar cluster – the Coma Cluster – as mapped by Planck (in microwaves) and by an older X-ray satellite called ROSAT (in X-rays). The two distributions have very similar morphology, strongly suggesting that they have a common origin in the cluster plasma.

The bottom panels show comparisons with the distribution of galaxies as seen in the optical part of the spectrum. You can see that the hot gas I’ve been talking about extends throughout the space between the galaxies. In fact, there is at least as much matter in the hot plasma as there is in the individual galaxies in objects like this, but it’s too hot to be seen in optical light. This could reasonably be called dark matter when it comes to its lack of optical emission, but it’s certainly not dark in X-rays!

The reason why the intracluster plasma is so hot boils down to the strength of the gravitational field in the cluster. Roughly speaking, the hot matter is in virial equilibrium within the gravitational potential generated by the mass distribution within the cluster. Since this is a very deep potential well, electrons move very quickly in response to it. In fact, the galaxies in the cluster are also roughly in virial equilibrium so they too are pulled about by the gravitational field. Galaxies don’t sit around quietly in clusters, they buzz about like bees in a bottle.

Anyway, the new data arising from the combination of Planck and XMM-Newton has revealed not just one cluster, but a cluster of clusters (i.e. a “supercluster”):

It’s early days for Planck, of course, and this is no more than a taster.
The Planck team is currently analysing the data from the first all-sky survey to identify both known and new galaxy clusters for the early Sunyaev-Zel’dovich catalogue, which will be released in January of 2011 as part of the Early Release Compact Source Catalogue. The full Sunyaev-Zel’dovich catalogue may well turn out to be the most enduring legacy of the Planck mission.



Star-gazer

Posted in Poetry, The Universe and Stuff with tags , , on September 11, 2010 by telescoper

Forty-two years ago (to me if to no one else
The number is of some interest) it was a brilliant starry night
And the westward train was empty and had no corridors
So darting from side to side I could catch the unwonted sight
Of those almost intolerably bright
Holes, punched in the sky, which excited me partly because
Of their Latin names and partly because I had read in the textbooks
How very far off they were, it seemed their light
Had left them (some at least) long years before I was.

And this remembering now I mark that what
Light was leaving some of them at least then,
Forty-two years ago, will never arrive
In time for me to catch it, which light when
It does get here may find that there is not
Anyone left alive
To run from side to side in a late night train
Admiring it and adding noughts in vain.

(written in 1963, by Louis MacNeice)



Astronomy Look-alikes, No. 40

Posted in Astronomy Lookalikes, The Universe and Stuff with tags , , , on September 10, 2010 by telescoper

Obviously someone else has already noticed the remarkable similarity between the structure of the human brain and that revealed by computer simulations of the large-scale structure of the Universe.

Does this mean that dark matter is really just all in the mind?



Astronomy Photographer of the Year

Posted in Art, The Universe and Stuff with tags , , , on September 10, 2010 by telescoper

Amidst the doom and gloom of spending cuts and Ministerial incompetence we’re sometimes liable to forget what it’s all about. Last night provided us with a reminder, in the form of the Astronomy Photographer of the Year competition held at the National Maritime Museum (site of the Royal Observatory at Greenwich). There’s a varied selection of gorgeous entries in today’s Guardian, but this stunning image by Tom Lowe was the overall winner. Congratulations to him!



Spinning Out

Posted in Cricket, The Universe and Stuff with tags , , , , , , , , , , on September 6, 2010 by telescoper

I don’t know why, but last week was my most popular week ever, at least in terms of blog hits! I was going to follow up with a foray into the role of spin in quantum mechanics, but decided instead to settle for a less ambitious project for this evening.

Yesterday I walked past the cricket ground at the SWALEC Stadium in Sophia Gardens, Cardiff, during the Twenty20 international between England and Pakistan. There is another match of this type tomorrow night which I’ll actually be going to, as long as it’s not rained off, but I have too many things to do to go to both games. Anyway, England’s excellent off-spinner Graeme Swann was bowling when I watched through a gap in the stands at the river end of the stadium. He seemed to be getting an impressive amount of turn, and I got to wondering how fast a bowler like “Swannee” actually spins the ball.

For those of you not so familiar with cricket here’s a clip of another prodigious spinner of the ball, Australia’s legend of legspin Shane Warne:

For beginners, the game of cricket is a bit similar to baseball (insofar as it’s a game involving a bat and a ball), but the “strike zone” in cricket is a physical object (a “wicket” made of wooden stumps with bails balanced on top), unlike the baseball equivalent, which exists only in the mind of the umpire. The batsman must prevent the ball hitting the wicket and also try to score runs if he can. In contrast to baseball, however, he doesn’t have to score; he can elect to play a purely defensive shot or even not play any shot at all if he judges the ball is going to miss, which is what happened to the hapless batsman in the clip.

You will see that Warne imparts considerable spin on the ball, which has the effect of making it change direction when it bounces.  The fact that the ball hits the playing surface before the batsman has a chance to play it introduces extra variables that you don’t see in baseball,  such as the state of the pitch (which generally deteriorates over the five days of a Test match, especially in the “rough” where bowlers have been running in). A spin bowler who causes the ball to deviate from right to left is called a legspin bowler, while one who makes it turn the other way is an offspin bowler. An orthodox legspinner generates most of the spin from a flick of the wrist while an offspinner mainly lets his fingers do the torquing.

Another difference that’s worth mentioning with respect to baseball is that the ball is bowled, i.e. the bowler’s arm is not supposed to bend during the delivery (although apparently that doesn’t apply if he’s from Sri Lanka). However, the bowler is allowed to take a run up, which will be quite short for a spin bowler, but long like a javelin thrower if it’s a fast bowler. Fast bowlers – who can bowl up to 95 mph (150 km/h) – don’t spin the ball to any degree but have other tricks up their sleeve I haven’t got time to go into here. A typical spin bowler delivers the ball at speeds ranging from 45 mph to 60 mph (70 km/hour to 100 km/hour).

The physical properties of a cricket ball are specified in the Laws of Cricket. It must be between 22.4 and 22.9 cm in circumference, i.e. 3.57 to 3.64 cm in radius and must weigh between 155.9g and 163g. It’s round, made of cork, and surrounded by a leather case with a stitched seam.

So now, after all that, I can give a back-of-the-envelope answer to the question I was wondering about on the way home. Looking at the video clip my initial impression was that the ball is deflected  by an angle as large as a radian, but in fact the foreshortening effect of the camera is quite deceptive. In fact the ball deviates by less than a metre between pitching and hitting the stumps. There is a gap of about 1 metre between the popping crease (where the batsman stands) and the stumps – it looks much less from the camera angle shown – and the ball probably pitches at least 2 metres in front of the crease. I would guess therefore that it actually deflects by an angle less than twenty degrees or so.

What happens physically is that some of the rotational kinetic energy of the ball is converted into translational kinetic energy associated with a component of the velocity at right angles to the original direction of travel. In order for the deflection to be so large, the available rotational kinetic energy must be non-negligible compared to the original kinetic energy of the ball. Suppose the mass of the ball is M; then the translational kinetic energy is T=\frac{1}{2} Mv^2, where v is the speed of the ball. If the angular velocity of rotation is \omega then the rotational kinetic energy is \Omega =\frac{1}{2} I \omega^2, where I is the moment of inertia of the ball.

Approximating the ball as a uniform sphere of mass M and radius a, the moment of inertia is I=\frac{2}{5}Ma^2.  Putting T=\Omega, cancelling M on both sides and ignoring the factor of \frac{2}{5} – because I’m lazy – we see that the rotational and translational kinetic energies are comparable if

v^2 \simeq a^2\omega^2,

or \omega \simeq \frac{v}{a}, which makes sense because a\omega is just the speed of a point on the equator of the ball owing to the ball’s rotational motion. This equation therefore says that the speed of sideways motion of a point on the ball’s surface must be roughly comparable to the speed of the ball’s forward motion. Taking v=80 km/h gives v\simeq \frac{80 \times 10^3}{60 \times 60} \simeq 20 m/s and a\simeq 0.036 m gives \omega \simeq 600 radians per second, which is about 100 revolutions per second. This would cause a huge deviation (about 45 degrees), but the real effect is rather smaller as I discussed above (see comments below). If the deflection is actually around 15 degrees then the rotation speed needed would be around 30 rev/s.
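The arithmetic in that estimate is easy to check in a few lines (the speed and radius are the figures quoted in the post; the linear scaling of spin rate with deflection angle in the last step is the same rough assumption made in the text):

```python
import math

v = 80e3 / 3600            # ball speed: 80 km/h in m/s (about 22 m/s)
a = 0.036                  # ball radius in metres, from the Laws of Cricket

# Spin rate needed for rotational KE comparable to translational KE:
omega_max = v / a
revs_max = omega_max / (2 * math.pi)
print(round(omega_max), round(revs_max))   # roughly 600 rad/s, 100 rev/s

# Scaling linearly with deflection angle, a ~15 degree deviation
# (rather than ~45) needs about a third of that:
print(round(revs_max * 15 / 45))           # roughly 33 rev/s
```

Measured spin rates for top-class finger- and wrist-spinners are indeed quoted in the tens of revolutions per second, so the order of magnitude comes out sensibly.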

This estimate is obviously very rough because it ignores the direction of spin and the efficiency with which the ball grips the pitch – friction is obviously involved in the change of direction – but it gives a reasonable ballpark (or at least cricket-ground) estimate.

Of course if the bowler does the same thing every time it’s relatively easy for the batsman to allow for the spin. The best  bowlers therefore vary the amount and angle of spin they impart on each ball. Most, in fact,  have at least two qualitatively different types of ball but they disguise the differences in the act of delivery. Offspinners typically have an “arm ball” which doesn’t really spin but holds its line without appearing to be any different to their spinning delivery. Legspinners usually have a variety of alternative balls,  including a topspinner and/or a flipper and/or a googly. The latter is a ball that comes out of the back of the hand and actually spins the opposite way to a legspinner while being produced with apparently the same action. It’s very hard to bowl a googly accurately, but it’s a deadly thing when done right.

Another thing also worth mentioning is that the rotation of the cricket ball also causes a deviation of its flightpath through the air, by virtue of the Magnus effect. This causes the ball to curve in the air in the opposite direction to which it is going to deviate on bouncing, i.e. it would drift into a right-handed batsman before breaking away from him off the pitch. You can see a considerable amount of such movement in the video clip,  away from the left-hander in the air and then back into him off the pitch. Nature clearly likes to make things tough for batsmen!

With a number of secret weapons in his armoury the spin bowler can be a formidable opponent, a fact that has apparently been known to poets, philosophers and astronomers for the best part of a thousand years:

The Ball no Question makes of Ayes and Noes,
But Right or Left, as strikes the Player goes;
And he that toss’d Thee down into the Field,
He knows about it all — He knows — HE knows!

The Rubaiyat of Omar Khayyam [50]



Get thee behind me, Plato

Posted in The Universe and Stuff with tags , , , , , , , , , , on September 4, 2010 by telescoper

The blogosphere, even the tiny little bit of it that I know anything about, has a habit of summoning up strange coincidences between things so, following EM Forster’s maxim “only connect”, I thought I’d spend a lazy Saturday lunchtime trying to draw a couple of them together.

A few days ago I posted what was intended to be a fun little item about the wave-particle duality in quantum mechanics. Basically, what I was trying to say is that there’s no real problem about thinking of an electron as behaving sometimes like a wave and sometimes like a particle because, in reality (whatever that is), it is neither. “Particle” and “wave” are useful abstractions but they are not in an exact one-to-one correspondence with natural phenomena.

Before going on I should point out that the vast majority of physicists are well aware of the distinction between, say, the “theoretical” electron and whatever the “real thing” is. We physicists tend to live in theory space rather than in the real world, so we tend to teach physics by developing the formal mathematical properties of the “electron” (or “electric field”) or whatever, and working out what experimental consequences these entail in certain situations. Generally speaking, the theory works so well in practice that we often talk about the theoretical electron that exists in the realm of mathematics and the electron-in-itself as if they are one and the same thing. As long as this is just a pragmatic shorthand, it’s fine. However, I think we need to be careful to keep this sort of language under control. Pushing theoretical ideas out into the ontological domain is a dangerous game. Physics – especially quantum physics – is best understood as a branch of epistemology. What is known? is safer ground than what is there?

Anyway, my little piece sparked a number of interesting comments on Reddit, including a thread that went along the lines of “of course an electron is neither a particle nor a wave, it’s actually a spin-1/2 projective representation of the Lorentz Group on a Hilbert space”. That description, involving more sophisticated mathematical concepts than those involved in bog-standard quantum mechanics, undoubtedly provides a more complete account of natural phenomena associated with electrons and electric fields, but I’ll stick to my guns and maintain that it still introduces a deep confusion to assert that the electron “is” something mathematical, whether that’s a “spin-1/2 projective representation” or a complex function or anything else. That’s saying something physical is something mathematical. Both entities have some sort of existence, of course, but not the same sort, and the one cannot “be” the other. “Certain aspects of an electron’s behaviour can be described by certain mathematical structures” is as far as I’m prepared to go.

Pushing deeper than quantum mechanics, into the realm of quantum field theory, there was the following contribution:

The electron field is a quantum field as described in quantum field theories. A quantum field covers all space time and in each point the quantum field is in some state, it could be the ground state or it could be an excitation above the ground state. The excitations of the electron field are the so-called electrons. The mathematical object that describes the electron field possesses, amongst others, certain properties that deal with transformations of the space-time coordinates. If, when performing a transformation of the space-time coordinates, the mathematical object changes in such a way that is compatible with the physics of the quantum field, then one says that the mathematical object of the field (also called field) is represented by a spin 1/2 (in the electron case) representation of a certain group of transformations (the Poincaré group, in this example). I understand your quibbling, it seems natural to think that “spin 1/2” is a property of the mathematical tool to describe something, not the something itself. If you press on with that distinction however, you should be utterly puzzled as to why physics should follow, step by step, the path led by mathematics.

For example, one speaks about the “invariance under the local action of the group SU(3)” as a fundamental property of the fields that feel the strong nuclear force. This has two implications: the mathematical object that represents quarks must have 3 “strong” degrees of freedom (the so-called colour) and there must be 3²−1 = 8 carriers of the force (the gluons), because a group of transformations of the SU(N) type has N²−1 generators. And this is precisely what is observed.

So an extremely abstract mathematical principle correctly accounts for the dynamics of an immensely large quantity of phenomena. Why then does physics follow the derivations of mathematics if its true nature is somewhat different?

No doubt this line of reasoning is why so many theoretical physicists seem to adopt a view of the world that regards mathematical theories as being, as it were,  “built into” nature rather than being things we humans invented to describe nature. This is a form of Platonic realism.

I’m no expert on matters philosophical, but I’d say that I find this stance very difficult to understand, although I am prepared to go part of the way. I used to work in a Mathematics department many years ago and one of the questions that came up at coffee time occasionally was “Is mathematics invented or discovered?”. In my experience, pure mathematicians always answered “discovered” while others (especially astronomers) said “invented”. For what it’s worth, I think mathematics is a bit of both. Of course we can invent mathematical objects, endow them with certain attributes and prescribe rules for manipulating them and combining them with other entities. However, once they are invented, anything that is worked out from them is “discovered”. In fact, one could argue that all mathematical theorems etc. arising within such a system are simply tautological expressions of the rules you started with.

Of course physicists use mathematics to construct models that describe natural phenomena. Here the process is different from mathematical discovery, as what we’re trying to do is work out which, if any, of the possible theories actually accounts best for whatever empirical data we have. While it’s true that this programme requires us to accept that there are natural phenomena that can be described in mathematical terms, I do not accept that it requires us to accept that nature “is” mathematical. It requires that there be some sort of law governing some aspects of nature’s behaviour, but not that such laws account for everything.

Of course, mathematical ideas have been extremely successful in helping physicists build new physical descriptions of reality. On the other hand, however, there is a great deal of mathematical formalism that is not useful in this way. Physicists have had to select those mathematical objects that we can use to represent natural phenomena, like selecting words from a dictionary. The fact that we can assemble a sentence using words from the Oxford English Dictionary that conveys some information about something we see doesn’t mean that what we see “is” English. A whole load of grammatically correct sentences can be constructed that don’t make any sense in terms of observable reality, just as there is a great deal of mathematics that is internally self-consistent but makes no contact with physics.

Moreover, to the person whose quote I commented on above, I’d agree that the properties of the SU(3) gauge group have indeed accounted for many phenomena associated with the strong interaction, which is why the standard model of particle physics contains 8 gluons and quarks carrying a three-fold colour charge, as described by quantum chromodynamics. Leaving aside the fact that QCD is a terribly difficult theory to work with – in practice it involves nightmarish lattice calculations on a scale to make even the most diehard enthusiast cringe – what I would ask is whether this description is in any case sufficient for us to assert that it describes “true nature”? Many physicists will no doubt disagree with me, but I don’t think so. It’s a map, not the territory.

So why am I boring you all with this rambling dissertation? Well, it brings me to my other post – about Stephen Hawking’s comments about God. I don’t want to go over that issue again – frankly, I was bored with it before I’d finished writing my own blog post – but it does relate to the bee that I often find in my bonnet about the tendency of many modern theoretical physicists to assign the wrong category of existence to their mathematical ideas. The prime example that springs to my mind is the multiverse. I can tolerate certain versions of the multiverse idea, in fact. What I can’t swallow, however, is the identification of the possible landscape of string theory vacua – essentially a huge set of possible solutions of a complicated set of mathematical equations – with a realised set of “parallel universes”. That particular ontological step just seems absurd to me.

I’m just about done, but one more thing I’d say to finish with concerns the (admittedly overused) metaphor of maps and territories. Maps are undoubtedly useful in helping us find our way around, but we have to remember that there are always things that aren’t on the map at all. If we rely too heavily on one, we might miss something of great interest that the cartographer didn’t think important. Likewise if we fool ourselves into thinking our descriptions of nature are so complete that they “are” all that nature is, then we might miss the road to a better understanding.



Hawking and the Mind of God

Posted in Books, Talks and Reviews, Science Politics, The Universe and Stuff with tags , , , , , on September 2, 2010 by telescoper

I woke up this morning to the news that, according to Stephen Hawking, God did not create the Universe but it was instead an “inevitable consequence of the Laws of Physics”. By sheer coincidence this daft pronouncement has come out at the same time as the publication of Professor Hawking’s new book, an extract of which appears in today’s Times.

It’s interesting that such a fatuous statement managed to become a lead item on the radio news and a headline in all the national newspapers despite being so obviously devoid of any meaning whatsoever. How can the Universe be  “a consequence” of the theories that we invented to describe it? To me that’s just like saying that the Lake District is a consequence of an Ordnance Survey map. And where did the Laws of Physics come from, if not from God?

Stephen Hawking is undoubtedly a very brilliant theoretical physicist. However, something I’ve noticed about theoretical physicists over the years is that if you get them talking on subjects outside physics they are generally likely to say things just as daft as some drunk bloke  down the pub. I’m afraid this is a case in point.

Part of me just wants to laugh this story off, but another part is alarmed at what must appear to many to be an example of an arrogant scientist presuming to pass judgement on subjects that are really none of his business. When scientists complain about the lack of enthusiasm shown by sections of the public towards their subject, perhaps they should take seriously the alienating effect that such statements can have. This kind of thing isn’t what I’d call public engagement. Quite the opposite, in fact.

In case anyone is interested, I am not religious but I do think that there are many things that science does not – and probably will never – explain, such as why there is something rather than nothing. I also believe that science and religious belief are not in principle incompatible – although whether there is a conflict in practice does depend of course on the form of religious belief and how it is observed. God and physics are in my view pretty much orthogonal. To put it another way, if I were religious, there’s nothing in theoretical physics that would make me want to change my mind. However, I’ll leave it to those many physicists who are learned in matters of theology to take up the (metaphorical) cudgels with Professor Hawking.

No doubt this bit of publicity will increase the sales of the new book, so I’ve decided to point out that I have written a book myself on precisely this question, which is available from all good airport bookshops. I’m sure you’ll understand that there isn’t a hint of opportunism in the way I’m drawing this to your attention. If you think this is a cynical attempt to cash in then all I can say is

BUY MY BOOK!

I also noticed that today’s Grauniad is offering a poll on the existence or non-existence of God. I noticed some time ago that there’s a poll facility on WordPress, so this gives me an excuse to try repeating it here. Anything dumb the Guardian can do, I can do dumber. However, owing to funding cuts I’ve decided to do a single poll encompassing several topical news stories at the same time.



Dragons and Unicorns

Posted in Education, The Universe and Stuff with tags , , , , , , , on August 30, 2010 by telescoper

When I was an undergraduate I was often told by lecturers that I should find quantum mechanics very difficult, because it is unlike the classical physics I had learned about up to that point. The difference – or so I was informed – was that classical systems were predictable, but quantum systems were not. For that reason the microscopic world could only be described in terms of probabilities. I was a bit confused by this, because I already knew that many classical systems were predictable in principle, but not really in practice. I blogged about this some time ago, in fact. It was only when I had studied theory for a long time – almost three years – that I realised what was the correct way to be confused about it. In short, quantum probability is a very strange kind of probability that displays many peculiarities and subtleties  that one doesn’t see in the kind of systems we normally think of as “random”, such as coin-tossing or roulette wheels.

To illustrate how curious the quantum universe is we have to look no further than the very basic level of quantum theory, as formulated by the founder of wave mechanics, Erwin Schrödinger. Schrödinger was born in 1887 into an affluent Austrian family made rich by a successful oilcloth business run by his father. He was educated at home by a private tutor before going to the University of Vienna where he obtained his doctorate in 1910. During the First World War he served in the artillery, but was posted to an isolated fort where he found lots of time to read about physics. After the end of hostilities he travelled around Europe and started a series of inspired papers on the subject now known as wave mechanics; his first work on this topic appeared in 1926. He succeeded Planck as Professor of Theoretical Physics in Berlin, but left for Oxford when Hitler took control of Germany in 1933. He left Oxford in 1936 to return to Austria but fled when the Nazis seized the country and he ended up in Dublin, at the Institute for Advanced Studies which was created especially for him by the Irish Taoiseach, Eamon de Valera. He remained there happily for 17 years before returning to his native land at the University of Vienna. Sadly, he became ill shortly after arriving there and died in 1961.

Schrödinger was a friendly and informal man who got on extremely well with colleagues and students alike. He was also a bit scruffy even to the extent that he sometimes had trouble getting into major scientific conferences, such as the Solvay conferences which are exclusively arranged for winners of the Nobel Prize. Physicists have never been noted for their sartorial elegance, but Schrödinger must have been an extreme case.

The theory of wave mechanics arose from work published in 1924 by de Broglie who had suggested that every particle has a wave somehow associated with it, and the overall behaviour of a system resulted from some combination of its particle-like and wave-like properties. What Schrödinger did was to write down an equation, involving a Hamiltonian describing particle motion of the form I have discussed before, but written in such a way as to resemble the equation used to describe wave phenomena throughout physics. The resulting mathematical form for a single particle is

i\hbar\frac{\partial \Psi}{\partial t} = \hat{H}\Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi,

in which \Psi is called the wave-function of the particle. As usual, the Hamiltonian \hat{H} consists of two parts: one describing the kinetic energy (the first term on the right-hand side) and the other the potential energy, represented by V. This equation – the Schrödinger equation – is one of the most important in all physics.
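To get a feel for what the Hamiltonian does, here is a minimal numerical sketch (entirely my own illustration, not part of the original post), in units where \hbar = m = 1: discretising \hat{H} for a particle in a box of unit length (V = 0 inside, infinitely high walls) turns it into a matrix whose lowest eigenvalues should match the exact energies n^2\pi^2/2.

```python
import numpy as np

# Discretise -(1/2) d^2/dx^2 on a grid of N interior points; the second
# derivative becomes the standard tridiagonal stencil (psi vanishes at
# the walls, which is the "infinite box" boundary condition).
N = 500
L = 1.0
dx = L / (N + 1)
diag = np.full(N, 1.0 / dx**2)          # -(1/2) * (-2/dx^2)
off = np.full(N - 1, -0.5 / dx**2)      # -(1/2) * (1/dx^2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]           # three lowest energy levels
exact = np.array([(n * np.pi)**2 / 2 for n in (1, 2, 3)])
print(E)       # numerically close to n^2 * pi^2 / 2
print(exact)
```

The point of the toy example is that a discrete ladder of allowed energies pops out automatically: quantisation is built into the equation, not added by hand.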

At the time Schrödinger was developing his theory of wave mechanics it had a rival, called matrix mechanics, developed by Werner Heisenberg and others. Paul Dirac later proved that wave mechanics and matrix mechanics were mathematically equivalent; these days physicists generally use whichever of these two approaches is most convenient for particular problems.

Schrödinger’s equation is important historically because it brought together lots of bits and pieces of ideas connected with quantum theory into a single coherent descriptive framework. For example, in 1911 Niels Bohr had begun looking at a simple theory for the hydrogen atom which involved a nucleus consisting of a positively charged proton with a negatively charged electron moving around it in a circular orbit. According to standard electromagnetic theory this picture has a flaw in it: the electron is accelerating and consequently should radiate energy. The orbit of the electron should therefore decay rather quickly.

Bohr hypothesized that special states of this system were actually stable; these states were ones in which the orbital angular momentum of the electron was an integer multiple of Planck’s constant divided by 2π (the quantity now written ħ). This simple idea endows the hydrogen atom with a discrete set of energy levels which, as Bohr showed in 1913, were consistent with the appearance of sharp lines in the spectrum of light emitted by hydrogen gas when it is excited by, for example, an electrical discharge. The calculated positions of these lines were in good agreement with measurements made by Rydberg so the Bohr theory was in good shape. But where did the quantised angular momentum come from?
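As a quick back-of-envelope check on Bohr's picture (my own calculation, using the standard value of the Rydberg constant rather than any figure from the post), the quantised levels predict the wavelengths of the Balmer series – transitions down to the n = 2 level – which land right where the visible hydrogen lines are measured:

```python
# Bohr/Rydberg sanity check: 1/lambda = R * (1/2^2 - 1/n^2) for the
# Balmer series. R is the standard Rydberg constant (infinite nuclear
# mass), in inverse metres.
R = 1.0973731568e7

def balmer_wavelength_nm(n):
    """Wavelength in nanometres of the hydrogen n -> 2 transition."""
    inv_lam = R * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_lam

for n in (3, 4, 5):
    print(n, round(balmer_wavelength_nm(n), 1))   # ~656.1, 486.0, 433.9 nm
```

The first of these, at about 656 nm, is the famous red H-alpha line.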

The Schrödinger equation describes some form of wave; its solutions \Psi(\vec{x},t) are generally oscillating functions of position and time. If we want it to describe a stable state then we need a solution whose probability density does not vary with time, so we proceed by separating out the time dependence, writing \Psi = \psi \exp(-iEt/\hbar); this turns the Schrödinger equation into its time-independent form, \hat{H}\psi = E\psi. The hydrogen atom is a bit like a solar system with only one planet going around a star, so the potential is spherically symmetric, which simplifies things a lot. The solutions we get are waves, and the mathematical task is to find waves that fit along a circular orbit just like standing waves on a circular string. Immediately we see why the solution must be quantized. To exist on a circle the wave can’t just have any wavelength; it has to fit into the circumference of the circle in such a way that it winds up at the same value after a round trip. In Schrödinger’s theory the quantisation of orbits is not just an ad hoc assumption: it emerges naturally from the wave-like nature of the solutions to his equation.
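The circle-fitting argument can be checked in a couple of lines (again a back-of-envelope sketch of my own, using the standard value of Planck's constant): a wave that fits n times around a circle of radius r has wavelength 2\pi r/n, and combining that with de Broglie's relation p = h/\lambda gives an angular momentum pr of exactly n\hbar, whatever the radius – which is just Bohr's quantisation rule.

```python
import math

# Fit condition: n wavelengths around the circumference, lam = 2*pi*r/n.
# De Broglie: p = h / lam. Then p*r = n * h / (2*pi) = n * hbar,
# independent of r -- Bohr's rule drops out of the wave picture.
h = 6.62607015e-34            # Planck's constant, J s
hbar = h / (2 * math.pi)

def angular_momentum(n, r=5.29e-11):   # r: Bohr radius, as an example scale
    lam = 2 * math.pi * r / n          # wavelength fitting n times round
    p = h / lam                        # de Broglie momentum
    return p * r

for n in (1, 2, 3):
    print(angular_momentum(n) / hbar)  # n, up to floating-point rounding
```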

The Schrödinger equation can be applied successfully to systems which are much more complicated than the hydrogen atom, such as complex atoms with many electrons orbiting the nucleus and interacting with each other. Indeed, it is the basis of most work in theoretical chemistry. But it also poses very deep conceptual challenges, chiefly about how the notion of a “particle” relates to the “wave” that somehow accompanies it.

To illustrate the riddle, consider a very simple experiment where particles of some type (say electrons, but it doesn’t really matter; similar experiments can be done with photons or other particles) emerge from a source on the left, pass through two slits in a screen in the middle and are detected at a detector on the right.

In a purely “particle” description we would think of the electrons as little billiard balls being fired from the source. Each one then travels along a well-defined path, somehow interacts with the screen and ends up in some position on the detector. On the other hand, in a “wave” description we would imagine a wave front emerging from the source, being diffracted by the screen and ending up as some kind of interference pattern at the detector. This is what we see with light, for example, in the phenomenon known as Young’s fringes.

In quantum theory we have to think of the system as being in some sense both a wave and a particle. This is forced on us by the fact that we actually observe a pattern of “fringes” at the detector, indicating wave-like interference, but we can also detect the arrival of individual electrons as little dots. Somehow the propensity of electrons to arrive at particular positions on the screen is controlled by an element of waviness, but they manage to retain some aspect of their particleness. Moreover, one can turn the source intensity down to a level where there is only ever one electron in the experiment at any time. One sees the dots arrive one by one at the detector, but adding them up over a long time still yields a pattern of fringes.
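The dot-by-dot build-up is easy to mimic on a computer. The sketch below is purely my own illustration with made-up geometry (it is not a real electron simulation): single "arrival" positions are drawn by rejection sampling from a cos² fringe pattern, and while each dot on its own looks random, a histogram of many thousands reproduces the fringes.

```python
import numpy as np

rng = np.random.default_rng(0)

def fringe_density(x, wavelength=1.0, slit_sep=5.0, screen_dist=100.0):
    """Relative two-slit arrival probability at screen position x
    (narrow slits, far-field approximation; all numbers made up)."""
    phase = np.pi * slit_sep * x / (wavelength * screen_dist)
    return np.cos(phase) ** 2

def sample_arrivals(n, xmax=50.0):
    """Draw n single-electron arrival positions by rejection sampling."""
    out = []
    while len(out) < n:
        x = rng.uniform(-xmax, xmax)
        if rng.uniform(0.0, 1.0) < fringe_density(x):
            out.append(x)            # one more "dot" on the detector
    return np.array(out)

dots = sample_arrivals(20000)
counts, edges = np.histogram(dots, bins=40, range=(-50, 50))
# counts now shows bright bands (near x = 0, +/-20, ...) separated by
# nearly empty dark bands (near x = +/-10, +/-30, ...): fringes.
```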

Curiouser and curiouser, said Alice.

Eventually the community of physicists settled on a party line that most still stick to: the wave-function controls the probability of finding an electron at some position when a measurement is made. In fact the mathematical description of wave phenomena favoured by physicists involves complex numbers, so at each point in space and time \Psi is a complex number of the form \Psi= a+ib, where i =\sqrt{-1}; the corresponding probability is given by |\Psi|^2=a^2+b^2. This protocol, however, forbids one to say anything about the state of the particle before it is measured. It is delocalized, not being definitely located anywhere, but only possessing a probability to be at any particular place within the apparatus. One can’t even say which of the two slits it passes through. Somehow, it manages to pass through both slits. Or at least some of its wave-function does.
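In code, the rule is: add the complex amplitudes for the two routes first, then square. The cross term that appears when you square the sum is precisely the interference. A tiny illustration with made-up, unnormalised amplitudes (my own, not taken from anywhere in the post):

```python
# Each slit contributes a complex amplitude at a given point on the detector.
psi1 = 0.5 + 0.5j    # via slit 1 (illustrative value)
psi2 = 0.5 + 0.5j    # via slit 2, here in phase with slit 1

# Quantum rule: sum the amplitudes, then square the modulus.
prob_interfering = abs(psi1 + psi2) ** 2          # ~2: constructive interference

# "Classical" rule: square each, then sum -- no cross term, no fringes.
prob_classical = abs(psi1) ** 2 + abs(psi2) ** 2  # ~1

print(prob_interfering, prob_classical)
```

With psi2 = -psi1 the quantum answer would instead be 0 (a dark fringe) while the classical sum would still be 1, which is exactly the difference the fringe pattern reveals.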

I’m not going to go into the various philosophical arguments about the interpretation of quantum probabilities here, but I will pass on an analogy that helped me come to grips with the idea that an electron can behave in some respects like a wave and in others like a particle. At first thought this seems a troubling paradox, but it only appears so if you insist that our theoretical ideas are literal representations of what happens in reality. I think it’s much more sensible to treat the mathematics as a kind of map or sketch that is useful for finding our way around nature, rather than confusing it with nature itself. Neither particles nor waves really exist in the quantum world – they’re just abstractions we use to try to describe as much as we can of what is going on. The fact that it doesn’t work perfectly shouldn’t surprise us, as there are undoubtedly more things in Heaven and Earth than are dreamt of in our philosophy.

Imagine a mediaeval traveller, the first from your town to go to Africa. On his journeys he sees a rhinoceros, a bizarre creature that is unlike anything he’s ever seen before. Later on, when he gets back, he tries to describe the animal to those at home who haven’t seen it.  He thinks very hard. Well, he says, it’s got a long horn on its head, like a unicorn, and it’s got thick leathery skin, like a dragon. Neither dragons nor unicorns exist in nature, but they’re abstractions that are quite useful in conveying something about what a rhinoceros is like.

It’s the same with electrons. Except they don’t have horns and leathery skin. Obviously.



The Sketch Process

Posted in Art, Education, The Universe and Stuff with tags , , , , , , , , , on August 25, 2010 by telescoper

It’s pouring with rain so, rather than set off home and get drenched, I thought I’d spend a few minutes on the blog and hope that the deluge dies down before I leave. Knowing my luck it will probably get worse.

Anyway, I thought I’d put together a short item on the theme of sketching. This is quite a strange subject for me to pick, because drawing is something I’m completely useless at, but bear with me and hopefully it will make some sense in the end.

What spurred me on to think about it was the exhibit I’ve been involved with for the forthcoming Architecture Biennale in Venice, as part of a project called Beyond Entropy organized by the Architectural Association School of Architecture. Unfortunately, although I’d originally planned to attend, I can’t be there for the opening Symposium, but I hope it turns out to be as successful an event as it promises to be!

Anyway, in the course of this project I came across this image of the Moon as drawn by Galileo

This led to an interesting discussion about the role of drawings like this in science. Of course the use of sketches for the scientific representation of images has been superseded by photographic techniques, initially using film and more recently digital methods. The advantage of these methods is that they are quicker and also more “objective”. However, there are still many amateur astronomers who make drawings of the Moon as well as of objects such as Jupiter and Saturn (which Galileo also drew). Moreover, there are other fields in which experienced practitioners continue to use pencil drawings in preference to photographic techniques; archaeology provides many good examples.

The reason sketching still has a role in such fields is not that it can compete with photography for accuracy or objectivity but that there’s something about the process of sketching that engages the sketcher’s brain in a  way that’s very different from taking a photograph. The connection between eye, brain and hand seems to involve a cognitive element that is extremely useful in interpreting notes at a later date. In fact it’s probably their very subjectivity that makes them useful.  A thicker stroke of the pencil, or deliberately enhanced shading, or leaving out seemingly irrelevant detail, can help pick out  features that seem to the observer to be of particular significance. Months later when you’re trying to write up what you saw from your notes, those deliberate interventions against objectivity will take you back to what you  saw with your mind, not just with your eyes.

It doesn’t even matter whether or not you can draw well. The point isn’t so much to explain to other people what you’ve seen, but to record your own interaction with the object you’ve sketched in a way that allows you to preserve something more than a surface recollection.

You might think this is an unscientific thing to do, but I don’t think it is. The scientific process involves an interplay between objective reality and theoretical interpretation, and drawing can be a useful part of this discourse. It’s as if the pencil allows the observer to interact with what is observed, forming a closer bond and probably a deeper understanding of its patterns and textures. I’m not saying it replaces a purely passive recording method like photography, but it can definitely complement it.

I have not a shred of psychological evidence to back this up, but I’d also assert that sketching is very good for the learning process too. Nowadays we tend to give out handouts of the diagrams involved in physics, whether they relate to the design of apparatus or the geometrical configuration of a physical system. There’s a reason for doing this – they take a long time to draw and there’s a likelihood students will make mistakes copying them down. However, I’ve always found that the only way to really take in what a diagram is saying is to try to draw it again myself. Even if the level of draftsmanship is worse, the level of understanding is undoubtedly better. Merely looking at someone else’s representation of something won’t give your brain as good a feeling for what it is trying to say as you would get if you tried to draw it yourself.

Perhaps what happens is that simply looking at a diagram only involves the connection between eye and brain. Drawing a copy also requires the connection between brain and hand. Maybe this additional connection brings in additional levels of brain functionality. Sketching involves your brain in an interaction that is different from merely looking.

The problem with excessive use of handouts – and this applies not only to figures but also to lecture notes – is that they turn teaching into a very passive process. Taking notes in your own hand, and supplementing them with your own sketches – however scribbly and incomprehensible they may appear to other people – is a much more active way to learn than collecting a stack of printed notes and meticulously accurate diagrams. And if it was good enough for Galileo, it should be good enough for most of us!

Now it’s stopped raining so I’m off home!

