Archive for Physics

(Guest Post) STFC – It isn’t just about money

Posted in Science Politics on October 4, 2010 by telescoper

The following piece was written by Professor George Efstathiou, FRS, who is Professor of Astrophysics at the University of Cambridge and Director of the Kavli Institute for Cosmology. The views expressed therein are George’s own, but I’m not saying that out of a desire to distance myself from his opinions. As a matter of fact, I was one of the people who signed the petition he describes in the article…

–o–

As Peter has reported on this site, physicists around the country are anxiously awaiting the results of the forthcoming Comprehensive Spending Review. Scientists whose research is supported by the Science and Technology Facilities Council (STFC) are particularly anxious. Since its creation, STFC has gone through two difficult scientific prioritisation exercises. Many excellent projects have been cancelled and grants supporting University groups have been cut savagely, by about 35%. STFC science has already descended into the Royal Society’s ‘game over’ scenario. All of this has happened before the consequences of the economic crisis have hit the science budget. STFC has left itself uniquely poorly placed amongst the Research Councils to absorb further reductions following the CSR.

It is for this reason that I and a few others organised a petition expressing a loss of confidence in the Chief Executive of STFC. The petition was signed by 916 researchers, including 162 Professors and 18 Fellows of the Royal Society. It was formally submitted to the STFC Chair (Michael Sterling) on 1st July together with an explicit request that STFC Council should review its role in this loss of confidence.

People will have had many different reasons for signing the petition. I made my views public well in advance (see my Letter to Lord Drayson). In all of my letters to ministers and others concerning the STFC ‘crisis’, I have never asked for more money. More money would help, of course, but this is utterly unrealistic in the current economic circumstances. No, over the last three years I have been lobbying for good governance. The structural difficulties with STFC were easy to identify, and I believe that with good governance the STFC programme could have been managed without such a catastrophic loss of science. Over three years, STFC has failed to establish a compelling narrative, a strategy, or constructive engagement with its science community. When one bears in mind that about 40% of Physics staff work in areas for which STFC is the primary funding source, the consequences of the STFC crisis for University Departments, and the rest of the science base, are indeed serious.

So, whatever the outcome of the CSR, there are governance issues that we should be concerned about. There are three that I would like to raise here:

1. Fellowships and grants. Senior scientists from outside the UK point to the Fellowships and Rolling Grants as two of the most effective features of the UK funding system. Both are now under threat. I was responsible for making the case for the current 5 year system to PPARC Council. In addition to the evident benefits of continuity and reduction in peer review, Council need to understand that recruitment for postdocs involves a substantial lead time. If we are to compete for the best postdocs around the world (and not lose our best postdocs), grant funds must be committed four years in advance. The 5 year rolling grant system, even with tapers, allows groups to advertise posts on an international timetable and to vire funds to maximise science output. Any move to responsive mode 3 year grants is guaranteed to deliver less science for a fixed amount of money. I would vigorously defend the Fellowships. Fellowships encourage scientific independence and provide a valuable “bottom-up” correction to the increasingly narrow “top-driven” science programme of STFC. Attacks on Fellowships and Rolling Grants will inevitably lead to a more introspective and less internationally competitive science programme.

2. The Composition of STFC Council. STFC Council, with a minority of leading research scientists, differs from other Research Councils. I have had several vigorous discussions with Michael Sterling concerning this issue and, in particular, the recent decision by BIS to appoint three new non-academic members to STFC. This led me to write a long letter to Adrian Smith (Director General of the Research Councils) reproduced here. Professor Smith replied that he approved of the present balance of Council and thought that it was compatible with the recommendations of previous reviews. I will leave readers to decide whether they agree. This is not a minor point. My experience on PPARC Council was that ‘lay members’ can often provide interesting perspectives on problems, but if they lack understanding of the science (sometimes alarmingly so) they will tend to accept the recommendations of the Executive. STFC needs a scientifically strong Council. Competent management is not enough. It is easy to keep within budget – you can be tough about cutting things. It is much harder to maximise the amount of science that you can do on a fixed budget. For that you need a scientific strategy and scientific judgement.

3. The New CEO. The search has begun for a new Chief Executive. There is one school of thought that a suitable candidate may be found from the corporate sector. Someone who may not understand the science, but would be a capable manager and communicator. I think that this would be a disaster. In my view, it is essential that a new CEO have an understanding of the science programme at STFC and should be prepared to act as an enthusiastic advocate for STFC science. We need a CEO who can engage constructively with the academic community and, when times are tough, articulate a strategy to limit the loss of science rather than gloat at our misfortune.

It would be great to have more money for STFC science. But money isn’t everything – we need to pay attention to governance issues as well. If we had been braver back in 2008 and openly challenged the Executive, we might not be in such a weak position now. We should not be so reticent in the future.



Spin, Entanglement and Quantum Weirdness

Posted in The Universe and Stuff on October 3, 2010 by telescoper

After writing a post about spinning cricket balls a while ago I thought it might be fun to post something about the role of spin in quantum mechanics.

Spin is a concept of fundamental importance in quantum mechanics, not least because it underlies our most basic theoretical understanding of matter. The standard model of particle physics divides elementary particles into two types, fermions and bosons, according to their spin.  One is tempted to think of  these elementary particles as little cricket balls that can be rotating clockwise or anti-clockwise as they approach an elementary batsman. But, as I hope to explain, quantum spin is not really like classical spin: batting would be even more difficult if quantum bowlers were allowed!

Take the electron, for example. The spin of an electron is quantized: a measurement of its component along any axis always yields ±1/2 in units of the reduced Planck constant (all fermions have half-integer spin). In addition, according to quantum mechanics, the orientation of the spin is indeterminate until it is measured. Any particular measurement can only determine the component of spin in one direction. Let’s take as an example the case where the measuring device is sensitive to the z-component, i.e. spin in the vertical direction. An experiment on a single electron will give a definite outcome, which might be either “up” or “down” relative to this axis.

However, until one makes a measurement the state of the system is not specified and the outcome is consequently not predictable with certainty; there is a 50% probability for each possible outcome. We could write the state of the system (expressed by the spin part of its wavefunction ψ) prior to measurement in the form

|ψ> = (|↑> + |↓>)/√2

This gives me an excuse to use the rather beautiful “bra-ket” notation for the state of a quantum system, originally due to Paul Dirac. The two possibilities are “up” (↑) and “down” (↓) and they are contained within a “ket” (written |>), which is really just a shorthand for a wavefunction describing that particular aspect of the system. A “bra” would be of the form <|; for the mathematicians, this represents the Hermitian conjugate of a ket. The √2 is there to ensure that the total probability of the spin being either up or down is 1, remembering that each probability is the squared modulus of the corresponding amplitude. When we make a measurement we will get one of these two outcomes, with a 50% probability of each.
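As a quick sanity check on the arithmetic, here is a minimal sketch in Python (my own illustration, not part of the original post) of the Born rule applied to this state: represent |↑> and |↓> as two-component vectors of amplitudes, build the superposition, and square the moduli of the amplitudes to get the probabilities.

```python
import math

# Basis states |up> and |down> as two-component vectors of amplitudes
up = [1.0, 0.0]
down = [0.0, 1.0]

# The superposition |psi> = (|up> + |down>)/sqrt(2)
psi = [(u + d) / math.sqrt(2) for u, d in zip(up, down)]

# Born rule: each probability is the squared modulus of an amplitude
p_up, p_down = (abs(a) ** 2 for a in psi)
print(p_up, p_down)    # 0.5 and 0.5, up to rounding
print(p_up + p_down)   # ~1.0: the sqrt(2) normalises the state
```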

At the point of measurement the state changes: if we get “up” it becomes purely |↑>  and if the result is  “down” it becomes |↓>. Either way, the quantum state of the system has changed from a “superposition” state described by the equation above to an “eigenstate” which must be either up or down. This means that all subsequent measurements of the spin in this direction will give the same result: the wave-function has “collapsed” into one particular state. Incidentally, the general term for a two-state quantum system like this is a qubit, and it is the basis of the tentative steps that have been taken towards the construction of a quantum computer.
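The claim that all subsequent measurements repeat the first result is easy to illustrate with a toy simulation. This is a sketch of my own, not anything from the original post: `measure_z` applies the Born rule in the z-basis and returns the collapsed state, which is then measured again many times.

```python
import math
import random

random.seed(0)

def measure_z(state):
    """Born-rule measurement in the z-basis: pick an outcome at random
    with probability |amplitude|^2, then collapse onto that eigenstate."""
    p_up = abs(state[0]) ** 2
    if random.random() < p_up:
        return "up", [1.0, 0.0]
    return "down", [0.0, 1.0]

psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # the superposition state above
first, psi = measure_z(psi)                  # 50-50 on the first measurement
repeats = {measure_z(psi)[0] for _ in range(1000)}
print(first, repeats)  # every subsequent measurement repeats the first outcome
```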

Notice that what is essential about this is the role of measurement. The collapse of  ψ seems to be an irreversible process, but the wavefunction itself evolves according to the Schrödinger equation, which describes reversible, Hamiltonian changes.  To understand what happens when the state of the wavefunction changes we need an extra level of interpretation beyond what the mathematics of quantum theory itself provides,  because we are generally unable to write down a wave-function that sensibly describes the system plus the measuring apparatus in a single form.

So far this all seems rather similar to the state of a fair coin: it has a 50-50 chance of being heads or tails, but the doubt is resolved when its state is actually observed. Thereafter we know for sure what it is. But this resemblance is only superficial. A coin only has heads or tails, but the spin of an electron doesn’t have to be just up or down. We could rotate our measuring apparatus by 90° and measure the spin to the left (←) or the right (→). In this case we still have to get a result of ±1/2 (in units of the reduced Planck constant). It will have a 50-50 chance of being left or right, which “becomes” one or the other when a measurement is made.

Now comes the real fun. Suppose we do a series of measurements on the same electron. First we start with an electron whose spin we know nothing about. In other words it is in a superposition state like that shown above. We then make a measurement in the vertical direction. Suppose we get the answer “up”. The electron is now in the eigenstate with spin “up”.

We then pass it through another measurement, but this time it measures the spin to the left or the right. The process of selecting the electron to be one with  spin in the “up” direction tells us nothing about whether the horizontal component of its spin is to the left or to the right. Theory thus predicts a 50-50 outcome of this measurement, as is observed experimentally.

Suppose we do such an experiment and establish that the electron’s spin vector is pointing to the left. Now our long-suffering electron passes into a third measurement which this time is again in the vertical direction. You might imagine that since we have already measured this component to be in the up direction, it would be in that direction again this time. In fact, this is not the case. The intervening measurement seems to “reset” the up-down component of the spin; the results of the third measurement are back at square one, with a 50-50 chance of getting up or down.
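The up, then left/right, then up-again sequence can be simulated directly. The sketch below is my own illustration (not from the post), using the standard convention |left/right> = (|↑> ± |↓>)/√2; it shows that the intervening horizontal measurement really does reset the vertical component to a 50-50 chance.

```python
import math
import random

random.seed(1)
SQRT2 = math.sqrt(2)

UP, DOWN = [1.0, 0.0], [0.0, 1.0]
LEFT = [1 / SQRT2, 1 / SQRT2]     # convention: |left>  = (|up> + |down>)/sqrt(2)
RIGHT = [1 / SQRT2, -1 / SQRT2]   # convention: |right> = (|up> - |down>)/sqrt(2)

def measure(state, basis):
    """Project `state` onto each basis vector, choose an outcome with
    Born-rule probabilities, and collapse onto that eigenstate."""
    probs = [sum(b * s for b, s in zip(vec, state)) ** 2 for vec in basis]
    outcome = 0 if random.random() < probs[0] else 1
    return outcome, basis[outcome]

trials = 100_000
third_up = 0
for _ in range(trials):
    _, state = measure(UP, [LEFT, RIGHT])   # intervening left/right measurement
    out, _ = measure(state, [UP, DOWN])     # third measurement, vertical again
    if out == 0:
        third_up += 1

print(third_up / trials)  # ~0.5: back to square one, despite starting from "up"
```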

This is just one example of the kind of irreducible “randomness” that seems to be inherent in quantum theory. However, if you think this is what people mean when they say quantum mechanics is weird, you’re quite mistaken. It gets much weirder than this! So far I have focussed on what happens to the description of single particles when quantum measurements are made. Although there seem to be subtle things going on, it is not really obvious that anything happening is very different from systems in which we simply lack the microscopic information needed to make a prediction with absolute certainty.

At the simplest level, the difference is that quantum mechanics gives us a theory for the wave-function which somehow lies at a more fundamental level of description than the usual way we think of probabilities. Probabilities can be derived mathematically from the wave-function, but there is more information in ψ than there is in |ψ|²; the wave-function is a complex entity whereas the square of its amplitude is entirely real. If one can construct a system of two particles, for example, the resulting wave-function is obtained by superimposing the wave-functions of the individual particles, and probabilities are then obtained by squaring this joint wave-function. This will not, in general, give the same probability distribution as one would get by adding the one-particle probabilities because, for complex entities A and B,

|A|² + |B|² ≠ |A + B|²

in general. To put this another way, one can write any complex number in the form a+ib (real part plus imaginary part) or, generally more usefully in physics, as Re^(iθ), where R is the amplitude and θ is called the phase. The square of the amplitude gives the probability associated with the wavefunction of a single particle, but in this case the phase information disappears; the truly unique character of quantum physics and how it impacts on probabilities of measurements only reveals itself when the phase information is retained. This generally requires two or more particles to be involved, as the absolute phase of a single-particle state is essentially impossible to measure.
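A short numerical check (my own sketch, not part of the original post) makes the point concrete: the sum of the separate squared moduli never notices the relative phase θ between two amplitudes of equal magnitude, while the squared modulus of their sum depends on it entirely.

```python
import cmath
import math

A = cmath.rect(1 / math.sqrt(2), 0.0)        # amplitude 1/sqrt(2), phase 0
for theta in (0.0, math.pi / 2, math.pi):
    B = cmath.rect(1 / math.sqrt(2), theta)  # same magnitude, phase theta
    separate = abs(A) ** 2 + abs(B) ** 2     # always 1: phase discarded
    combined = abs(A + B) ** 2               # depends on the relative phase
    print(f"theta={theta:.2f}  separate={separate:.2f}  combined={combined:.2f}")
# theta=0 gives constructive interference (2), theta=pi destructive (0)
```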

Finding situations where the quantum phase of a wave-function is important is not easy. It seems to be quite easy to disturb quantum systems in such a way that the phase information becomes scrambled, so testing the fundamental aspects of quantum theory requires considerable experimental ingenuity. But it has been done, and the results are astonishing.

Let us think about a very simple example of a two-component system: a pair of electrons. All we care about for the purpose of this experiment is the spin of the electrons so let us write the state of this system in terms of states such as |↑↓>, which I take to mean that the first particle has spin up and the second one has spin down. Suppose we can create this pair of electrons in a state where we know the total spin is zero. The electrons are indistinguishable from each other so until we make a measurement we don’t know which one is spinning up and which one is spinning down. The state of the two-particle system might be this:

|ψ> = (|↑↓> – |↓↑>)/√2

Squaring this up would give a 50% probability of “particle one” being up and “particle two” being down, and 50% for the contrary arrangement. This doesn’t look too different from the example I discussed above, but this duplex state exhibits a bizarre phenomenon known as quantum entanglement.
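To spell out the squaring step, here is a small sketch of my own (not from the original post) that tabulates the four joint amplitudes of this state in the z-basis and squares them:

```python
import math

# Joint amplitudes of |psi> = (|up,down> - |down,up>)/sqrt(2) in the z-basis
amps = {
    ("up", "up"): 0.0,
    ("up", "down"): 1 / math.sqrt(2),
    ("down", "up"): -1 / math.sqrt(2),
    ("down", "down"): 0.0,
}
probs = {pair: a ** 2 for pair, a in amps.items()}
print(probs)  # 50% up/down, 50% down/up: the spins always come out opposite
```

Note that the minus sign on the second amplitude makes no difference to these probabilities; it only matters when phases are allowed to interfere, which is exactly the point made above.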

Suppose we start the system out in this state and then separate the two electrons without disturbing their spin states. Before making a measurement we really can’t say what the spins of the individual particles are: each is neither up nor down but in a superposition of the two possibilities. When they’re up, they’re up. When they’re down, they’re down. But when they’re only half-way up they’re in an entangled state.

If one of them passes through a vertical spin-measuring device we will then know that particle is definitely spin-up or definitely spin-down. Since we know the total spin of the pair is zero, then we can immediately deduce that the other one must be spinning in the opposite direction because we’re not allowed to violate the law of conservation of angular momentum: if Particle 1 turns out to be spin-up, Particle 2  must be spin-down, and vice versa. It is known experimentally that passing two electrons through identical spin-measuring gadgets gives  results consistent with this reasoning. So far there’s nothing so very strange in this.

The problem with entanglement lies in understanding what happens in reality when a measurement is done. Suppose we have two observers, Dick and Harry, each equipped with a device that can measure the spin of an electron in any direction they choose. Particle 1 emerges from the source and travels towards Dick whereas particle 2 travels in Harry’s direction. Before any measurement, the system is in an entangled superposition state. Suppose Dick decides to measure the spin of electron 1 in the z-direction and finds it spinning up. Immediately, the wave-function for electron 2 collapses into the down direction. If Dick had instead decided to measure spin in the left-right direction and found it “left”, a similar collapse would have occurred for particle 2, but this time putting it in the “right” direction.

Whatever Dick does, the result of any corresponding measurement made by Harry has a definite outcome – the opposite to Dick’s result. So Dick’s decision whether to make a measurement up-down or left-right instantaneously transmits itself to Harry who will find a consistent answer, if he makes the same measurement as Dick.

If, on the other hand, Dick makes an up-down measurement but Harry measures left-right then Dick’s answer has no effect on Harry, who has a 50% chance of getting “left” and 50% chance of getting right. The point is that whatever Dick decides to do, it has an immediate effect on the wave-function at Harry’s position; the collapse of the wave-function induced by Dick immediately collapses the state measured by Harry. How can particle 1 and particle 2 communicate in this way?

This riddle is the core of a thought experiment by Einstein, Podolsky and Rosen in 1935 which has deep implications for the nature of the information that is supplied by quantum mechanics. The essence of the EPR paradox is that each of the two particles – even if they are separated by huge distances – seems to know exactly what the other one is doing. Einstein called this “spooky action at a distance” and went on to point out that this type of thing simply could not happen in the usual calculus of random variables. His argument was later tightened considerably by John Bell in a form now known as Bell’s theorem.

To see how Bell’s theorem works, consider the following roughly analogous situation. Suppose we have two suspects in prison, say Dick and Harry (Tom grassed them up and has been granted immunity from prosecution). The two are taken apart to separate cells for individual questioning. We can allow them to use notes, electronic organizers, tablets of stone or anything to help them remember any agreed strategy they have concocted, but they are not allowed to communicate with each other once the interrogation has started. Each question they are asked has only two possible answers – “yes” or “no” – and there are only three possible questions. We can assume the questions are asked independently and in a random order to the two suspects.

When the questioning is over, the interrogators find that whenever they asked the same question, Dick and Harry always gave the same answer, but when the question was different they only gave the same answer 25% of the time. What can the interrogators conclude?

The answer is that Dick and Harry must be cheating. Either they have seen the question list ahead of time or are able to communicate with each other without the interrogators’ knowledge. If they always give the same answer when asked the same question, they must have agreed on answers to all three questions in advance. But when they are asked different questions then, because each question has only two possible responses, by following this strategy it must turn out that at least two of the three prepared answers – and possibly all of them – must be the same for both Dick and Harry. This puts a lower limit on the probability of them giving the same answer to different questions. I’ll leave it as an exercise to the reader to show that the probability of coincident answers to different questions in this case must be at least 1/3.
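The 1/3 bound from the exercise can be checked by brute force. The sketch below (my own, not from the post) enumerates all eight possible pre-agreed answer sheets and, for each one, counts how often the two prisoners agree when asked different questions:

```python
from itertools import product

best = 1.0
for answers in product(("yes", "no"), repeat=3):
    # Both prisoners follow the same pre-agreed answer sheet, so they
    # automatically agree whenever the questions are identical.
    same = sum(answers[i] == answers[j]
               for i in range(3) for j in range(3) if i != j)
    best = min(best, same / 6)   # 6 ordered pairs of distinct questions
print(best)  # 1/3: no deterministic strategy does better without cheating
```

The pigeonhole argument in the text is visible in the enumeration: with three binary answers, at least one pair must match, so at least 2 of the 6 ordered pairs of distinct questions agree.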

This is a simple illustration of what in quantum mechanics is known as a Bell inequality. Dick and Harry can only keep the number of such false agreements down to the measured level of 25% by cheating.

This example is directly analogous to the behaviour of the entangled quantum state described above under repeated interrogations about its spin in three different directions. The result of each measurement can only be either “yes” or “no”. Each individual answer (for each particle) is equally probable in this case; the same question always produces the same answer for both particles, but the probability of agreement for two different questions is indeed ¼ and not larger as would be expected if the answers were random. For example one could ask particle 1 “are you spinning up” and particle 2 “are you spinning to the right”? The probability of both producing an answer “yes” is 25% according to quantum theory but would be higher if the particles weren’t cheating in some way.
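For concreteness, the quantum figure of 25% comes out of the standard singlet correlation. My assumption here (it is not spelled out in the post): the three “questions” are spin measurements along axes 120° apart, with answers defined so that identical questions always produce the same answer for both particles; the agreement probability for axes at angle θ is then cos²(θ/2).

```python
import math

# Assumption (not stated explicitly in the post): the three questions are
# spin measurements along axes 120 degrees apart, with answers defined so
# that the same question always yields the same answer for both particles.
theta = math.radians(120)
p_agree = math.cos(theta / 2) ** 2
print(p_agree)  # 0.25 up to rounding, below the classical bound of 1/3
```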

Probably the most famous experiment of this type was done in the 1980s, by Alain Aspect and collaborators, involving entangled pairs of polarized photons (which are bosons), rather than electrons, primarily because these are easier to prepare.

The implications of quantum entanglement greatly troubled Einstein long before the EPR paradox. Indeed the interpretation of single-particle quantum measurement (which has no entanglement) was already troublesome. Just exactly how does the wave-function relate to the particle? What can one really say about the state of the particle before a measurement is made? What really happens when a wave-function collapses? These questions take us into philosophical territory that I have set foot in already; the difficult relationship between epistemological and ontological uses of probability theory.

Thanks largely to the influence of Niels Bohr, in the relatively early stages of quantum theory a standard approach to this question was adopted. In what became known as the  Copenhagen interpretation of quantum mechanics, the collapse of the wave-function as a result of measurement represents a real change in the physical state of the system. Before the measurement, an electron really is neither spinning up nor spinning down but in a kind of quantum purgatory. After a measurement it is released from limbo and becomes definitely something. What collapses the wave-function is something unspecified to do with the interaction of the particle with the measuring apparatus or, in some extreme versions of this doctrine, the intervention of human consciousness.

I find it amazing that such a view could have been held so seriously by so many highly intelligent people. Schrödinger hated this concept so much that he invented a thought-experiment of his own to poke fun at it. This is the famous “Schrödinger’s cat” paradox. I’ve sent Columbo out of the room while I describe this.

In a closed box there is a cat. Attached to the box is a device which releases poison into the box when triggered by a quantum-mechanical event, such as radiation produced by the decay of a radioactive substance. One can’t tell from the outside whether the poison has been released or not, so one doesn’t know whether the cat is alive or dead. When one opens the box, one learns the truth. Whether the cat has collapsed or not, the wave-function certainly does. At this point one is effectively making a quantum measurement so the wave-function of the cat is either “dead” or “alive” but before opening the box it must be in a superposition state. But do we really think the cat is neither dead nor alive? Isn’t it certainly one or the other, but that our lack of information prevents us from knowing which? And if this is true for a macroscopic object such as a cat, why can’t it be true for a microscopic system, such as that involving just a pair of electrons?

As I learned at a talk by the Nobel prize-winning physicist Tony Leggett – who has been collecting data on this recently – most physicists think Schrödinger’s cat is definitely alive or dead before the box is opened. However, most physicists don’t believe that an electron definitely spins either up or down before a measurement is made. But where does one draw the line between the microscopic and macroscopic descriptions of reality? If quantum mechanics works for 1 particle, does it work also for 10, 1000? Or, for that matter, 10²³?

Most modern physicists eschew the Copenhagen interpretation in favour of one or other of two modern interpretations. One involves the concept of quantum decoherence, which is basically the idea that the phase information that is crucial to the underlying logic of quantum theory can be destroyed by the interaction of a microscopic system with one of larger size. In effect, this hides the quantum nature of macroscopic systems and allows us to use a more classical description for complicated objects. This certainly happens in practice, but this idea seems to me merely to defer the problem of interpretation rather than solve it. The fact that a large and complex system tends to hide its quantum nature from us does not in itself give us the right to adopt different interpretations of the wave-function for big things and for small things.

Another trendy way to think about quantum theory is the so-called Many-Worlds interpretation. This asserts that our Universe comprises an ensemble – sometimes called a multiverse – and  probabilities are defined over this ensemble. In effect when an electron leaves its source it travels through infinitely many paths in this ensemble of possible worlds, interfering with itself on the way. We live in just one slice of the multiverse so at the end we perceive the electron winding up at just one point on our screen. Part of this is to some extent excusable, because many scientists still believe that one has to have an ensemble in order to have a well-defined probability theory. If one adopts a more sensible interpretation of probability then this is not actually necessary; probability does not have to be interpreted in terms of frequencies. But the many-worlds brigade goes even further than this. They assert that these parallel universes are real. What this means is not completely clear, as one can never visit parallel universes other than our own …

It seems to me that none of these interpretations is at all satisfactory and, in the gap left by the failure to find a sensible way to understand “quantum reality”, there has grown a pathological industry of pseudo-scientific gobbledegook. Claims that entanglement is consistent with telepathy, that parallel universes are scientific truths, and that consciousness is a quantum phenomenon abound in the New Age sections of bookshops but have no rational foundation. Physicists may complain about this, but they have only themselves to blame.

But there is one remaining possibility, an interpretation that has been unfairly neglected by quantum theorists despite – or perhaps because of – the fact that it is the closest of all to commonsense. This is the view that quantum mechanics is just an incomplete theory, and that the reason it produces only a probabilistic description is that it does not provide sufficient information to make definite predictions. This line of reasoning has a distinguished pedigree, but fell out of favour after the arrival of Bell’s theorem and related issues. Early ideas on this theme revolved around the idea that particles could carry “hidden variables” whose behaviour we could not predict because our fundamental description is inadequate. In other words, two apparently identical electrons are not really identical; something we cannot directly measure marks them apart. If this works, then we can simply use probability theory to deal with inferences made on the basis of information that is not sufficient for absolute certainty.

After Bell’s work, however, it became clear that these hidden variables must possess a very peculiar property if they are to describe our quantum world. The property of entanglement requires the hidden variables to be non-local. In other words, two electrons must be able to communicate their values faster than the speed of light. Putting this conclusion together with relativity leads one to deduce that the chain of cause and effect must break down: hidden variables are therefore acausal. This is such an unpalatable idea that it seems to many physicists to be even worse than the alternatives, but to me it seems entirely plausible that the causal structure of space-time must break down at some level. On the other hand, not all “incomplete” interpretations of quantum theory involve hidden variables.

One can think of this category of interpretation as involving an epistemological view of quantum mechanics. The probabilistic nature of the theory has, in some sense, a subjective origin. It represents deficiencies in our state of knowledge. The alternative Copenhagen and Many-Worlds views I discussed above differ greatly from each other, but each is characterized by the mistaken desire to put quantum mechanics – and, therefore, probability –  in the realm of ontology.

The idea that quantum mechanics might be incomplete  (or even just fundamentally “wrong”) does not seem to me to be all that radical. Although it has been very successful, there are sufficiently many problems of interpretation associated with it that perhaps it will eventually be replaced by something more fundamental, or at least different. Surprisingly, this is a somewhat heretical view among physicists: most, including several Nobel laureates, seem to think that quantum theory is unquestionably the most complete description of nature we will ever obtain. That may be true, of course. But if we never look any deeper we will certainly never know…

With the gradual re-emergence of Bayesian approaches in other branches of physics a number of important steps have been taken towards the construction of a truly inductive interpretation of quantum mechanics. This programme sets out to understand  probability in terms of the “degree of belief” that characterizes Bayesian probabilities. Recently, Christopher Fuchs, amongst others, has shown that, contrary to popular myth, the role of probability in quantum mechanics can indeed be understood in this way and, moreover, that a theory in which quantum states are states of knowledge rather than states of reality is complete and well-defined. I am not claiming that this argument is settled, but this approach seems to me by far the most compelling and it is a pity more people aren’t following it up…



Spinning Out

Posted in Cricket, The Universe and Stuff on September 6, 2010 by telescoper

I don’t know why, but last week was my most popular week ever, at least in terms of blog hits! I was going to follow up with a foray into the role of spin in quantum mechanics, but decided instead to settle for a less ambitious project for this evening.

Yesterday I walked past the cricket ground at the SWALEC Stadium in Sophia Gardens, Cardiff, during the Twenty20 international between England and Pakistan. There is another match of this type tomorrow night, which I’ll actually be going to, as long as it’s not rained off, but I have too many things to do to go to both games. Anyway, England’s excellent off-spinner Graeme Swann was bowling when I watched through a gap in the stands at the river end of the stadium. He seemed to be getting an impressive amount of turn, and I got to wondering about how fast a bowler like “Swannee” actually spins the ball.

For those of you not so familiar with cricket here’s a clip of another prodigious spinner of the ball, Australia’s legend of legspin Shane Warne:

For beginners, the game of cricket is a bit similar to baseball (insofar as it’s a game involving a bat and a ball), but the “strike zone” in cricket is a physical object (a “wicket” made of wooden stumps with bails balanced on the top) unlike the baseball equivalent, which exists only in the mind of the umpire. The batsman must prevent the ball hitting the wicket and also try to score runs if he can. In contrast to baseball, however, he doesn’t have to score; he can elect to play a purely defensive shot or even not play any shot at all if he judges the ball is going to miss, which is what happened to the hapless batsman in the clip.

You will see that Warne imparts considerable spin on the ball, which has the effect of making it change direction when it bounces.  The fact that the ball hits the playing surface before the batsman has a chance to play it introduces extra variables that you don’t see in baseball,  such as the state of the pitch (which generally deteriorates over the five days of a Test match, especially in the “rough” where bowlers have been running in). A spin bowler who causes the ball to deviate from right to left is called a legspin bowler, while one who makes it turn the other way is an offspin bowler. An orthodox legspinner generates most of the spin from a flick of the wrist while an offspinner mainly lets his fingers do the torquing.

Another difference that’s worth mentioning with respect to baseball is that the ball is bowled, i.e. the bowler’s arm is not supposed to bend during the delivery (although apparently that doesn’t apply if he’s from Sri Lanka). However, the bowler is allowed to take a run up, which will be quite short for a spin bowler, but long like a javelin thrower if it’s a fast bowler. Fast bowlers – who can bowl up to 95 mph (150 km/h) – don’t spin the ball to any degree but have other tricks up their sleeve I haven’t got time to go into here. A typical spin bowler delivers the ball at speeds ranging from 45 mph to 60 mph (70 km/hour to 100 km/hour).

The physical properties of a cricket ball are specified in the Laws of Cricket. It must be between 22.4 and 22.9 cm in circumference, i.e. 3.57 to 3.64 cm in radius and must weigh between 155.9g and 163g. It’s round, made of cork, and surrounded by a leather case with a stitched seam.

So now, after all that, I can give a back-of-the-envelope answer to the question I was wondering about on the way home. Looking at the video clip my initial impression was that the ball is deflected by an angle as large as a radian, but the foreshortening effect of the camera is quite deceptive. In fact the ball deviates by less than a metre between pitching and hitting the stumps. There is a gap of about 1 metre between the popping crease (where the batsman stands) and the stumps – it looks much less from the camera angle shown – and the ball probably pitches at least 2 metres in front of the crease. I would guess, therefore, that it actually deflects by an angle of twenty degrees or less.

What happens physically is that some of the rotational kinetic energy of the ball is converted into translational kinetic energy associated with a component of the velocity at right angles to the original direction of travel. In order for the deflection to be so large, the available rotational kinetic energy must be non-negligible compared to the original kinetic energy of the ball. If the mass of the ball is M and its speed is v, its translational kinetic energy is T=\frac{1}{2} Mv^2. If the angular velocity of rotation is \omega then the rotational kinetic energy is \Omega =\frac{1}{2} I \omega^2, where I is the moment of inertia of the ball.

Approximating the ball as a uniform sphere of mass M and radius a, the moment of inertia is I=\frac{2}{5}Ma^2.  Putting T=\Omega, cancelling M on both sides and ignoring the factor of \frac{2}{5} – because I’m lazy – we see that the rotational and translational kinetic energies are comparable if

v^2 \simeq a^2\omega^2,

or \omega \simeq \frac{v}{a}, which makes sense because a\omega is just the speed of a point on the equator of the ball owing to the ball’s rotational motion. This equation therefore says that the speed of sideways motion of a point on the ball’s surface must be roughly comparable to the speed of the ball’s forward motion. Taking v=80 km/h gives v\simeq \frac{80 \times 10^3}{60 \times 60} \simeq 20 m/s and a\simeq 0.036 m gives \omega \simeq 600 radians per second, which is about 100 revolutions per second. This would cause a huge deviation (about 45 degrees), but the real effect is rather smaller, as I discussed above. If the deflection is actually around 15 degrees then the rotation speed needed would be around 30 rev/s.

This estimate is obviously very rough because it ignores the direction of spin and the efficiency with which the ball grips the pitch – friction is obviously involved in the change of direction – but it gives a reasonable ballpark (or at least cricket ground) estimate.
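The whole back-of-the-envelope estimate fits in a few lines of Python. Note that the relation tan(θ) ≈ aω/v linking the deflection angle to the surface speed is my own crude modelling assumption – it amounts to supposing the ball grips the pitch perfectly – not anything from the Laws of Cricket:

```python
import math

v = 80e3 / 3600   # delivery speed: 80 km/h in m/s (about 22 m/s)
a = 0.036         # ball radius in metres (the mass cancels out)

# Spin rate at which rotational and translational kinetic energies
# are comparable, ignoring the factor 2/5 in I = (2/5) M a^2
omega_equal = v / a                        # rad/s
print(omega_equal / (2 * math.pi))         # ~98 rev/s

# Crude model: tan(deflection) ~ a*omega / v, i.e. surface speed
# over forward speed, assuming the ball grips perfectly
theta = math.radians(15)                   # observed deflection
omega_needed = math.tan(theta) * v / a     # rad/s
print(omega_needed / (2 * math.pi))        # ~26 rev/s
```

A 15-degree deflection comes out at roughly 26 revolutions per second, consistent with the 30 rev/s ballpark quoted above.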

Of course if the bowler does the same thing every time it’s relatively easy for the batsman to allow for the spin. The best  bowlers therefore vary the amount and angle of spin they impart on each ball. Most, in fact,  have at least two qualitatively different types of ball but they disguise the differences in the act of delivery. Offspinners typically have an “arm ball” which doesn’t really spin but holds its line without appearing to be any different to their spinning delivery. Legspinners usually have a variety of alternative balls,  including a topspinner and/or a flipper and/or a googly. The latter is a ball that comes out of the back of the hand and actually spins the opposite way to a legspinner while being produced with apparently the same action. It’s very hard to bowl a googly accurately, but it’s a deadly thing when done right.

Another thing also worth mentioning is that the rotation of the cricket ball also causes a deviation of its flightpath through the air, by virtue of the Magnus effect. This causes the ball to curve in the air in the opposite direction to which it is going to deviate on bouncing, i.e. it would drift into a right-handed batsman before breaking away from him off the pitch. You can see a considerable amount of such movement in the video clip,  away from the left-hander in the air and then back into him off the pitch. Nature clearly likes to make things tough for batsmen!

With a number of secret weapons in his armoury the spin bowler can be a formidable opponent, a fact that has apparently been known to poets, philosophers and astronomers for the best part of a thousand years:

The Ball no Question makes of Ayes and Noes,
But Right or Left, as strikes the Player goes;
And he that toss’d Thee down into the Field,
He knows about it all — He knows — HE knows!

The Rubaiyat of Omar Khayyam [50]



Get thee behind me, Plato

Posted in The Universe and Stuff on September 4, 2010 by telescoper

The blogosphere, even the tiny little bit of it that I know anything about, has a habit of summoning up strange coincidences between things so, following EM Forster’s maxim “only connect”, I thought I’d spend a lazy Saturday lunchtime trying to draw a couple of them together.

A few days ago I posted what was intended to be a fun little item about the wave-particle duality in quantum mechanics. Basically, what I was trying to say is that there’s no real problem about thinking of an electron as behaving sometimes like a wave and sometimes like a particle because, in reality (whatever that is), it is neither. “Particle” and “wave” are useful abstractions but they are not in an exact one-to-one correspondence with natural phenomena.

Before going on I should point out that the vast majority of physicists are well aware of the distinction between, say, the “theoretical” electron and whatever the “real thing” is. We physicists tend to live in theory space rather than in the real world, so we tend to teach physics by developing the formal mathematical properties of the “electron” (or “electric field”) or whatever, and working out what experimental consequences these entail in certain situations. Generally speaking, the theory works so well in practice that we often talk about the theoretical electron that exists in the realm of mathematics and the electron-in-itself as if they were one and the same thing. As long as this is just a pragmatic shorthand, it’s fine. However, I think we need to be careful to keep this sort of language under control. Pushing theoretical ideas out into the ontological domain is a dangerous game. Physics – especially quantum physics – is best understood as a branch of epistemology. “What is known?” is safer ground than “what is there?”

Anyway, my little piece sparked a number of interesting comments on Reddit, including a thread that went along the lines of “of course an electron is neither a particle nor a wave, it’s actually a spin-1/2 projective representation of the Lorentz Group on a Hilbert space”. That description, involving more sophisticated mathematical concepts than those involved in bog-standard quantum mechanics, undoubtedly provides a more complete account of the natural phenomena associated with electrons and electric fields, but I’ll stick to my guns and maintain that it still introduces a deep confusion to assert that the electron “is” something mathematical, whether that’s a “spin-1/2 projective representation” or a complex function or anything else. That’s saying that something physical is something mathematical. Both entities have some sort of existence, of course, but not the same sort, and the one cannot “be” the other. “Certain aspects of an electron’s behaviour can be described by certain mathematical structures” is as far as I’m prepared to go.

Pushing deeper than quantum mechanics, into the realm of quantum field theory, there was the following contribution:

The electron field is a quantum field as described in quantum field theories. A quantum field covers all space-time and at each point the quantum field is in some state; it could be the ground state or it could be an excitation above the ground state. The excitations of the electron field are the so-called electrons. The mathematical object that describes the electron field possesses, amongst others, certain properties that deal with transformations of the space-time coordinates. If, when performing a transformation of the space-time coordinates, the mathematical object changes in a way that is compatible with the physics of the quantum field, then one says that the mathematical object of the field (also called the field) is represented by a spin-1/2 (in the electron case) representation of a certain group of transformations (the Poincaré group, in this example).

I understand your quibbling; it seems natural to think that “spin 1/2” is a property of the mathematical tool used to describe something, not the something itself. If you press on with that distinction, however, you should be utterly puzzled as to why physics should follow, step by step, the path led by mathematics.

For example, one speaks about the “invariance under the local action of the group SU(3)” as a fundamental property of the fields that feel the strong nuclear force. This has two implications: the mathematical object that represents quarks must have 3 “strong” degrees of freedom (the so-called colour) and there must be 3² − 1 = 8 carriers of the force (the gluons), because the group of transformations in an SU(N) group has N² − 1 generators. And this is precisely what is observed.

So an extremely abstract mathematical principle correctly accounts for the dynamics of an immensely large quantity of phenomena. Why then does physics follow the derivations of mathematics if its true nature is somewhat different?

No doubt this line of reasoning is why so many theoretical physicists seem to adopt a view of the world that regards mathematical theories as being, as it were,  “built into” nature rather than being things we humans invented to describe nature. This is a form of Platonic realism.

I’m no expert on matters philosophical, but I’d say that I find this stance very difficult to understand, although I am prepared to go part of the way. I used to work in a Mathematics department many years ago and one of the questions that came up at coffee time occasionally was “Is mathematics invented or discovered?”. In my experience, pure mathematicians always answered “discovered” while others (especially astronomers) said “invented”. For what it’s worth, I think mathematics is a bit of both. Of course we can invent mathematical objects, endow them with certain attributes and prescribe rules for manipulating them and combining them with other entities. However, once they are invented, anything that is worked out from them is “discovered”. In fact, one could argue that all mathematical theorems etc. arising within such a system are simply tautological expressions of the rules you started with.

Of course physicists use mathematics to construct models that describe natural phenomena. Here the process is different from mathematical discovery, as what we’re trying to do is work out which, if any, of the possible theories actually accounts best for whatever empirical data we have. While it’s true that this programme requires us to accept that there are natural phenomena that can be described in mathematical terms, I do not accept that it requires us to accept that nature “is” mathematical. It requires that there be some sort of law governing some aspects of nature’s behaviour, but not that such laws account for everything.

Of course, mathematical ideas have been extremely successful in helping physicists build new physical descriptions of reality. On the other hand, however, there is a great deal of mathematical formalism that is not useful in this way. Physicists have had to select those mathematical objects that we can use to represent natural phenomena, like selecting words from a dictionary. The fact that we can assemble a sentence using words from the Oxford English Dictionary that conveys some information about something we see doesn’t mean that what we see “is” English. A whole load of grammatically correct sentences can be constructed that don’t make any sense in terms of observable reality, just as there is a great deal of mathematics that is internally self-consistent but makes no contact with physics.

Moreover, to the person whose quote I commented on above, I’d agree that the properties of the SU(3) gauge group have indeed accounted for many phenomena associated with the strong interaction, which is why the standard model of particle physics contains 8 gluons and quarks carrying a three-fold colour charge, as described by quantum chromodynamics. Leaving aside the fact that QCD is such a terribly difficult theory to work with – in practice it involves nightmarish lattice calculations on a scale to make even the most diehard enthusiast cringe – what I would ask is whether this description is in any case sufficient for us to assert that it describes “true nature”? Many physicists will no doubt disagree with me, but I don’t think so. It’s a map, not the territory.

So why am I boring you all with this rambling dissertation? Well, it brings me to my other post – about Stephen Hawking’s comments about God. I don’t want to go over that issue again – frankly, I was bored with it before I’d finished writing my own blog post – but it does relate to the bee that I often find in my bonnet about the tendency of many modern theoretical physicists to assign the wrong category of existence to their mathematical ideas. The prime example that springs to my mind is the multiverse. I can tolerate certain versions of the multiverse idea, in fact. What I can’t swallow, however, is the identification of the possible landscape of string theory vacua – essentially a huge set of possible solutions of a complicated set of mathematical equations – with a realised set of “parallel universes”. That particular ontological step just seems absurd to me.

I’m just about done, but one more thing I’d like to finish with concerns the (admittedly overused) metaphor of maps and territories. Maps are undoubtedly useful in helping us find our way around, but we have to remember that there are always things that aren’t on the map at all. If we rely too heavily on one, we might miss something of great interest that the cartographer didn’t think important. Likewise, if we fool ourselves into thinking our descriptions of nature are so complete that they “are” all that nature is, then we might miss the road to a better understanding.



Hawking and the Mind of God

Posted in Books, Talks and Reviews, Science Politics, The Universe and Stuff on September 2, 2010 by telescoper

I woke up this morning to the news that, according to Stephen Hawking, God did not create the Universe but it was instead an “inevitable consequence of the laws of physics”. By sheer coincidence this daft pronouncement has come out at the same time as the publication of Professor Hawking’s new book, an extract of which appears in today’s Times.

It’s interesting that such a fatuous statement managed to become a lead item on the radio news and a headline in all the national newspapers despite being so obviously devoid of any meaning whatsoever. How can the Universe be  “a consequence” of the theories that we invented to describe it? To me that’s just like saying that the Lake District is a consequence of an Ordnance Survey map. And where did the Laws of Physics come from, if not from God?

Stephen Hawking is undoubtedly a very brilliant theoretical physicist. However, something I’ve noticed about theoretical physicists over the years is that if you get them talking on subjects outside physics they are generally likely to say things just as daft as some drunk bloke  down the pub. I’m afraid this is a case in point.

Part of me just wants to laugh this story off, but another part is alarmed at what must appear to many to be an example of an arrogant scientist presuming to pass judgement on subjects that are really none of his business. When scientists complain about the lack of enthusiasm shown by sections of the public towards their subject, perhaps they should take seriously the alienating effect that such statements can have. This kind of thing isn’t what I’d call public engagement. Quite the opposite, in fact.

In case anyone is interested, I am not religious but I do think that there are many things that science does not – and probably will never – explain, such as why there is something rather than nothing. I also believe that science and religious belief are not in principle incompatible – although whether there is a conflict in practice does depend, of course, on the form of religious belief and how it is observed. God and physics are in my view pretty much orthogonal. To put it another way, if I were religious, there’s nothing in theoretical physics that would make me want to change my mind. However, I’ll leave it to those many physicists who are learned in matters of theology to take up the (metaphorical) cudgels with Professor Hawking.

No doubt this bit of publicity will increase the sales of the new book, so I’ve decided to point out that I have written a book myself on precisely this question, which is available from all good airport bookshops. I’m sure you’ll understand that there isn’t a hint of opportunism in the way I’m drawing this to your attention. If you think this is a cynical attempt to cash in then all I can say is

BUY MY BOOK!

I also noticed that today’s Grauniad is offering a poll on the existence or non-existence of God. I noticed some time ago that there’s a poll facility on WordPress, so this gives me an excuse to try repeating it here. Anything dumb the Guardian can do, I can do dumber. However, owing to funding cuts I’ve decided to do a single poll encompassing several topical news stories at the same time.



Dragons and Unicorns

Posted in Education, The Universe and Stuff on August 30, 2010 by telescoper

When I was an undergraduate I was often told by lecturers that I should find quantum mechanics very difficult, because it is unlike the classical physics I had learned about up to that point. The difference – or so I was informed – was that classical systems were predictable, but quantum systems were not. For that reason the microscopic world could only be described in terms of probabilities. I was a bit confused by this, because I already knew that many classical systems were predictable in principle, but not really in practice. I blogged about this some time ago, in fact. It was only when I had studied theory for a long time – almost three years – that I realised what was the correct way to be confused about it. In short, quantum probability is a very strange kind of probability that displays many peculiarities and subtleties  that one doesn’t see in the kind of systems we normally think of as “random”, such as coin-tossing or roulette wheels.

To illustrate how curious the quantum universe is we have to look no further than the very basic level of quantum theory, as formulated by the founder of wave mechanics, Erwin Schrödinger. Schrödinger was born in 1887 into an affluent Austrian family made rich by a successful oilcloth business run by his father. He was educated at home by a private tutor before going to the University of Vienna where he obtained his doctorate in 1910. During the First World War he served in the artillery, but was posted to an isolated fort where he found lots of time to read about physics. After the end of hostilities he travelled around Europe and started a series of inspired papers on the subject now known as wave mechanics; his first work on this topic appeared in 1926. He succeeded Planck as Professor of Theoretical Physics in Berlin, but left for Oxford when Hitler took control of Germany in 1933. He left Oxford in 1936 to return to Austria but fled when the Nazis seized the country and he ended up in Dublin, at the Institute for Advanced Studies which was created especially for him by the Irish Taoiseach, Eamon de Valera. He remained there happily for 17 years before returning to his native land at the University of Vienna. Sadly, he became ill shortly after arriving there and died in 1961.

Schrödinger was a friendly and informal man who got on extremely well with colleagues and students alike. He was also a bit scruffy, even to the extent that he sometimes had trouble getting into major scientific conferences, such as the invitation-only Solvay conferences attended by the leading physicists of the day. Physicists have never been noted for their sartorial elegance, but Schrödinger must have been an extreme case.

The theory of wave mechanics arose from work published in 1924 by de Broglie who had suggested that every particle has a wave somehow associated with it, and the overall behaviour of a system resulted from some combination of its particle-like and wave-like properties. What Schrödinger did was to write down an equation, involving a Hamiltonian describing particle motion of the form I have discussed before, but written in such a way as to resemble the equation used to describe wave phenomena throughout physics. The resulting mathematical form for a single particle is

i\hbar\frac{\partial \Psi}{\partial t} = \hat{H}\Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi,

in which \Psi is called the wave-function of the particle. As usual, the Hamiltonian \hat{H} consists of two parts: the first term on the right-hand side describes the kinetic energy and the second the potential energy, represented by V. This equation – the Schrödinger equation – is one of the most important in all physics.
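As a quick sanity check, notice that for a free particle (V=0) a plane wave solves the equation, and substituting it in fixes the relation between frequency and wavenumber:

\Psi = e^{i(kx-\omega t)} \quad \Rightarrow \quad \hbar\omega = \frac{\hbar^2 k^2}{2m},

which, using the de Broglie relations E=\hbar\omega and p=\hbar k, is just the classical expression E = p^2/2m for the kinetic energy of a particle.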

At the time Schrödinger was developing his theory of wave mechanics it had a rival, called matrix mechanics, developed by Werner Heisenberg and others. Paul Dirac later proved that wave mechanics and matrix mechanics were mathematically equivalent; these days physicists generally use whichever of these two approaches is most convenient for particular problems.

Schrödinger’s equation is important historically because it brought together lots of bits and pieces of ideas connected with quantum theory into a single coherent descriptive framework. For example, in 1911 Niels Bohr had begun looking at a simple theory for the hydrogen atom which involved a nucleus consisting of a positively charged proton with a negatively charged electron moving around it in a circular orbit. According to standard electromagnetic theory this picture has a flaw in it: the electron is accelerating and consequently should radiate energy. The orbit of the electron should therefore decay rather quickly.

Bohr hypothesized that special states of this system were actually stable; these states were ones in which the orbital angular momentum of the electron was an integer multiple of Planck’s constant divided by 2\pi. This simple idea endows the hydrogen atom with a discrete set of energy levels which, as Bohr showed in 1913, were consistent with the appearance of sharp lines in the spectrum of light emitted by hydrogen gas when it is excited by, for example, an electrical discharge. The calculated positions of these lines were in good agreement with the measured wavelengths summarised in the Rydberg formula, so the Bohr theory was in good shape. But where did the quantised angular momentum come from?

The Schrödinger equation describes some form of wave; its solutions \Psi(\vec{x},t) are generally oscillating functions of position and time. If we want it to describe a stable state then we need a solution whose observable properties do not vary with time, so we look for solutions of the form \Psi(\vec{x},t)=\psi(\vec{x})e^{-iEt/\hbar}, which turns the Schrödinger equation into a time-independent equation, \hat{H}\psi = E\psi, for the spatial part \psi. The hydrogen atom is a bit like a solar system with only one planet going around a star, so the potential has spherical symmetry, which simplifies things a lot. The solutions we get are waves, and the mathematical task is to find waves that fit along a circular orbit just like standing waves on a circular string. Immediately we see why the solution must be quantized. To exist on a circle the wave can’t just have any wavelength; it has to fit into the circumference of the circle in such a way that it winds up at the same value after a round trip. In Schrödinger’s theory the quantisation of orbits is not just an ad hoc assumption: it emerges naturally from the wave-like nature of the solutions to his equation.
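In fact the simplest version of this fitting argument, originally due to de Broglie, gives Bohr’s rule directly. Requiring a whole number n of de Broglie wavelengths \lambda = h/mv to wrap around an orbit of radius r gives

n\lambda = \frac{nh}{mv} = 2\pi r \quad \Rightarrow \quad mvr = \frac{nh}{2\pi} = n\hbar,

so the orbital angular momentum comes out quantised in units of \hbar, exactly as Bohr had assumed.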

The Schrödinger equation can be applied successfully to systems which are much more complicated than the hydrogen atom, such as complex atoms with many electrons orbiting the nucleus and interacting with each other. In this context, this description is the basis of most work in theoretical chemistry. But it also poses very deep conceptual challenges, chiefly about how the notion of a “particle” relates to the “wave” that somehow accompanies it.

To illustrate the riddle, consider a very simple experiment in which particles of some type (say electrons, but it doesn’t really matter; similar experiments can be done with photons or other particles) emerge from a source on the left, pass through two narrow slits in a screen in the middle, and are detected at a screen on the right.

In a purely “particle” description we would think of the electrons as little billiard balls being fired from the source. Each one then travels along a well-defined path, somehow interacts with the screen and ends up in some position on the detector. On the other hand, in a “wave” description we would imagine a wave front emerging from the source, being diffracted by the slits and ending up as some kind of interference pattern at the detector. This is what we see with light, for example, in the phenomenon known as Young’s fringes.

In quantum theory we have to think of the system as being in some sense both a wave and a particle. This is forced on us by the fact that we actually observe a pattern of “fringes” at the detector, indicating wave-like interference, but we can also detect the arrival of individual electrons as little dots. Somehow the propensity of electrons to arrive in particular positions on the screen is controlled by an element of waviness, but they manage to retain some aspect of their particleness. Moreover, one can turn the source intensity down to a level where there is only ever one electron in the experiment at any time. One sees the dots arrive one by one on the detector, but adding them up over a long time still yields a pattern of fringes.

Curiouser and curiouser, said Alice.

Eventually the community of physicists settled on a party line that most still stick to: that the wave-function controls the probability of finding an electron at some position when a measurement is made. In fact the mathematical description of wave phenomena favoured by physicists involves complex numbers, so at each point in space and time \Psi is a complex number of the form \Psi= a+ib, where i =\sqrt{-1}; the corresponding probability is given by |\Psi|^2=a^2+b^2. This protocol, however, forbids one to say anything about the state of the particle before it is measured. It is delocalized, not being definitely located anywhere, but only possessing a probability to be in any particular place within the apparatus. One can’t even say which of the two slits it passes through. Somehow, it manages to pass through both slits. Or at least some of its wave-function does.
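The rule “add the complex amplitudes from the two slits, then take the modulus squared” is easy to play with numerically. Here is a minimal sketch, with entirely made-up geometry and wavelength, in which each slit contributes a unit-amplitude wave e^{ikr}; the fringes show up as oscillations in |\Psi|^2:

```python
import numpy as np

k = 2 * np.pi / 0.05   # wavenumber for a made-up wavelength of 0.05 units
d = 0.5                # separation between the two slits
L = 10.0               # distance from the slits to the detector screen

x = np.linspace(-3, 3, 1001)     # positions along the detector
r1 = np.hypot(L, x - d / 2)      # path length from slit 1 to each point
r2 = np.hypot(L, x + d / 2)      # path length from slit 2 to each point

# Superpose the two complex amplitudes, then square the modulus
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)
prob = np.abs(psi) ** 2          # ~4 at a bright fringe, ~0 at a dark one
```

Dropping one of the terms in psi – i.e. blocking a slit – gives a flat prob of 1 everywhere: the fringes vanish, which is exactly the point. Each electron’s probability of arrival involves both paths at once.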

I’m not going to go into the various philosophical arguments about the interpretation of quantum probabilities here, but I will pass on an analogy that helped me come to grips with the idea that an electron can behave in some respects like a wave and in others like a particle. At first thought this seems a troubling paradox, but it only appears so if you insist that our theoretical ideas are literal representations of what happens in reality. I think it’s much more sensible to treat the mathematics as a kind of map or sketch that is useful for helping us find our way around nature, rather than confusing it with nature itself. Neither particles nor waves really exist in the quantum world – they’re just abstractions we use to try to describe as much as we can of what is going on. The fact that it doesn’t work perfectly shouldn’t surprise us, as there are undoubtedly more things in Heaven and Earth than are dreamt of in our philosophy.

Imagine a mediaeval traveller, the first from your town to go to Africa. On his journeys he sees a rhinoceros, a bizarre creature that is unlike anything he’s ever seen before. Later on, when he gets back, he tries to describe the animal to those at home who haven’t seen it.  He thinks very hard. Well, he says, it’s got a long horn on its head, like a unicorn, and it’s got thick leathery skin, like a dragon. Neither dragons nor unicorns exist in nature, but they’re abstractions that are quite useful in conveying something about what a rhinoceros is like.

It’s the same with electrons. Except they don’t have horns and leathery skin. Obviously.



Open Admissions

Posted in Education on August 21, 2010 by telescoper

As I predicted  last week, the A-level results announced on Thursday showed another increase in pass rates and in the number of top grades awarded, although I had forgotten that this year saw the introduction of the new A* grade. Overall, about 27% of students got an A or an A*, although the number getting an A* varied enormously from one course to another. In Further Maths, for example, 30% of the candidates who took the examination achieved an A* grade.

Although I have grave misgivings about the rigour of the assessment used in A-level science subjects, I do nevertheless heartily congratulate all those who have done well. In no way were my criticisms of the examinations system intended to be criticisms of the students who take them, and they thoroughly deserve to celebrate their success.

Another interesting fact worth mentioning is that the number of pupils taking A-level physics rose again this year, by just over 5%, to a total of just over 30,000. After many years of decline in the popularity of physics as an A-level choice, it has now grown steadily over the past three years. Of course not everyone who does physics at A-level goes on to do it at university, but this is nevertheless a good sign for the future health of the subject.

There was a whopping 11.5% growth in the number of students taking Further Mathematics too, and this seems to be part of a general trend for more students to be doing science and technology subjects.

The newspapers have also been full of tales of a frantic rush during the clearing process and the likelihood that many well-qualified aspiring students might miss out on university places altogether. Part of the reason for this is that the government recently put the brake on the expansion of university places, but it’s not all down to government cuts. It’s also at least partly because of the steady increase in the performance of students at A-level. More students are making their offers than before, so the options available for those who did slightly less well than they had hoped are very much more limited.

In fact if you analyse the figures from UCAS you will see that as of Thursday 19th August 2010, 383,230 students had secured a place at university. That’s actually about 10,000 more than at the corresponding stage last year. There were about 50,000 more students eligible to go into clearing this year (183,000 versus 135,000 in 2009), but at least part of this is due to people trying again who didn’t succeed last year. Clearly they won’t all find a place, so there’ll be a number of very disappointed school-leavers around, but they can always try again next year. So although it’s been a tough week for quite a few prospective students, it’s not really the catastrophe that some of the tabloids have been screaming about.

I’m not directly involved in the undergraduate admissions process for the School of Physics & Astronomy at Cardiff University, where I work, but try to keep up with what’s going on. It’s an extremely strange system and I think it’s fair to say that if we could design an admissions process from scratch we wouldn’t end up with the one we have now. Each year our School is given a target number of students to recruit; this year around 85. On the basis of the applications we receive we make a number of offers (e.g.  AAB for three A-levels, including Mathematics and Physics, for the MPhys programme). However, we have to operate a bit like an airline and make more offers than there are places. This is because (a) not all the people we make offers to will take up their offer and (b) not everyone who takes up an offer will make the grades.

In fact students usually apply to 5 universities and are allowed to accept one firm offer (CF) and one insurance choice (CI), in case they miss the grades for their firm choice. If they miss the grades for their CI they go into clearing. This year, as well as a healthy bunch of CFs, we had a huge number of CI acceptances, meaning we were the backup choice for many students whose ideal choice lay elsewhere. We usually don’t end up recruiting all that many students as CIs – most students do make the grades they need for their CF, and if they miss by a whisker the university they put first often takes them anyway. However, this year many of our CIs held CFs with universities we knew were going to be pretty full, and in England at any rate, institutions are going to be fined if they exceed their quotas. It therefore looked possible that we might go over quota because of an unexpected influx of CIs caused by other universities applying their criteria more rigorously than they had in the past. We are, of course, obliged to honour all offers made as part of this process. Here in Wales we don’t actually get fined for overshooting the quota, but it would have been tough fitting excess numbers into the labs and organizing tutorials for them all.
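To see how the airline-style overbooking arithmetic works, here’s a minimal sketch of estimating how many offers to make for a given target intake. The probabilities used are purely illustrative – they’re not real admissions data for Cardiff or anywhere else:

```python
import math

# Rough sketch of the offers-versus-places arithmetic described above.
# The probabilities below are made up for illustration only.

def offers_needed(target_places, p_accept, p_make_grades):
    """Estimate how many offers to make so the expected intake hits target.

    target_places: quota of students to recruit (e.g. 85)
    p_accept: probability an offer-holder firmly accepts (CF)
    p_make_grades: probability an accepting student achieves the grades
    """
    expected_yield_per_offer = p_accept * p_make_grades
    # Round up: one extra offer is safer than falling short of quota
    return math.ceil(target_places / expected_yield_per_offer)

if __name__ == "__main__":
    # Suppose 40% of offer-holders accept firmly and 70% of those
    # make the grades: you'd need roughly 304 offers for 85 places.
    print(offers_needed(85, 0.40, 0.70))
```

Of course the real uncertainty lies in estimating those probabilities in the first place, which is why the guesswork can go wrong in either direction – hence the fear of an unexpected influx of CIs.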

Fortunately, our admissions team (led by Carole Tucker) is very experienced at reading the lie of the land. As it turned out, the feared influx of CIs didn’t materialise, and we even had a dip into the clearing system to recruit one or two good-quality applicants who had fallen through the cracks elsewhere. We seem to have turned out all right again this year, so it’s business as usual in October. In case you’re wondering, Cardiff University is now officially full up for 2010.

There’s a lot of guesswork involved in this system, which seems to me to make it unnecessarily fraught for us, and obviously for the students too! It would make more sense for students to apply after they’ve got their results, not before, but this would require wholesale changes to the academic year. It’s been suggested before, but never got anywhere. One thing we do very well in the Higher Education sector is inertia!

I thought I’d end with another “news” item from the Guardian that claims that the Russell Group of universities – to which Cardiff belongs – operates a blacklist of A-level subjects that it considers inappropriate:

The country’s top universities have been called on to come clean about an unofficial list or lists of “banned” A-level subjects that may have prevented tens of thousands of state school pupils getting on to degree courses.

Teachers suspect the Russell Group of universities – which includes Oxford and Cambridge – of rejecting outright pupils who take A-level subjects that appear on the unpublished lists.

The lists are said to contain subjects such as law, art and design, business studies, drama and theatre studies – non-traditional A-level subjects predominantly offered by comprehensives, rather than private schools.

Of course when we’re selecting students for Physics programmes we request Physics and Mathematics A-level rather than Art and Design, simply because the latter do not provide an adequate preparation for what is quite a demanding course.  Other Schools no doubt make offers on a similar basis. It’s got nothing to do with  a bias against state schools, simply an attempt to select students who can cope with the course they have applied to do.

Moreover, speaking as a physicist I’d like to turn this whole thing around. Why is it that so many state schools teach these subjects instead of “traditional” subjects, including sciences such as physics? Why is it that so many comprehensive schools are allowed to operate as state-funded schools without offering adequate provision for science education? To my mind that’s a real, and far more insidious, form of blacklisting than what is alleged by the Guardian.

Death and Strawberries

Posted in Poetry with tags , , , , on August 20, 2010 by telescoper

This week in August 2010 has taken on quite a melancholy mood. Only a few days ago there was the death of physicist Nicola Cabibbo. Yesterday I heard that the great Russian mathematician Vladimir Igorevich Arnold, who did a lot of work of interest to physicists, had also passed away aged 72. And then this morning I was saddened to hear of the death of the wonderful Scottish poet Edwin Morgan, of pneumonia, at the age of 90.

It’s always sad when someone who has contributed so much to their field – whether it’s artistic or scientific – passes away, but the consolation is that each of them in their own way has left a wonderful legacy that remains to be treasured and will also inspire future generations.

Anyway, I thought I’d mark the passing of Edwin Morgan with my favourite poem of his, called Strawberries.

There were never strawberries
like the ones we had
that sultry afternoon
sitting on the step
of the open french window
facing each other
your knees held in mine
the blue plates in our laps
the strawberries glistening
in the hot sunlight
we dipped them in sugar
looking at each other
not hurrying the feast
for one to come
the empty plates
laid on the stone together
with the two forks crossed
and I bent towards you
sweet in that air

in my arms
abandoned like a child
from your eager mouth
the taste of strawberries
in my memory
lean back again
let me love you

let the sun beat
on our forgetfulness
one hour of all
the heat intense
and summer lightning
on the Kilpatrick hills

let the storm wash the plates

It may surprise you to learn that this poem is not written by a man to a woman, but from one man to another. A similar reaction is sometimes provoked by certain of Shakespeare’s Sonnets. It came as a shock to quite a few people, in fact, because Edwin Morgan kept the subject of the poem to himself for a very long time. It wasn’t until he was 70 that the poet stepped out of the closet, announced that he was gay, and explained that the poem was written about an experience he shared with another man. He maintained that at least part of the reason for not being open publicly was that he didn’t want to be branded as a “gay” poet, and that his poems were intended to be universal – which (in my view) they are, but then that depends on what kind of universe you live in.

Grade Inflation

Posted in Education, Politics with tags , , , on August 12, 2010 by telescoper

Still too busy to post anything too substantial, but since this year’s A-level results are out next week – with the consequent scramble for University places – I thought I’d take a few minutes to share this graph (taken from an article on the BBC website) which shows the steady improvement in student performance over the last few decades.

Nowadays, on average, about 27 per cent of students taking an A-level get a grade A. When I took mine (in 1981, if you must ask) the fraction getting an A was about 9%. It’s scary to think that I belong to a generation that must be so much less intelligent than the current one. Or could it be – dare I say it? – that A-level examinations might be getting easier?

Looking at the graph makes it clear that something happened around the mid-1980s that initiated an almost linear growth in the percentage of A-grades. I don’t know what will happen when the results come out next week, but it’s a reasonably safe bet that the trend will continue.

I can’t speak for other subjects, but there’s no question whatsoever that the level of achievement needed to get an A-grade in mathematics is much lower now than it was in the past. This has been proven over and over again. A few years ago, an article in the Times Higher discussed the evidence, including an analysis of the performance of new students on a diagnostic mathematics test they had to take on entering University.  The same test, covering basic algebra, trigonometry and calculus, had been administered every year so provided a good diagnostic of real mathematical ability that could be compared with the A-level grades achieved by the students.  They found, among other things, that students entering university with a grade B in mathematics in 1999 performed at about the same level as students in 1991 who had failed mathematics A-level.

The steadily decreasing level of mathematical training students receive in schools poses great problems not only for mathematics courses, but also for subjects like physics. We have to devote so much more time to the physics equivalent of “basic training” that we struggle to cover all the physics we should be covering in a degree programme. Thus the dumbing down of A-levels leads to pressure to dumb down degrees too.

That brings me to the prospect of huge cuts – up to 35% if the stories are true – in government funding for universities, leading to pressure to shorten the traditional three-year Bachelors degree to one that takes only two years to complete. If this goes ahead it won’t be long before a student can get a degree by achieving the same level of knowledge as would have been displayed by an A-level student 30 years ago. Are we supposed to call this progress?

Or perhaps this business about two-year degrees really does make sense. Maybe we should just accept that universities have to offer such courses because the school system has become broken beyond repair over the last 30 years, and it will be up to certain Higher Education institutions from now on to do the job that school sixth-forms used to do, i.e. teach A-levels.

A Sonnet of Significance

Posted in Poetry, The Universe and Stuff with tags , , , on August 3, 2010 by telescoper

Inspired by Dennis Overbye’s nice article in the New York Times about the plethora of false detections in physics and astronomy, and another one in Physics World by Robert P Crease with a similar theme, I’ve decided to relaunch my campaign to become the next Poet Laureate with this Sonnet (in Petrarchan form) which I offer as an homage to John Keats. I’ve slavishly copied the rhyme scheme of one of Keats’ greatest poems, although I think I’ve made all the lines scan properly, which he didn’t manage to do in the original. Nevertheless, I’m sure that if he were alive today he’d be turning in his grave.

Much have I marvell’d at discov’ries bold
And many gushing press releases seen
But often what is “found” just hasn’t been
(Though only rather later are we told).
Be doubtful if you ever do behold
A scientific “certainty” between
The pages of a Sunday magazine;
The complex truth is rarely so extolled.
So if you are a watcher of the skies
Or particle detection is your yen,
Refrain from spreading rumour and surmise
Lest you look silly time and time again.
Two sigma peaks – so you should realise –
Are naught but noise, so hold your tongue. Amen.