Archive for Quantum Mechanics

Brian Cox and the Exclusion Principle

Posted in The Universe and Stuff on February 22, 2012 by telescoper

I know a few students of Quantum Mechanics read this blog so here’s a little challenge. View the following video segment featuring Sir Brian of Cox and see if you can spot the deliberate (?) mistake contained therein on the subject of the Pauli Exclusion Principle.

When you’ve made up your mind, you can take a peek at the objection that’s been exercising armchair physicists around the twittersphere, and also a more technical argument supporting Prof. Cox’s interpretation from a university in the Midlands.

UPDATE: 23/2/2012 Meanwhile, over the pond, Sean Carroll is on the case.

Another day, another tutorial…

Posted in Education, The Universe and Stuff on October 13, 2011 by telescoper

Oh what fun it is to derive the Bohr radius. At least the camera on my Blackberry works!

Spin, Entanglement and Quantum Weirdness

Posted in The Universe and Stuff on October 3, 2010 by telescoper

After writing a post about spinning cricket balls a while ago I thought it might be fun to post something about the role of spin in quantum mechanics.

Spin is a concept of fundamental importance in quantum mechanics, not least because it underlies our most basic theoretical understanding of matter. The standard model of particle physics divides elementary particles into two types, fermions and bosons, according to their spin.  One is tempted to think of  these elementary particles as little cricket balls that can be rotating clockwise or anti-clockwise as they approach an elementary batsman. But, as I hope to explain, quantum spin is not really like classical spin: batting would be even more difficult if quantum bowlers were allowed!

Take the electron, for example. The spin an electron carries is quantized: a measurement of its component along any chosen axis always gives ±1/2 (in units of the reduced Planck constant ħ; all fermions have half-integer spin). In addition, according to quantum mechanics, the orientation of the spin is indeterminate until it is measured. Any particular measurement can only determine the component of spin in one direction. Let’s take as an example the case where the measuring device is sensitive to the z-component, i.e. spin in the vertical direction. An experiment on a single electron will give a definite outcome which might be either “up” or “down” relative to this axis.

However, until one makes a measurement the state of the system is not specified and the outcome is consequently not predictable with certainty; there will be a 50% probability for each possible outcome. We could write the state of the system (expressed by the spin part of its wavefunction ψ) prior to measurement in the form

|ψ> = (|↑> + |↓>)/√2

This gives me an excuse to use the rather beautiful “bra-ket” notation for the state of a quantum system, originally due to Paul Dirac. The two possibilities are “up” (↑) and “down” (↓) and they are contained within a “ket” (written |>), which is really just a shorthand for a wavefunction describing that particular aspect of the system. A “bra” would be of the form <|; for the mathematicians this represents the Hermitian conjugate of a ket. The √2 is there to ensure that the total probability of the spin being either up or down is 1, remembering that the probability is the squared modulus of the wavefunction. When we make a measurement we will get one of these two outcomes, with a 50% probability of each.
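If it helps to see the arithmetic, here is a minimal Python/NumPy sketch (my own illustration, not part of the original post) that writes the two kets as column vectors and recovers the 50-50 probabilities from the squared moduli of the amplitudes:

```python
import numpy as np

# Basis kets |up> and |down> written as column vectors (illustrative labels)
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# The superposition state |psi> = (|up> + |down>)/sqrt(2)
psi = (up + down) / np.sqrt(2)

# Born rule: the probability of each outcome is the squared modulus of <outcome|psi>
p_up = abs(np.vdot(up, psi))**2
p_down = abs(np.vdot(down, psi))**2

print(p_up, p_down)    # 0.5 0.5
print(p_up + p_down)   # 1.0 -- the 1/sqrt(2) keeps the total probability equal to 1
```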

At the point of measurement the state changes: if we get “up” it becomes purely |↑>  and if the result is  “down” it becomes |↓>. Either way, the quantum state of the system has changed from a “superposition” state described by the equation above to an “eigenstate” which must be either up or down. This means that all subsequent measurements of the spin in this direction will give the same result: the wave-function has “collapsed” into one particular state. Incidentally, the general term for a two-state quantum system like this is a qubit, and it is the basis of the tentative steps that have been taken towards the construction of a quantum computer.

Notice that what is essential about this is the role of measurement. The collapse of  ψ seems to be an irreversible process, but the wavefunction itself evolves according to the Schrödinger equation, which describes reversible, Hamiltonian changes.  To understand what happens when the state of the wavefunction changes we need an extra level of interpretation beyond what the mathematics of quantum theory itself provides,  because we are generally unable to write down a wave-function that sensibly describes the system plus the measuring apparatus in a single form.

So far this all seems rather similar to the state of a fair coin: it has a 50-50 chance of being heads or tails, but the doubt is resolved when its state is actually observed. Thereafter we know for sure what it is. But this resemblance is only superficial. A coin only has heads or tails, but the spin of an electron doesn’t have to be just up or down. We could rotate our measuring apparatus by 90° and measure the spin to the left (←) or the right (→). In this case we still have to get a result which is ±1/2 in units of ħ. It will have a 50-50 chance of being left or right that “becomes” one or the other when a measurement is made.

Now comes the real fun. Suppose we do a series of measurements on the same electron. First we start with an electron whose spin we know nothing about. In other words it is in a superposition state like that shown above. We then make a measurement in the vertical direction. Suppose we get the answer “up”. The electron is now in the eigenstate with spin “up”.

We then pass it through another measurement, but this time it measures the spin to the left or the right. The process of selecting the electron to be one with  spin in the “up” direction tells us nothing about whether the horizontal component of its spin is to the left or to the right. Theory thus predicts a 50-50 outcome of this measurement, as is observed experimentally.

Suppose we do such an experiment and establish that the electron’s spin vector is pointing to the left. Now our long-suffering electron passes into a third measurement which this time is again in the vertical direction. You might imagine that since we have already measured this component to be in the up direction, it would be in that direction again this time. In fact, this is not the case. The intervening measurement seems to “reset” the up-down component of the spin; the results of the third measurement are back at square one, with a 50-50 chance of getting up or down.
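Here is a small Monte Carlo sketch in Python/NumPy of this three-measurement sequence (my own illustration; the basis vectors, sample size and random seed are arbitrary choices). Each electron is prepared in the “up” eigenstate, passed through an intervening left/right measurement, then measured vertically again; the final vertical result comes out 50-50, just as described:

```python
import numpy as np
rng = np.random.default_rng(42)

# Eigenstates for spin measured along z (up/down) and along x (left/right)
up_z, down_z = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
down_x = np.array([1, -1], dtype=complex) / np.sqrt(2)

def measure(psi, basis):
    """Collapse psi onto one of the two basis states with Born-rule probabilities."""
    probs = np.array([abs(np.vdot(b, psi))**2 for b in basis])
    k = rng.choice(2, p=probs / probs.sum())
    return k, basis[k]

outcomes = []
for _ in range(10000):
    psi = up_z.copy()                        # already measured "up" along z
    _, psi = measure(psi, [up_x, down_x])    # intervening horizontal measurement
    k, _ = measure(psi, [up_z, down_z])      # repeat the vertical measurement
    outcomes.append(k)

print(np.mean(outcomes))   # ~0.5: the earlier z-result has been "reset"
```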

This is just one example of the kind of irreducible “randomness” that seems to be inherent in quantum theory. However, if you think this is what people mean when they say quantum mechanics is weird, you’re quite mistaken. It gets much weirder than this! So far I have focussed on what happens to the description of single particles when quantum measurements are made. Although there seem to be subtle things going on, it is not really obvious that anything happening is very different from systems in which we simply lack the microscopic information needed to make a prediction with absolute certainty.

At the simplest level, the difference is that quantum mechanics gives us a theory for the wave-function which somehow lies at a more fundamental level of description than the usual way we think of probabilities. Probabilities can be derived mathematically from the wave-function, but there is more information in ψ than there is in |ψ|²; the wave-function is a complex entity whereas the square of its amplitude is entirely real. If one can construct a system of two particles, for example, the resulting wave-function is obtained by superimposing the wave-functions of the individual particles, and probabilities are then obtained by squaring this joint wave-function. This will not, in general, give the same probability distribution as one would get by adding the one-particle probabilities because, for complex entities A and B,

A² + B² ≠ (A + B)²

in general. To put this another way, one can write any complex number in the form a+ib (real part plus imaginary part) or, generally more usefully in physics, as Re^{iθ}, where R is the amplitude and θ is called the phase. The square of the amplitude gives the probability associated with the wavefunction of a single particle, but in this case the phase information disappears; the truly unique character of quantum physics and how it impacts on the probabilities of measurements only reveals itself when the phase information is retained. This generally requires two or more particles to be involved, as the absolute phase of a single-particle state is essentially impossible to measure.
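To see in a toy calculation why the phase matters, here is a tiny Python snippet (my own example, with arbitrary illustrative amplitudes) comparing the result of adding probabilities with that of adding amplitudes first and squaring afterwards, for a few relative phases:

```python
import numpy as np

A = 1.0 / np.sqrt(2)                             # first amplitude (real, for simplicity)
for phase in (0.0, np.pi / 2, np.pi):
    B = (1.0 / np.sqrt(2)) * np.exp(1j * phase)  # second amplitude, same modulus
    sum_of_probs = abs(A)**2 + abs(B)**2         # always 1.0: the phase drops out
    prob_of_sum = abs(A + B)**2                  # depends on the relative phase
    print(f"phase = {phase:4.2f}:  |A|^2+|B|^2 = {sum_of_probs:.2f},  |A+B|^2 = {prob_of_sum:.2f}")

# phase = 0    -> |A+B|^2 = 2 (constructive interference)
# phase = pi/2 -> |A+B|^2 = 1
# phase = pi   -> |A+B|^2 = 0 (destructive interference)
```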

Finding situations where the quantum phase of a wave-function is important is not easy. It seems to be quite easy to disturb quantum systems in such a way that the phase information becomes scrambled, so testing the fundamental aspects of quantum theory requires considerable experimental ingenuity. But it has been done, and the results are astonishing.

Let us think about a very simple example of a two-component system: a pair of electrons. All we care about for the purpose of this experiment is the spin of the electrons so let us write the state of this system in terms of states such as |↑↓>, which I take to mean that the first particle has spin up and the second one has spin down. Suppose we can create this pair of electrons in a state where we know the total spin is zero. The electrons are indistinguishable from each other so until we make a measurement we don’t know which one is spinning up and which one is spinning down. The state of the two-particle system might be this:

|ψ> = (|↑↓> – |↓↑>)/√2

squaring this up would give a 50% probability of “particle one” being up and “particle two” being down and 50% for the contrary arrangement. This doesn’t look too different from the example I discussed above, but this duplex state exhibits a bizarre phenomenon known as quantum entanglement.
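As a quick check of those numbers, here is a little Python/NumPy sketch (my own, purely illustrative) that builds the two-particle state above as a Kronecker product of single-particle kets and reads off the probabilities of the four possible joint outcomes:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Two-particle basis states via tensor (Kronecker) products
ud = np.kron(up, down)     # |up, down>
du = np.kron(down, up)     # |down, up>

# The spin-zero state |psi> = (|up,down> - |down,up>)/sqrt(2)
psi = (ud - du) / np.sqrt(2)

outcomes = {"uu": np.kron(up, up), "ud": ud, "du": du, "dd": np.kron(down, down)}
for label, state in outcomes.items():
    print(label, abs(np.vdot(state, psi))**2)
# uu 0.0, ud 0.5, du 0.5, dd 0.0 -- always opposite spins, each arrangement 50%
```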

Suppose we start the system out in this state and then separate the two electrons without disturbing their spin states. Before making a measurement we really can’t say what the spins of the individual particles are: they are in a mixed state that is neither up nor down but a combination of the two possibilities. When they’re up, they’re up. When they’re down, they’re down. But when they’re only half-way up they’re in an entangled state.

If one of them passes through a vertical spin-measuring device we will then know that particle is definitely spin-up or definitely spin-down. Since we know the total spin of the pair is zero, then we can immediately deduce that the other one must be spinning in the opposite direction because we’re not allowed to violate the law of conservation of angular momentum: if Particle 1 turns out to be spin-up, Particle 2  must be spin-down, and vice versa. It is known experimentally that passing two electrons through identical spin-measuring gadgets gives  results consistent with this reasoning. So far there’s nothing so very strange in this.

The problem with entanglement lies in understanding what happens in reality when a measurement is done. Suppose we have two observers, Dick and Harry, each equipped with a device that can measure the spin of an electron in any direction they choose. Particle 1 emerges from the source and travels towards Dick whereas particle 2 travels in Harry’s direction. Before any measurement, the system is in an entangled superposition state. Suppose Dick decides to measure the spin of electron 1 in the z-direction and finds it spinning up. Immediately, the wave-function for electron 2 collapses into the down direction. If Dick had instead decided to measure spin in the left-right direction and found it “left”, a similar collapse would have occurred for particle 2, but this time putting it in the “right” direction.

Whatever Dick does, the result of any corresponding measurement made by Harry has a definite outcome – the opposite to Dick’s result. So Dick’s decision whether to make a measurement up-down or left-right instantaneously transmits itself to Harry who will find a consistent answer, if he makes the same measurement as Dick.

If, on the other hand, Dick makes an up-down measurement but Harry measures left-right then Dick’s answer has no effect on Harry, who has a 50% chance of getting “left” and 50% chance of getting right. The point is that whatever Dick decides to do, it has an immediate effect on the wave-function at Harry’s position; the collapse of the wave-function induced by Dick immediately collapses the state measured by Harry. How can particle 1 and particle 2 communicate in this way?

This riddle is the core of a thought experiment by Einstein, Podolsky and Rosen in 1935 which has deep implications for the nature of the information that is supplied by quantum mechanics. The essence of the EPR paradox is that each of the two particles – even if they are separated by huge distances – seems to know exactly what the other one is doing. Einstein called this “spooky action at a distance” and went on to point out that this type of thing simply could not happen in the usual calculus of random variables. His argument was later tightened considerably by John Bell in a form now known as Bell’s theorem.

To see how Bell’s theorem works, consider the following roughly analogous situation. Suppose we have two suspects in prison, say Dick and Harry (Tom grassed them up and has been granted immunity from prosecution). The two are taken apart to separate cells for individual questioning. We can allow them to use notes, electronic organizers, tablets of stone or anything to help them remember any agreed strategy they have concocted, but they are not allowed to communicate with each other once the interrogation has started. Each question they are asked has only two possible answers – “yes” or “no” – and there are only three possible questions. We can assume the questions are asked independently and in a random order to the two suspects.

When the questioning is over, the interrogators find that whenever they asked the same question, Dick and Harry always gave the same answer, but when the question was different they only gave the same answer 25% of the time. What can the interrogators conclude?

The answer is that Dick and Harry must be cheating. Either they have seen the question list ahead of time or are able to communicate with each other without the interrogator’s knowledge. If they always give the same answer when asked the same question, they must have agreed on answers to all three questions in advance. But when they are asked different questions then, because each question has only two possible responses, by following this strategy it must turn out that at least two of the three prepared answers – and possibly all of them – must be the same for both Dick and Harry. This puts a lower limit on the probability of them giving the same answer to different questions. I’ll leave it as an exercise to the reader to show that the probability of coincident answers to different questions in this case must be at least 1/3.

This is a simple illustration of what in quantum mechanics is known as a Bell inequality. Dick and Harry can only keep the number of such false agreements down to the measured level of 25% by cheating.

This example is directly analogous to the behaviour of the entangled quantum state described above under repeated interrogations about its spin along three different directions. The result of each measurement can only be either “yes” or “no”. Each individual answer (for each particle) is equally probable in this case; the same question always produces the same answer for both particles, but the probability of agreement for two different questions is indeed ¼, and not ⅓ or larger as it would have to be if the answers had been fixed in advance. For example, one could ask particle 1 “are you spinning up?” and particle 2 “are you spinning to the right?”. The probability of both producing the answer “yes” is 25% according to quantum theory, but the overall level of agreement between different questions would have to be higher if the particles weren’t cheating in some way.
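For concreteness, here is a small Python/NumPy sketch of the quantum side of this (my own construction, not from the post). It builds the spin-zero state from above and computes the probability that the two reported answers agree, adopting the convention that particle 2’s raw result is flipped so that identical questions always agree, and choosing three measurement axes 120° apart; different questions then agree with probability 1/4, below the classical limit of 1/3:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(theta, sign):
    """Projector onto the +/-1 outcome of a spin measurement along an axis
    tilted by angle theta from the z-axis (in the x-z plane)."""
    n_dot_sigma = np.cos(theta) * sz + np.sin(theta) * sx
    return (I2 + sign * n_dot_sigma) / 2

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)   # total spin zero

def p_agree(theta_a, theta_b):
    """Probability the reported answers agree; particle 2's raw result is flipped
    so that asking both particles the same question always gives agreement."""
    p = 0.0
    for s in (+1, -1):
        op = np.kron(proj(theta_a, s), proj(theta_b, -s))    # raw results opposite
        p += np.real(np.vdot(psi, op @ psi))
    return p

axes = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]      # three "questions", 120 degrees apart
print(p_agree(axes[0], axes[0]))   # 1.0  -- same question, same answer every time
print(p_agree(axes[0], axes[1]))   # 0.25 -- different questions: below the classical 1/3
```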

Probably the most famous experiment of this type was done in the 1980s, by Alain Aspect and collaborators, involving entangled pairs of polarized photons (which are bosons), rather than electrons, primarily because these are easier to prepare.

The implications of quantum entanglement greatly troubled Einstein long before the EPR paradox. Indeed the interpretation of single-particle quantum measurement (which has no entanglement) was already troublesome. Just exactly how does the wave-function relate to the particle? What can one really say about the state of the particle before a measurement is made? What really happens when a wave-function collapses? These questions take us into philosophical territory that I have set foot in already; the difficult relationship between epistemological and ontological uses of probability theory.

Thanks largely to the influence of Niels Bohr, in the relatively early stages of quantum theory a standard approach to this question was adopted. In what became known as the  Copenhagen interpretation of quantum mechanics, the collapse of the wave-function as a result of measurement represents a real change in the physical state of the system. Before the measurement, an electron really is neither spinning up nor spinning down but in a kind of quantum purgatory. After a measurement it is released from limbo and becomes definitely something. What collapses the wave-function is something unspecified to do with the interaction of the particle with the measuring apparatus or, in some extreme versions of this doctrine, the intervention of human consciousness.

I find it amazing that such a view could have been held so seriously by so many highly intelligent people. Schrödinger hated this concept so much that he invented a thought-experiment of his own to poke fun at it. This is the famous “Schrödinger’s cat” paradox. I’ve sent Columbo out of the room while I describe this.

In a closed box there is a cat. Attached to the box is a device which releases poison into the box when triggered by a quantum-mechanical event, such as radiation produced by the decay of a radioactive substance. One can’t tell from the outside whether the poison has been released or not, so one doesn’t know whether the cat is alive or dead. When one opens the box, one learns the truth. Whether the cat has collapsed or not, the wave-function certainly does. At this point one is effectively making a quantum measurement so the wave-function of the cat is either “dead” or “alive” but before opening the box it must be in a superposition state. But do we really think the cat is neither dead nor alive? Isn’t it certainly one or the other, but that our lack of information prevents us from knowing which? And if this is true for a macroscopic object such as a cat, why can’t it be true for a microscopic system, such as that involving just a pair of electrons?

As I learned at a talk by the Nobel prize-winning physicist Tony Leggett – who has been collecting data on this recently – most physicists think Schrödinger’s cat is definitely alive or dead before the box is opened. However, most physicists don’t believe that an electron definitely spins either up or down before a measurement is made. But where does one draw the line between the microscopic and macroscopic descriptions of reality? If quantum mechanics works for 1 particle, does it work also for 10, 1000? Or, for that matter, 10²³?

Most modern physicists eschew the Copenhagen interpretation in favour of one or other of two modern interpretations. One involves the concept of quantum decoherence, which is basically the idea that the phase information that is crucial to the underlying logic of quantum theory can be destroyed by the interaction of a microscopic system with one of larger size. In effect, this hides the quantum nature of macroscopic systems and allows us to use a more classical description for complicated objects. This certainly happens in practice, but this idea seems to me merely to defer the problem of interpretation rather than solve it. The fact that a large and complex system tends to hide its quantum nature from us does not in itself give us the right to have different interpretations of the wave-function for big things and for small things.

Another trendy way to think about quantum theory is the so-called Many-Worlds interpretation. This asserts that our Universe comprises an ensemble – sometimes called a multiverse – and  probabilities are defined over this ensemble. In effect when an electron leaves its source it travels through infinitely many paths in this ensemble of possible worlds, interfering with itself on the way. We live in just one slice of the multiverse so at the end we perceive the electron winding up at just one point on our screen. Part of this is to some extent excusable, because many scientists still believe that one has to have an ensemble in order to have a well-defined probability theory. If one adopts a more sensible interpretation of probability then this is not actually necessary; probability does not have to be interpreted in terms of frequencies. But the many-worlds brigade goes even further than this. They assert that these parallel universes are real. What this means is not completely clear, as one can never visit parallel universes other than our own …

It seems to me that none of these interpretations is at all satisfactory and, in the gap left by the failure to find a sensible way to understand “quantum reality”, there has grown a pathological industry of pseudo-scientific gobbledegook. Claims that entanglement is consistent with telepathy, that parallel universes are scientific truths, and that consciousness is a quantum phenomenon abound in the New Age sections of bookshops but have no rational foundation. Physicists may complain about this, but they have only themselves to blame.

But there is one remaining possibility for an interpretation that has been unfairly neglected by quantum theorists despite – or perhaps because of – the fact that it is the closest of all to common sense. This is the view that quantum mechanics is just an incomplete theory, and the reason it produces only a probabilistic description is that it does not provide sufficient information to make definite predictions. This line of reasoning has a distinguished pedigree, but fell out of favour after the arrival of Bell’s theorem and related issues. Early ideas on this theme revolved around the idea that particles could carry “hidden variables” whose behaviour we could not predict because our fundamental description is inadequate. In other words, two apparently identical electrons are not really identical; something we cannot directly measure marks them apart. If this works then we can simply use probability theory to deal with inferences made on the basis of information that’s not sufficient for absolute certainty.

After Bell’s work, however, it became clear that these hidden variables must possess a very peculiar property if they are to describe our quantum world. The property of entanglement requires the hidden variables to be non-local. In other words, two electrons must be able to communicate their values faster than the speed of light. Putting this conclusion together with relativity leads one to deduce that the chain of cause and effect must break down: hidden variables are therefore acausal. This is such an unpalatable idea that it seems to many physicists to be even worse than the alternatives, but to me it seems entirely plausible that the causal structure of space-time must break down at some level. On the other hand, not all “incomplete” interpretations of quantum theory involve hidden variables.

One can think of this category of interpretation as involving an epistemological view of quantum mechanics. The probabilistic nature of the theory has, in some sense, a subjective origin. It represents deficiencies in our state of knowledge. The alternative Copenhagen and Many-Worlds views I discussed above differ greatly from each other, but each is characterized by the mistaken desire to put quantum mechanics – and, therefore, probability –  in the realm of ontology.

The idea that quantum mechanics might be incomplete  (or even just fundamentally “wrong”) does not seem to me to be all that radical. Although it has been very successful, there are sufficiently many problems of interpretation associated with it that perhaps it will eventually be replaced by something more fundamental, or at least different. Surprisingly, this is a somewhat heretical view among physicists: most, including several Nobel laureates, seem to think that quantum theory is unquestionably the most complete description of nature we will ever obtain. That may be true, of course. But if we never look any deeper we will certainly never know…

With the gradual re-emergence of Bayesian approaches in other branches of physics a number of important steps have been taken towards the construction of a truly inductive interpretation of quantum mechanics. This programme sets out to understand  probability in terms of the “degree of belief” that characterizes Bayesian probabilities. Recently, Christopher Fuchs, amongst others, has shown that, contrary to popular myth, the role of probability in quantum mechanics can indeed be understood in this way and, moreover, that a theory in which quantum states are states of knowledge rather than states of reality is complete and well-defined. I am not claiming that this argument is settled, but this approach seems to me by far the most compelling and it is a pity more people aren’t following it up…



Get thee behind me, Plato

Posted in The Universe and Stuff on September 4, 2010 by telescoper

The blogosphere, even the tiny little bit of it that I know anything about, has a habit of summoning up strange coincidences between things so, following EM Forster’s maxim “only connect”, I thought I’d spend a lazy Saturday lunchtime trying to draw a couple of them together.

A few days ago I posted what was intended to be a fun little item about the wave-particle duality in quantum mechanics. Basically, what I was trying to say is that there’s no real problem about thinking of an electron as behaving sometimes like a wave and sometimes like a particle because, in reality (whatever that is), it is neither. “Particle” and “wave” are useful abstractions but they are not in an exact one-to-one correspondence with natural phenomena.

Before going on I should point out that the vast majority of physicists are well aware of the distinction between, say, the “theoretical” electron and whatever the “real thing” is. We physicists tend to live in theory space rather than in the real world, so we tend to teach physics by developing the formal mathematical properties of the “electron” (or “electric field”) or whatever, and working out what experimental consequences these entail in certain situations. Generally speaking, the theory works so well in practice that we often talk about the theoretical electron that exists in the realm of mathematics and the electron-in-itself as if they are one and the same thing. As long as this is just a pragmatic shorthand, it’s fine. However, I think we need to be careful to keep this sort of language under control. Pushing theoretical ideas out into the ontological domain is a dangerous game. What is known? is safer ground than what is there? Physics – especially quantum physics – is best understood as a branch of epistemology.

Anyway, my little piece sparked a number of interesting comments on Reddit, including a thread that went along the lines “of course an electron is neither a particle nor a wave, it’s actually a spin-1/2 projective representation of the Lorentz Group on a Hilbert space”. That description, involving more sophisticated mathematical concepts than those involved in bog-standard quantum mechanics, undoubtedly provides a more complete account of natural phenomena associated with electrons and electric fields, but I’ll stick to my guns and maintain that it still introduces a deep confusion to assert that the electron “is” something mathematical, whether that’s a “spin-1/2 projective representation” or a complex function or anything else. That’s saying something physical is something mathematical. Both entities have some sort of existence, of course, but not the same sort, and the one cannot “be” the other. “Certain aspects of an electron’s behaviour can be described by certain mathematical structures” is as far as I’m prepared to go.

Pushing deeper than quantum mechanics, into the realm of quantum field theory, there was the following contribution:

The electron field is a quantum field as described in quantum field theories. A quantum field covers all space time and in each point the quantum field is in some state, it could be the ground state or it could be an excitation above the ground state. The excitations of the electron field are the so-called electrons. The mathematical object that describes the electron field possesses, amongst others, certain properties that deal with transformations of the space-time coordinates. If, when performing a transformation of the space-time coordinates, the mathematical object changes in such a way that is compatible with the physics of the quantum field, then one says that the mathematical object of the field (also called field) is represented by a spin 1/2 (in the electron case) representation of a certain group of transformations (the Poincaré group, in this example). I understand your quibbling, it seems natural to think that “spin 1/2” is a property of the mathematical tool to describe something, not the something itself. If you press on with that distinction however, you should be utterly puzzled of why physics should follow, step by step, the path led by mathematics.

For example, one speaks about the “invariance under the local action of the group SU(3)” as a fundamental property of the fields that feel the strong nuclear force. This has two implications: the mathematical object that represents quarks must have 3 “strong” degrees of freedom (the so-called color) and there must be 3² − 1 = 8 carriers of the force (the gluons), because the group SU(N) has N² − 1 generators. And this is precisely what is observed.

So an extremely abstract mathematical principle correctly accounts for the dynamics of an immensely large quantity of phenomena. Why then does physics follow the derivations of mathematics if its true nature is somewhat different?

No doubt this line of reasoning is why so many theoretical physicists seem to adopt a view of the world that regards mathematical theories as being, as it were,  “built into” nature rather than being things we humans invented to describe nature. This is a form of Platonic realism.

I’m no expert on matters philosophical, but I’d say that I find this stance very difficult to understand, although I am prepared to go part of the way. I used to work in a Mathematics department many years ago and one of the questions that came up at coffee time occasionally was “Is mathematics invented or discovered?”. In my experience, pure mathematicians always answered “discovered” while others (especially astronomers) said “invented”. For what it’s worth, I think mathematics is a bit of both. Of course we can invent mathematical objects, endow them with certain attributes and prescribe rules for manipulating them and combining them with other entities. However, once invented, anything that is worked out from them is “discovered”. In fact, one could argue that all mathematical theorems etc arising within such a system are simply tautological expressions of the rules you started with.

Of course physicists use mathematics to construct models that describe natural phenomena. Here the process is different from mathematical discovery, as what we’re trying to do is work out which, if any, of the possible theories is actually the one that accounts best for whatever empirical data we have. While it’s true that this programme requires us to accept that there are natural phenomena that can be described in mathematical terms, I do not accept that it requires us to accept that nature “is” mathematical. It requires that there be some sort of law governing some aspects of nature’s behaviour, but not that such laws account for everything.

Of course, mathematical ideas have been extremely successful in helping physicists build new physical descriptions of reality. On the other hand, however, there is a great deal of mathematical formalism that is not useful in this way. Physicists have had to select those mathematical objects that we can use to represent natural phenomena, like selecting words from a dictionary. The fact that we can assemble a sentence using words from the Oxford English Dictionary that conveys some information about something we see doesn’t mean that what we see “is” English. A whole load of grammatically correct sentences can be constructed that don’t make any sense in terms of observable reality, just as there is a great deal of mathematics that is internally self-consistent but makes no contact with physics.

Moreover, to the person whose quote I commented on above, I’d agree that the properties of the SU(3) gauge group have indeed accounted for many phenomena associated with the strong interaction, which is why the standard model of particle physics contains 8 gluons and quarks carrying a three-fold colour charge as described by quantum chromodynamics. Leaving aside the fact that QCD is such a terribly difficult theory to work with – in practice it involves nightmarish lattice calculations on a scale to make even the most diehard enthusiast cringe – what I would ask is whether this description is in any case sufficient for us to assert that it describes “true nature”? Many physicists will no doubt disagree with me, but I don’t think so. It’s a map, not the territory.

So why am I boring you all with this rambling dissertation? Well, it brings me to my other post – about Stephen Hawking’s comments about God. I don’t want to go over that issue again – frankly, I was bored with it before I’d finished writing my own blog post – but it does relate to the bee that I often find in my bonnet about the tendency of many modern theoretical physicists to assign the wrong category of existence to their mathematical ideas. The prime example that springs to my mind is the multiverse. I can tolerate certain versions of the multiverse idea, in fact. What I can’t swallow, however, is the identification of the possible landscape of string theory vacua – essentially a huge set of possible solutions of a complicated set of mathematical equations – with a realised set of “parallel universes”. That particular ontological step just seems absurd to me.

I’m just about done, but one more thing I’d say to finish with concerns the (admittedly overused) metaphor of maps and territories. Maps are undoubtedly useful in helping us find our way around, but we have to remember that there are always things that aren’t on the map at all. If we rely too heavily on one, we might miss something of great interest that the cartographer didn’t think important. Likewise if we fool ourselves into thinking our descriptions of nature are so complete that they “are” all that nature is, then we might miss the road to a better understanding.



Dragons and Unicorns

Posted in Education, The Universe and Stuff on August 30, 2010 by telescoper

When I was an undergraduate I was often told by lecturers that I should find quantum mechanics very difficult, because it is unlike the classical physics I had learned about up to that point. The difference – or so I was informed – was that classical systems were predictable, but quantum systems were not. For that reason the microscopic world could only be described in terms of probabilities. I was a bit confused by this, because I already knew that many classical systems were predictable in principle, but not really in practice. I blogged about this some time ago, in fact. It was only when I had studied theory for a long time – almost three years – that I realised what was the correct way to be confused about it. In short, quantum probability is a very strange kind of probability that displays many peculiarities and subtleties  that one doesn’t see in the kind of systems we normally think of as “random”, such as coin-tossing or roulette wheels.

To illustrate how curious the quantum universe is we have to look no further than the very basic level of quantum theory, as formulated by the founder of wave mechanics, Erwin Schrödinger. Schrödinger was born in 1887 into an affluent Austrian family made rich by a successful oilcloth business run by his father. He was educated at home by a private tutor before going to the University of Vienna where he obtained his doctorate in 1910. During the First World War he served in the artillery, but was posted to an isolated fort where he found lots of time to read about physics. After the end of hostilities he travelled around Europe and started a series of inspired papers on the subject now known as wave mechanics; his first work on this topic appeared in 1926. He succeeded Planck as Professor of Theoretical Physics in Berlin, but left for Oxford when Hitler took control of Germany in 1933. He left Oxford in 1936 to return to Austria but fled when the Nazis seized the country and he ended up in Dublin, at the Institute for Advanced Studies which was created especially for him by the Irish Taoiseach, Eamon de Valera. He remained there happily for 17 years before returning to his native land at the University of Vienna. Sadly, he became ill shortly after arriving there and died in 1961.

Schrödinger was a friendly and informal man who got on extremely well with colleagues and students alike. He was also a bit scruffy even to the extent that he sometimes had trouble getting into major scientific conferences, such as the Solvay conferences which are exclusively arranged for winners of the Nobel Prize. Physicists have never been noted for their sartorial elegance, but Schrödinger must have been an extreme case.

The theory of wave mechanics arose from work published in 1924 by de Broglie who had suggested that every particle has a wave somehow associated with it, and the overall behaviour of a system resulted from some combination of its particle-like and wave-like properties. What Schrödinger did was to write down an equation, involving a Hamiltonian describing particle motion of the form I have discussed before, but written in such a way as to resemble the equation used to describe wave phenomena throughout physics. The resulting mathematical form for a single particle is

i\hbar\frac{\partial \Psi}{\partial t} = \hat{H}\Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi,

in which the term \Psi  is called the wave-function of the particle. As usual, the Hamiltonian H consists of two parts: one describes the kinetic energy (the first term on the right hand side) and the second its potential energy represented by V. This equation – the Schrödinger equation – is one of the most important in all physics.
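To see the equation in action, here is a minimal numerical sketch in Python/NumPy (my own illustration, not from the post), which propagates a free Gaussian wave packet with the split-step Fourier method in units where ħ = m = 1; the grid, time step and initial packet are arbitrary choices:

```python
import numpy as np

# Split-step Fourier propagation of the time-dependent Schrodinger equation
# for a free particle, in units with hbar = m = 1 (illustrative choices throughout).
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Gaussian wave packet centred at x = -20, moving to the right with momentum ~1
psi = np.exp(-(x + 20.0)**2 / 4.0) * np.exp(1j * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

V = np.zeros_like(x)            # free particle; add a barrier here to see scattering
dt, steps = 0.05, 400
for _ in range(steps):
    psi *= np.exp(-1j * V * dt / 2)                                    # potential half-step
    psi = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(psi))   # kinetic step
    psi *= np.exp(-1j * V * dt / 2)                                    # potential half-step

prob = np.abs(psi)**2
print("total probability:", np.sum(prob) * dx)   # stays ~1: the evolution is unitary
print("mean position:", np.sum(x * prob) * dx)   # has drifted from -20 towards 0
```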

At the time Schrödinger was developing his theory of wave mechanics it had a rival, called matrix mechanics, developed by Werner Heisenberg and others. Paul Dirac later proved that wave mechanics and matrix mechanics were mathematically equivalent; these days physicists generally use whichever of these two approaches is most convenient for particular problems.

Schrödinger’s equation is important historically because it brought together lots of bits and pieces of ideas connected with quantum theory into a single coherent descriptive framework. For example, in 1911 Niels Bohr had begun looking at a simple theory for the hydrogen atom which involved a nucleus consisting of a positively charged proton with a negatively charged electron moving around it in a circular orbit. According to standard electromagnetic theory this picture has a flaw in it: the electron is accelerating and consequently should radiate energy. The orbit of the electron should therefore decay rather quickly.

Bohr hypothesized that special states of this system were actually stable; these states were ones in which the orbital angular momentum of the electron was an integer multiple of Planck’s constant. This simple idea endows the hydrogen atom with a discrete set of energy levels which, as Bohr showed in 1913, were consistent with the appearance of sharp lines in the spectrum of light emitted by hydrogen gas when it is excited by, for example, an electrical discharge. The calculated positions of these lines were in good agreement with measurements made by Rydberg so the Bohr theory was in good shape. But where did the quantised angular momentum come from?

The Schrödinger equation describes some form of wave; its solutions \Psi(\vec{x},t) are generally oscillating functions of position and time. If we want it to describe a stable state then we need a solution whose observable properties do not vary with time, so we look for stationary states, in which the time-dependence is a simple oscillation at a single frequency and the equation reduces to its time-independent form \hat{H}\psi = E\psi. The hydrogen atom is a bit like a solar system with only one planet going around a star so we have circular symmetry, which simplifies things a lot. The solutions we get are waves, and the mathematical task is to find waves that fit along a circular orbit just like standing waves on a circular string. Immediately we see why the solution must be quantized. To exist on a circle the wave can’t just have any wavelength; it has to fit into the circumference of the circle in such a way that it winds up at the same value after a round trip. In Schrödinger’s theory the quantisation of orbits is not just an ad hoc assumption, it emerges naturally from the wave-like nature of the solutions to his equation.
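To make the numbers concrete, here is a short Python sketch (my own illustration, using scipy’s standard physical constants) of where the standing-wave condition leads: requiring n de Broglie wavelengths to fit around the orbit is equivalent to Bohr’s rule m v r = nħ, and combining that with the Coulomb force balance gives the familiar radii and energy levels:

```python
import numpy as np
from scipy import constants as c

def bohr(n):
    """Radius and energy of the n-th allowed orbit of hydrogen in the Bohr picture."""
    # Standing-wave condition n*lambda = 2*pi*r with lambda = h/p  =>  m*v*r = n*hbar.
    # Combined with the force balance m*v**2/r = e**2/(4*pi*eps0*r**2) this gives:
    r = n**2 * 4 * np.pi * c.epsilon_0 * c.hbar**2 / (c.m_e * c.e**2)
    E = -c.m_e * c.e**4 / (8 * c.epsilon_0**2 * c.h**2 * n**2)
    return r, E / c.e          # energy converted from joules to electron-volts

for n in (1, 2, 3):
    r, E_eV = bohr(n)
    print(f"n = {n}:  r = {r:.3e} m,  E = {E_eV:.2f} eV")
# n = 1 gives r ~ 5.29e-11 m (the Bohr radius) and E ~ -13.6 eV
```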

The Schrödinger equation can be applied successfully to systems which are much more complicated than the hydrogen atom, such as complex atoms with many electrons orbiting the nucleus and interacting with each other. In this context, this description is the basis of most work in theoretical chemistry. But it also poses very deep conceptual challenges, chiefly about how the notion of a “particle” relates to the “wave” that somehow accompanies it.

To illustrate the riddle, consider a very simple experiment where particles of some type (say electrons, but it doesn’t really matter; similar experiments can be done with photons or other particles) emerge from a source on the left, pass through two slits in the middle and are detected at a screen on the right.

In a purely “particle” description we would think of the electrons as little billiard balls being fired from the source. Each one then travels along a well-defined path, somehow interacts with the screen and ends up in some position on the detector. On the other hand, in a “wave” description we would imagine a wave front emerging from the source, being diffracted by the screen and ending up as some kind of interference pattern at the detector. This is what we see with light, for example, in the phenomenon known as Young’s fringes.

In quantum theory we have to think of the system as being in some sense both a wave and a particle. This is forced on us by the fact that we actually observe a pattern of “fringes” at the detector, indicating wave-like interference, but we also can detect the arrival of individual electrons as little dots. Somehow the propensity of electrons to arrive in positions on the screen is controlled by an element of waviness, but they manage to retain some aspect of their particleness. Moreover, one can turn the source intensity down to a level where there is only ever one electron in the experiment at any time. One sees the dots arrive one by one on the detector, but adding them up over a long time still yields a pattern of fringes.

Curiouser and curiouser, said Alice.

Eventually the community of physicists settled on a party line that most still stick to: that the wave-function controls the probability of finding an electron at some position when a measurement is made. In fact the mathematical description of wave phenomena favoured by physicists involves complex numbers, so at each point in space and time \Psi is a complex number of the form \Psi= a+ib, where i =\sqrt{-1}; the corresponding probability is given by |\Psi|^2=a^2+b^2. This protocol, however, forbids one to say anything about the state of the particle before it is measured. It is delocalized, not being definitely located anywhere, but only possessing a probability of being at any particular place within the apparatus. One can’t even say which of the two slits it passes through. Somehow, it manages to pass through both slits. Or at least some of its wave-function does.
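Here is a minimal Python sketch of that recipe applied to the two-slit set-up (my own toy geometry and wavelength, not taken from the post): the complex amplitudes for the two possible paths are added and the fringe pattern comes from the squared modulus:

```python
import numpy as np

wavelength = 500e-9            # metres (illustrative)
d, L = 50e-6, 1.0              # slit separation and slit-to-screen distance (illustrative)
k = 2 * np.pi / wavelength

x = np.linspace(-0.02, 0.02, 9)            # a few positions on the detector screen
r1 = np.hypot(L, x - d / 2)                # path length via slit 1
r2 = np.hypot(L, x + d / 2)                # path length via slit 2
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # add the amplitudes for both paths
print(np.round(np.abs(psi)**2, 2))         # intensity oscillates between ~0 and ~4: fringes
```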

I’m not going to go into the various philosophical arguments about the interpretation of quantum probabilities here, but I will pass on an analogy that helped me come to grips with the idea that an electron can behave in some respects like a wave and in others like a particle. At first thought, this seems a troubling paradox but it only appears so if you insist that our theoretical ideas are literal representations of what happens in reality. I think it’s much more sensible to treat the mathematics as a kind of map or sketch that is useful for us to find our way around nature rather than confusing it with nature itself. Neither particles nor waves really exist in the quantum world – they’re just abstractions we use to try to describe as much as we can of what is going on. The fact that it doesn’t work perfectly shouldn’t surprise us, as there are undoubtedly more things in Heaven and Earth than are dreamt of in our philosophy.

Imagine a mediaeval traveller, the first from your town to go to Africa. On his journeys he sees a rhinoceros, a bizarre creature that is unlike anything he’s ever seen before. Later on, when he gets back, he tries to describe the animal to those at home who haven’t seen it.  He thinks very hard. Well, he says, it’s got a long horn on its head, like a unicorn, and it’s got thick leathery skin, like a dragon. Neither dragons nor unicorns exist in nature, but they’re abstractions that are quite useful in conveying something about what a rhinoceros is like.

It’s the same with electrons. Except they don’t have horns and leathery skin. Obviously.



A Little Bit of Quantum

Posted in The Universe and Stuff on January 16, 2010 by telescoper

I’m trying to avoid getting too depressed by writing about the ongoing funding crisis for physics in the United Kingdom, so by way of a distraction I thought I’d post something about physics itself rather than the way it is being torn apart by short-sighted bureaucrats. A number of Cardiff physics students are currently looking forward (?) to their Quantum Mechanics examinations next week, so I thought I’d try to remind them of what a fascinating subject it really is…

The development of the kinetic theory of gases in the latter part of the 19th Century represented the culmination of a mechanistic approach to Natural Philosophy that had begun with Isaac Newton two centuries earlier. So successful had this programme been by the turn of the 20th century that it was a fairly common view among scientists of the time that there was virtually nothing important left to be “discovered” in the realm of natural philosophy. All that remained were a few bits and pieces to be tidied up, but nothing could possibly shake the foundations of Newtonian mechanics.

But shake they certainly did. In 1905 the young Albert Einstein – surely the greatest physicist of the 20th century, if not of all time – single-handedly overthrew the underlying basis of Newton’s world with the introduction of his special theory of relativity. Although it took some time before this theory was tested experimentally and gained widespread acceptance, it blew an enormous hole in the mechanistic conception of the Universe by drastically changing the conceptual underpinning of Newtonian physics. Out were the “commonsense” notions of absolute space and absolute time, and in was a more complex “space-time” whose measurable aspects depended on the frame of reference of the observer.

Relativity, however, was only half the story. Another, perhaps even more radical shake-up was also in train at the same time. Although Einstein played an important role in this advance too, it led to a theory he was never comfortable with: quantum mechanics. A hundred years on, the full implications of this view of nature are still far from understood, so maybe Einstein was correct to be uneasy.

The birth of quantum mechanics partly arose from the developments of kinetic theory and statistical mechanics that I discussed briefly in a previous post. Inspired by such luminaries as James Clerk Maxwell and Ludwig Boltzmann, physicists had inexorably increased the range of phenomena that could be brought within the descriptive framework furnished by Newtonian mechanics and the new modes of statistical analysis that they had founded. Maxwell had also been responsible for another major development in theoretical physics: the unification of electricity and magnetism into a single system known as electromagnetism. Out of this mathematical tour de force came the realisation that light was a form of electromagnetic wave, an oscillation of electric and magnetic fields through apparently empty space.  Optical light forms just part of the possible spectrum of electromagnetic radiation, which ranges from very long wavelength radio waves at one end to extremely short wave gamma rays at the other.

With Maxwell’s theory in hand, it became possible to think about how atoms and molecules might exchange energy and reach equilibrium states not just with each other, but with light. Everyday experience suggests that hot things tend to give off radiation, and a number of experiments – by Wilhelm Wien and others – had shown that there were well-defined rules that determined what type of radiation (i.e. what wavelength) and how much of it were given off by a body held at a certain temperature. In a nutshell, hotter bodies give off more radiation (proportional to the fourth power of their temperature), and the peak wavelength is shorter for hotter bodies. At room temperature, bodies give off infra-red radiation; stars have surface temperatures measured in thousands of degrees so they give off predominantly optical and ultraviolet light. Our Universe is suffused with microwave radiation corresponding to just a few degrees above absolute zero.

The name given to a body in thermal equilibrium with a bath of radiation is a “black body”, not because it is black – the Sun is quite a good example of a black body and it is not black at all – but because it is simultaneously a perfect absorber and perfect emitter of radiation. In other words, it is a body which is in perfect thermal contact with the light it emits. Surely it would be straightforward to apply classical Maxwell-style statistical reasoning to a black body at some temperature?

It did indeed turn out to be straightforward, but the result was a catastrophe. One can see the nature of the disaster very straightforwardly by taking a simple idea from classical kinetic theory. In many circumstances there is a “rule of thumb” that applies to systems in thermal equilibrium. Roughly speaking, the idea is that energy becomes divided equally between every possible “degree of freedom” the system possesses. For example, if a box of gas consists of particles that can move in three dimensions then, on average, each component of the velocity of a particle will carry the same amount of kinetic energy. Molecules are able to rotate and vibrate as well as move about inside the box, and the equipartition rule can apply to these modes too.

Maxwell had shown that light was essentially a kind of vibration, so it appeared obvious that what one had to do was to assign the same amount of energy to each possible vibrational degree of freedom of the ambient electromagnetic field. Lord Rayleigh and Sir James Jeans did this calculation and found that the amount of energy radiated by a black body as a function of wavelength should vary in proportion to the temperature T and inversely as the fourth power of the wavelength λ, as shown in the diagram for an example temperature of 5000K.

Even without doing any detailed experiments it is clear that this result just has to be nonsense. The Rayleigh-Jeans law predicts that even very cold bodies should produce infinite amounts of radiation at infinitely short wavelengths, i.e. in the ultraviolet. It also predicts that the total amount of radiation – the area under the curve in the above figure – is infinite. Even a very cold body should emit infinitely intense electromagnetic radiation. Infinity is bad.

Experiments show that the Rayleigh-Jeans law does work at very long wavelengths but in reality the radiation reaches a maximum (at a wavelength that depends on the temperature) and then declines at short wavelengths, as shown also in the above Figure. Clearly something is very badly wrong with the reasoning here, although it works so well for atoms and molecules.

It wouldn’t be accurate to say that physicists all stopped in their tracks because of this difficulty. It is amazing the extent to which people are able to carry on despite the presence of obvious flaws in their theory. It takes a great mind to realise when everyone else is on the wrong track, and a considerable time for revolutionary changes to become accepted. In the meantime, the run-of-the-mill scientist tends to carry on regardless.

The resolution of this particular fundamental conundrum is credited to Karl Ernst Ludwig “Max” Planck, who was born in 1858. He was the son of a law professor, and himself went to university at Berlin and Munich, receiving his doctorate in 1880. He became professor at Kiel in 1885, and moved to Berlin in 1888. In 1930 he became president of the Kaiser Wilhelm Institute, but resigned in 1937 in protest at the behaviour of the Nazis towards Jewish scientists. His life was blighted by family tragedies: his second son died in the First World War; both daughters died in childbirth; and his first son was executed in 1944 for his part in a plot to assassinate Adolf Hitler. After the Second World War the institute was named the Max Planck Institute, and Planck was reappointed director. He died in 1947, by then such a famous scientist that his likeness appeared on the two Deutschmark coin issued in 1958.

Planck had taken some ideas from Boltzmann’s work but applied them in a radically new way. The essence of his reasoning was that the ultraviolet catastrophe basically arises because Maxwell’s electromagnetic field is a continuous thing and, as such, appears to have an infinite variety of ways in which it can absorb energy. When you are allowed to store energy in whatever way you like in all these modes, and add them all together you get an infinite power output. But what if there was some fundamental limitation in the way that an atom could exchange energy with the radiation field? If such a transfer can only occur in discrete lumps or quanta – rather like “atoms” of radiation – then one could eliminate the ultraviolet catastrophe at a stroke. Planck’s genius was to realize this, and the formula he proposed contains a constant that still bears his name. The energy of a light quantum E is related to its frequency ν via E=hν, where h is Planck’s constant, one of the fundamental constants that occur throughout theoretical physics.

Boltzmann had shown that if a system possesses discrete energy states, labelled by j and having energies Ej, then at a given temperature the relative occupation of these states is determined by a “Boltzmann factor” of the form:

n_{j} \propto \exp\left(-\frac{E_{j}}{k_BT}\right),

so that the higher energy state is exponentially less probable than the lower energy state if the energy difference is much larger than the typical thermal energy kB T ; the quantity kB is Boltzmann’s constant, another fundamental constant. On the other hand, if the states are very close in energy compared to the thermal level then they will be roughly equally populated in accordance with the “equipartition” idea I mentioned above.
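As a back-of-the-envelope illustration of that suppression, here is a few-line Python calculation (my own numbers, taking T = 5000 K as in the earlier example) of the Boltzmann factor for a single quantum of infrared, visible and ultraviolet light:

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI values of the constants
T = 5000.0
for lam in (10e-6, 500e-9, 100e-9):        # infrared, visible and ultraviolet wavelengths
    E = h * c / lam                        # energy of one quantum: E = h*nu = h*c/lambda
    print(f"lambda = {lam:.0e} m:  E/kT = {E/(kB*T):5.1f},  exp(-E/kT) = {np.exp(-E/(kB*T)):.1e}")
# The ultraviolet quantum costs ~29 kT, so its Boltzmann factor is ~1e-13:
# such energetic quanta are almost never excited at this temperature.
```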

The trouble with the classical treatment of an electromagnetic field is that it makes it too easy for the field to store infinite energy in short-wavelength oscillations: it can put a little bit of energy in each of a lot of modes in an unlimited way. Planck realised that his idea would mean ultraviolet radiation could only be emitted in very energetic quanta, rather than in lots of little bits. Building on Boltzmann’s reasoning, he deduced that the probability of exciting a quantum with very high energy is exponentially suppressed. This in turn leads to an exponential cut-off in the black-body curve at short wavelengths. Triumphantly, he was able to calculate the exact form of the black-body curve expected in his theory: it matches the Rayleigh-Jeans form at long wavelengths, but turns over and decreases at short wavelengths just as the measurements require. The theoretical Planck curve matches measurements perfectly over the entire range of wavelengths that experiments have been able to probe.
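
For anyone who wants to see the turnover emerge from the formulas themselves, here is a minimal sketch comparing the two laws in their standard textbook forms (the temperature is an assumed, illustrative value):

import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23       # Planck, speed of light, Boltzmann (SI)
T = 5800.0                                       # assumed temperature, roughly the solar surface

def rayleigh_jeans(lam):
    # classical spectral radiance: grows without limit as the wavelength shrinks
    return 2.0 * c * k_B * T / lam**4

def planck(lam):
    # Planck's law: agrees with Rayleigh-Jeans at long wavelengths,
    # but the exponential factor cuts the curve off at short ones
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

for lam in [1e-5, 1e-6, 1e-7]:                   # 10 microns, 1 micron, 100 nm
    print(f"{lam:.0e} m:  Rayleigh-Jeans = {rayleigh_jeans(lam):.3e}   Planck = {planck(lam):.3e}")

At 10 microns the two agree reasonably well; at 100 nm the classical formula overshoots by many orders of magnitude, which is the ultraviolet catastrophe in miniature.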

Curiously perhaps, Planck stopped short of the modern interpretation of this: that light (and other electromagnetic radiation) is composed of particles which we now call photons. He was still wedded to Maxwell’s description of light as a wave phenomenon, so he preferred to think of the exchange of energy as being quantised rather than the radiation itself. Einstein’s work on the photoelectric effect in 1905 further vindicated Planck, but also demonstrated that light travelled in packets. After Planck’s work, and the development of the quantum theory of the atom pioneered by Niels Bohr, quantum theory really began to take hold of the physics community and eventually it became acceptable to conceive of not just photons but all matter as being part particle and part wave. Photons are examples of a kind of particle known as a boson, and the atomic constituents such as electrons and protons are fermions. (This classification arises from their spin: bosons have spin which is an integer multiple of Planck’s constant, whereas fermions have half-integral spin.)

You might have expected that the radical step made by Planck would immediately have led to a drastic overhaul of the system of thermodynamics put in place in the preceding half-a-century, but you would be wrong. In many ways the realization that discrete energy levels were involved in the microscopic description of matter if anything made thermodynamics easier to understand and apply. Statistical reasoning is usually most difficult when the space of possibilities is complicated. In quantum theory one always deals fundamentally with a discrete space of possible outcomes. Counting discrete things is not always easy, but it’s usually easier than counting continuous things. Even when they’re infinite.

Much of modern physics research lies in the arena of condensed matter physics, which deals with the properties of solids and liquids, often at the very low temperatures where quantum effects become important. The statistical thermodynamics of these systems is based on a very slight modification of Boltzmann’s result:

n_{j} \propto \left[\exp\left(\frac{E_{j}}{k_BT}\right)\pm 1\right]^{-1},

which gives the equilibrium occupation of states at an energy level Ej; the difference between bosons and fermions manifests itself as the sign in the denominator. Fermions take the upper “plus” sign, and the resulting statistical framework is based on the so-called Fermi-Dirac distribution; bosons have the minus sign and obey Bose-Einstein statistics. This modification of the classical theory of Maxwell and Boltzmann is simple, but leads to a range of fascinating phenomena, from neutron stars to superconductivity.
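
To see what the ±1 actually does, here is a minimal sketch evaluating the two occupation formulas quoted above (with the chemical potential ignored, as in the expression given, and purely illustrative values of E/kBT):

import numpy as np

def occupation(E_over_kT, sign):
    # sign = +1 gives Fermi-Dirac, sign = -1 gives Bose-Einstein
    return 1.0 / (np.exp(E_over_kT) + sign)

print(" E/kT    Fermi-Dirac    Bose-Einstein    Boltzmann")
for x in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"{x:5.1f}    {occupation(x, +1):11.4f}    {occupation(x, -1):13.4f}    {np.exp(-x):9.4f}")

At energies well above the thermal level all three expressions agree, because the ±1 is then negligible; the differences appear at low energies, where the Fermi-Dirac occupation can never exceed one while the Bose-Einstein occupation can grow without limit.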

Moreover, the nature of the ultraviolet catastrophe for black-body radiation at the start of the 20th Century perhaps also holds lessons for modern physics. One of the fundamental problems we have in theoretical cosmology is how to calculate the energy density of the vacuum using quantum field theory. This is a more complicated thing to do than working out the energy in an electromagnetic field, but the net result is a catastrophe of the same sort. All straightforward ways of computing this quantity produce a divergent answer unless a high-energy cut-off is introduced. Although cosmological observations of the accelerating universe suggest that vacuum energy is there, its measured energy density is far smaller than the value implied by any plausible cut-off.

So there we are. A hundred years on, we have another nasty infinity. It’s a fundamental problem, but its answer will probably open up a new way of understanding the Universe.



A Unified Quantum Theory of the Sexual Interaction

Posted in The Universe and Stuff with tags , , , on May 20, 2009 by telescoper

Recent changes to the criteria for allocating research funding require particle physicists  and astronomers to justify the wider social, cultural and economic impact of their science. In view of the directive to engage in work more directly relevant to the person in the street, I’ve decided to share with you my latest results, which involve the application of ideas from theoretical physics in the wider field of human activity. That is, if you’re one of those people who likes to have sex in a field.

In the simplest theories of the sexual interaction, the eigenstates of the Hamiltonian describing all allowed forms of two-body coupling are identified with the conventional gender states, “Male” and “Female”  denoted |M> and |F> in the Dirac bra-ket notation; note that the bra is superfluous in this context so, as usual, we dispense with it at the outset. Interactions between |M> and |F> states are assumed to be attractive while those between |M> and |M> or |F> and |F> are supposed either to be repulsive or, in some theories, entirely forbidden.

Observational evidence, however, strongly  suggests that two-body interactions involving either F-F or M-M coupling, though suppressed in many  situations, are by no means ruled out  in the manner one would expect from the simplest theory outlined above. Furthermore, experiments indicate that the relevant channel for M-M interactions appears to have a comparable cross-section to that of the standard M-F variety, so a similar form of tunneling is presumably involved. This suggests that a more complete theory could be obtained by a  relatively simple modification of the  version presented above.

Inspired by the recent Nobel prize awarded for the theory of quark mixing, we are now able to present a new, unified theory of the sexual interaction. In our theory the “correct” eigenstates for sexual behaviour are not the conventional |M> and |F> gender states but linear combinations of the form

|M>=cosθ|S> + sinθ|G>

|F>=-sinθ|S> + cosθ|G>

where θ is the Cabibbo mixing angle or, more appropriately in this context, the sexual orientation (measured in degrees). Extension to three states is in principle possible (but a bit complicated) and we will not discuss this issue further.

In this theory each |M> or |F> state is regarded as a linear combination of heterosexual (straight, S)  and homosexual (gay, G) states represented by a rotation of the basis by an angle θ, exactly the same mechanism that accounts for the charge-changing weak interactions between quarks.

For a purely heterosexual state this angle is zero, in which case we recover the simple theory outlined above. At θ=90° only the G component manifests itself; in this state only classically forbidden interactions are permitted. The general state is, however, one with a value of the orientation angle somewhere between these two limits, and this permits all forms of interaction, at least with some probability.

Note added in proof: the |G> states do not appear in standard QFT but are motivated by some versions of string theory, especially those involving G-strings.

One immediate consequence of this theory is that a “pure” gender state should be generally regarded as a quantum superposition of “straight” and “gay” states. This differs from a classical theory in that the true state cannot be known with certainty; only the relative frequency of straight and gay behaviour (over a large number of interactions) can be predicted, perhaps explaining the large number of married men to be found on gaydar. The state at any given time is thus entirely determined by a sum over histories up to that moment, taking into account the appropriate action. In the Copenhagen interpretation, collapse one way or the other occurs only when a measurement is made (or when enough Carlsberg is drunk).

If there is a difference in energy between the basis states, a state prepared as pure |M> will oscillate, periodically developing an |F> component, owing to the time-dependent phase factor that arises when the two basis states interfere with each other:

|M(t)>=cosθ|S>exp(-iE1t) + sinθ|G>exp(-iE2t);

(obviously we are using natural units here, so that it all looks cleverer than it actually is). This equation is the origin of the expressions  “it’s just a phase he’s going through” and “he swings both ways”. In physics parlance this means that the eigenstates of the sexual interaction do not coincide with the conventional gender types, indicating that sexual behaviour is not necessarily time-invariant for a given body.

Whether single-body phenomena (i.e. self-interactions) can provide insights into this theory  depends, as can be seen from the equation,  on the energies of the relevant states (as is also the case  in neutrino oscillations). If they are equal then there is no oscillation. However,  a detailed discussion of the role of degeneracy is beyond the scope of this analysis.
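
For the terminally curious, here is a minimal sketch of the standard two-state oscillation probability that mixing of this kind implies, exactly as in two-flavour neutrino oscillations; the orientation angle and the energy splitting are invented for illustration and appear nowhere in the discussion above:

import numpy as np

theta = np.radians(30.0)           # assumed orientation (mixing) angle
delta_E = 1.0                      # assumed energy splitting E1 - E2, in natural units

# probability that a state prepared as |M> is later found as |F>:
# P = sin^2(2*theta) * sin^2(delta_E * t / 2)
for t in np.linspace(0.0, 2.0 * np.pi / delta_E, 9):
    P = np.sin(2.0 * theta)**2 * np.sin(delta_E * t / 2.0)**2
    print(f"t = {t:5.2f}   P(M -> F) = {P:.3f}")

If the two energies are equal the splitting vanishes, the relative phase never advances, and P stays at zero for all time, which is the degenerate case just mentioned.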

Self-interactions involving a solitary phase are generally difficult to observe, although examples have been documented that involve short-lived but highly-excited states accompanied by various forms of stimulated emission. Unfortunately, however, the resulting fluxes are not often well measured. This form of interaction also appears to be the current preoccupation of string theorists.

More definitive evidence for the theory might emerge from situations involving some form of entanglement, such as in the examples of M-M and F-F coupling mentioned above.  Non-local interactions of a sexual type are possible in principle, but causality and simultaneity issues exist and most researchers consequently prefer to focus on local interactions, which are generally supposed to be more satisfactory from the point-of-view of reproducibility.

Although the theory is qualitatively successful we need more experimental data to pin down the parameters needed for a robust fit. It is not known, for example, whether the rates of M-M and F-F coupling are similar or, indeed, whether the peak intensity of these interactions, when resonance is reached, is similar to that of the standard M-F form. It is generally accepted, however, that the rate of decay from peak intensity is rather slower for processes involving |F> states than for |M>, which is not so easy to model in this theory, although with a bit of renormalization we can probably explain anything.

Answers to these questions can perhaps be gleaned from observations of many-body processes  (i.e. those with N≥3),  especially if they involve a multiplicity of hardon states (i.e. collective excitations). Only these permit a full exploration of all possible degrees of freedom, although higher-order Feynman diagrams are needed to depict them and they require more complicated group theoretical techniques.  Examples like the one  shown above  – representing a threesome – are not well understood, but undoubtedly contribute significantly to the bi-spectrum.

One might also speculate that in these and other highly excited states,  the sexual interaction may be described by something more like the  electroweak theory in which all forms of interaction occur in a much more symmetric fashion and at much higher rates than at lower energies. That sounds like some kind of party…

It is worth remarking that there may be finer structure than this model takes into account. For example, the |G> state is generally associated with singlet configurations like those shown on the right. However, G-G coupling is traditionally described in terms of “top” |t> and “bottom” |b> states, with b-t coupling the preferred mode, leading to the possibility of doublets or even triplets. It may even prove necessary to introduce a further mixing angle φ of the form

|G>=cosφ |t> + sinφ |b>

so that the general state of |G>  is “versatile”. However, whether G-G interactions can be adequately described even in this extended theory is a matter for debate until the intensity of t-t and b-b  coupling is more accurately measured.

Finally, we should like to point out the difference between our model and that of the usual quark sextet, in which interacting states are described in terms of three pairs: the bottom (b) and top (t) which we have mentioned already; the strange (s) and charmed (c); and the up (u) and down (d). While it is clear that |b> and |t> do exhibit strong interactions and it appears plausible that |s> and |c> might do likewise, the sexual interaction clearly breaks the isospin symmetry between the |u> and the |d> in both M-M and M-F cases. The “up” state is definitely preferred in all forms of coupling and, indeed, the “down” has only ever been known to engage in weak interactions.

We have recently submitted an application to the Science and Technology Facilities Council for a modest sum (£754 million) to build a large-scale  UK facility  in order to carry out hands-on experimental tests of some aspects of the theory. We hope we can rely on the support of the physics community in agreeing to close down their labs and quit their jobs in order to release the funding needed to support it.