Archive for November, 2010

On the Nature of Time

Posted in Uncategorized on November 24, 2010 by telescoper

I couldn’t resist posting this little piece, taken from an episode of The Goon Show first broadcast in 1957. Spike Milligan wrote most of the scripts for this long-running and hugely popular radio show as well as playing several of the characters including, in this clip, the gormless Eccles heard in dialogue with Bluebottle, played by Peter Sellers.

The Goon Show shattered the conventions of radio comedy with its anarchic humour, nonsensical plots, and sheer silliness; it was a direct ancestor of Monty Python’s Flying Circus, a debt acknowledged by the Python team. However, the strain of producing weekly scripts for The Goon Show exacted a heavy toll on Spike Milligan who had numerous nervous breakdowns. Not surprisingly, given the rate at which they had to be written, the episodes are uneven in quality but at times Spike Milligan’s comic writing rose to extraordinary heights of genius. Such as this joyfully absurd sequence, which I think is totally brilliant.

Postscript. After The Goon Show came to an end in 1960, Eccles and Bluebottle moved on to other careers. Rumour has it they’ve both applied to be the next Chief Executive of STFC.



Bayes and his Theorem

Posted in Bad Statistics on November 23, 2010 by telescoper

My earlier post on Bayesian probability seems to have attracted quite a lot of readers, so this lunchtime I thought I’d add a little bit of background. The previous discussion started from the result

P(B|AC) = K^{-1}P(B|C)P(A|BC) = K^{-1} P(AB|C)

where

K=P(A|C).

Although this is called Bayes’ theorem, the general form of it as stated here was actually first written down, not by Bayes but by Laplace. What Bayes did was derive the special case of this formula for “inverting” the binomial distribution. This distribution gives the probability of x successes in n independent “trials” each having the same probability of success, p; each “trial” has only two possible outcomes (“success” or “failure”). Trials like this are usually called Bernoulli trials, after Jacob Bernoulli. If we ask the question “what is the probability of exactly x successes from the possible n?”, the answer is given by the binomial distribution:

P_n(x|n,p)= C(n,x) p^x (1-p)^{n-x}

where

C(n,x)= \frac{n!}{x!(n-x)!}

is the number of distinct combinations of x objects that can be drawn from a pool of n.

You can probably see immediately how this arises. The probability of x consecutive successes is p multiplied by itself x times, or p^x. The probability of (n-x) successive failures is similarly (1-p)^{n-x}. These two factors together therefore give the probability of one particular sequence containing exactly x successes (since there must be n-x failures). The combinatorial factor in front takes account of the fact that the ordering of successes and failures doesn’t matter.

The binomial distribution applies, for example, to repeated tosses of a coin, in which case p is taken to be 0.5 for a fair coin. A biased coin might have a different value of p, but as long as the tosses are independent the formula still applies. The binomial distribution also applies to problems involving drawing balls from urns: it works exactly if the balls are replaced in the urn after each draw, but it also applies approximately without replacement, as long as the number of draws is much smaller than the number of balls in the urn. I leave it as an exercise to calculate the expectation value of the binomial distribution, but the result is not surprising: E(X)=np. If you toss a fair coin ten times the expectation value for the number of heads is 10 times 0.5, which is five. No surprise there. After another bit of maths, the variance of the distribution can also be found. It is np(1-p).
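If you’d rather check these results numerically than do the algebra, here is a little Python sketch (purely my own illustration, not part of any derivation) that builds the binomial distribution directly from the formula above and verifies that the mean and variance come out as np and np(1-p):

from math import comb

def binomial_pmf(x, n, p):
    # Probability of exactly x successes in n independent trials,
    # each with success probability p
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.5  # ten tosses of a fair coin

# The probabilities of all possible outcomes must sum to one
assert abs(sum(binomial_pmf(x, n, p) for x in range(n + 1)) - 1.0) < 1e-12

# Expectation E(X) = sum of x P(x): should equal n*p = 5
mean = sum(x * binomial_pmf(x, n, p) for x in range(n + 1))

# Variance E[(X - E(X))^2]: should equal n*p*(1-p) = 2.5
variance = sum((x - mean)**2 * binomial_pmf(x, n, p) for x in range(n + 1))

print(mean, variance)  # 5.0 2.5 (up to rounding error)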

So this gives us the probability of x given a fixed value of p. Bayes was interested in the inverse of this result, the probability of p given x. In other words, Bayes was interested in the answer to the question “If I perform n independent trials and get x successes, what is the probability distribution of p?”. This is a classic example of inverse reasoning. He got the correct answer, eventually, but by very convoluted reasoning. In my opinion it is quite difficult to justify the name Bayes’ theorem based on what he actually did, although Laplace did specifically acknowledge this contribution when he derived the general result later, which is no doubt why the theorem is always named in Bayes’ honour.
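Just to show what the inverse problem looks like in modern terms, here is a sketch (my own, in Python) that computes the probability distribution of p given x successes in n trials on a grid of values, assuming a uniform prior on p, which is essentially the assumption Bayes made. The exact answer in this case is a Beta(x+1, n-x+1) distribution.

import numpy as np

def posterior_for_p(x, n, grid_size=1001):
    # Posterior distribution of the success probability p after observing
    # x successes in n trials, assuming a uniform prior on p
    p = np.linspace(0.0, 1.0, grid_size)
    likelihood = p**x * (1 - p)**(n - x)   # binomial likelihood (constant factor dropped)
    prior = np.ones_like(p)                # uniform prior on [0, 1]
    unnormalised = prior * likelihood
    posterior = unnormalised / np.trapz(unnormalised, p)  # normalise to unit area
    return p, posterior

# e.g. 7 successes in 10 trials: the posterior peaks at p = 0.7
# and matches the exact Beta(8, 4) density
p, post = posterior_for_p(7, 10)
print(p[np.argmax(post)])  # approximately 0.7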

This is not the only example in science where the wrong person’s name is attached to a result or discovery. In fact, it is almost a law of Nature that any theorem that has a name has the wrong name. I propose that this observation should henceforth be known as Coles’ Law.

So who was the mysterious mathematician behind this result? Thomas Bayes was born in 1702, son of Joshua Bayes, who was a Fellow of the Royal Society (FRS) and one of the very first nonconformist ministers to be ordained in England. Thomas was himself ordained and for a while worked with his father in the Presbyterian Meeting House in Leather Lane, near Holborn in London. In 1720 he was a minister in Tunbridge Wells, in Kent. He retired from the church in 1752 and died in 1761. Thomas Bayes didn’t publish a single paper on mathematics in his own name during his lifetime but despite this was elected a Fellow of the Royal Society (FRS) in 1742. Presumably he had Friends of the Right Sort. He did however write a paper on fluxions in 1736, which was published anonymously. This was probably the grounds on which he was elected an FRS.

The paper containing the theorem that now bears his name was published posthumously in the Philosophical Transactions of the Royal Society of London in 1764.

P.S. I understand that the authenticity of the picture is open to question. Whoever it actually is, he looks to me a bit like Laurence Olivier…



A Song for Saint Cecilia’s Day

Posted in Poetry on November 22, 2010 by telescoper

In case you didn’t know, today is St Cecilia’s Day, so I thought I’d post this marvellous poem composed in 1687 by John Dryden.

FROM harmony, from heavenly harmony,
This universal frame began:
When nature underneath a heap
Of jarring atoms lay,
And could not heave her head,
The tuneful voice was heard from high,
‘Arise, ye more than dead!’
Then cold, and hot, and moist, and dry,
In order to their stations leap,
And Music’s power obey.
From harmony, from heavenly harmony,
This universal frame began:
From harmony to harmony
Through all the compass of the notes it ran,
The diapason closing full in Man.

What passion cannot Music raise and quell?
When Jubal struck the chorded shell,
His listening brethren stood around,
And, wondering, on their faces fell
To worship that celestial sound:
Less than a God they thought there could not dwell
Within the hollow of that shell,
That spoke so sweetly, and so well.
What passion cannot Music raise and quell?

The trumpet’s loud clangour
Excites us to arms,
With shrill notes of anger,
And mortal alarms.
The double double double beat
Of the thundering drum
Cries Hark! the foes come;
Charge, charge, ’tis too late to retreat!

The soft complaining flute,
In dying notes, discovers
The woes of hopeless lovers,
Whose dirge is whisper’d by the warbling lute.

Sharp violins proclaim
Their jealous pangs and desperation,
Fury, frantic indignation,
Depth of pains, and height of passion,
For the fair, disdainful dame.

But O, what art can teach,
What human voice can reach,
The sacred organ’s praise?
Notes inspiring holy love,
Notes that wing their heavenly ways
To mend the choirs above.

Orpheus could lead the savage race;
And trees unrooted left their place,
Sequacious of the lyre;
But bright Cecilia rais’d the wonder higher:
When to her organ vocal breath was given,
An angel heard, and straight appear’d
Mistaking Earth for Heaven.

GRAND CHORUS.

As from the power of sacred lays
The spheres began to move,
And sung the great Creator’s praise
To all the Blest above;
So when the last and dreadful hour
This crumbling pageant shall devour,
The trumpet shall be heard on high,
The dead shall live, the living die,
And Music shall untune the sky!




A Little Bit of Bayes

Posted in Bad Statistics, The Universe and Stuff on November 21, 2010 by telescoper

I thought I’d start a series of occasional posts about Bayesian probability. This is something I’ve touched on from time to time, but it’s perhaps worth covering this relatively controversial topic in a slightly more systematic fashion, especially with regard to how it works in cosmology.

I’ll start with Bayes’ theorem which for three logical propositions (such as statements about the values of parameters in a theory) A, B and C can be written in the form

P(B|AC) = K^{-1}P(B|C)P(A|BC) = K^{-1} P(AB|C)

where

K=P(A|C).

This is (or should be!) uncontroversial as it is simply a result of the sum and product rules for combining probabilities. Notice, however, that I’ve not restricted it to two propositions A and B as is often done, but have carried an extra one (C) throughout. This is to emphasize the fact that, to a Bayesian, all probabilities are conditional on something; usually, in the context of data analysis this is a background theory that furnishes the framework within which measurements are interpreted. If you say this makes everything model-dependent, then I’d agree. But every interpretation of data in terms of parameters of a model is dependent on the model. It has to be. If you think it can be otherwise then I think you’re misguided.

In the equation, P(B|C) is the probability of B being true, given that C is true. The information C need not be definitely known, but may simply be assumed for the sake of argument. The left-hand side of Bayes’ theorem denotes the probability of B given both A and C, and so on. The presence of C has not changed anything, but is just there as a reminder that it all depends on what is being assumed in the background. The equation states a theorem that can be proved to be mathematically correct so it is – or should be – uncontroversial.

Now comes the controversy. In the “frequentist” interpretation of probability, the entities A, B and C would be interpreted as “events” (e.g. the coin is heads) or “random variables” (e.g. the score on a dice, a number from 1 to 6) attached to which is their probability, indicating their propensity to occur in an imagined ensemble. These things are quite complicated mathematical objects: they don’t have specific numerical values, but are represented by a measure over the space of possibilities. They are sort of “blurred-out” in some way, the fuzziness representing the uncertainty in the precise value.

To a Bayesian, the entities A, B and C have a completely different character to what they represent for a frequentist. They are not “events” but logical propositions which can only be either true or false. The entities themselves are not blurred out, but we may have insufficient information to decide which of the two possibilities is correct. In this interpretation, P(A|C) represents the degree of belief that it is consistent to hold in the truth of A given the information C. Probability is therefore a generalization of the “normal” deductive logic expressed by Boolean algebra: the value “0” is associated with a proposition which is false and “1” denotes one that is true. Probability theory extends this logic to the intermediate case where there is insufficient information to be certain about the status of the proposition.

A common objection to Bayesian probability is that it is somehow arbitrary or ill-defined. “Subjective” is the word that is often bandied about. This is only fair to the extent that different individuals may have access to different information and therefore assign different probabilities. Given different information C and C′ the probabilities P(A|C) and P(A|C′) will be different. On the other hand, the same precise rules for assigning and manipulating probabilities apply as before. Identical results should therefore be obtained whether these are applied by any person, or even a robot, so that part isn’t subjective at all.

In fact I’d go further. I think one of the great strengths of the Bayesian interpretation is precisely that it does depend on what information is assumed. This means that such information has to be stated explicitly. The essential assumptions behind a result can be – and, regrettably, often are – hidden in frequentist analyses. Being a Bayesian forces you to put all your cards on the table.

To a Bayesian, probabilities are always conditional on other assumed truths. There is no such thing as an absolute probability, hence my alteration of the form of Bayes’ theorem to represent this. A probability such as P(A) has no meaning to a Bayesian: there is always conditioning information. For example, if I blithely assign a probability of 1/6 to each face of a dice, that assignment is actually conditional on me having no information to discriminate between the appearance of the faces, and no knowledge of the rolling trajectory that would allow me to make a prediction of its eventual resting position.

In the Bayesian framework, probability theory becomes not a branch of experimental science but a branch of logic. Like any branch of mathematics it cannot be tested by experiment but only by the requirement that it be internally self-consistent. This brings me to what I think is one of the most important results of twentieth century mathematics, but which is unfortunately almost unknown in the scientific community. In 1946, Richard Cox derived the unique generalization of Boolean algebra under the assumption that such a logic must involve associating a single number with any logical proposition. The result he got is beautiful and anyone with any interest in science should make a point of reading his elegant argument. It turns out that the only way to construct a consistent logic of uncertainty incorporating this principle is by using the standard laws of probability. There is no other way to reason consistently in the face of uncertainty than probability theory. Accordingly, probability theory always applies when there is insufficient knowledge for deductive certainty. Probability is inductive logic.

This is not just a nice mathematical property. This kind of probability lies at the foundations of a consistent methodological framework that not only encapsulates many common-sense notions about how science works, but also puts at least some aspects of scientific reasoning on a rigorous quantitative footing. This is an important weapon that should be used more often in the battle against the creeping irrationalism one finds in society at large.

I posted some time ago about an alternative way of deriving the laws of probability from consistency arguments.

To see how the Bayesian approach works, let us consider a simple example. Suppose we have a hypothesis H (some theoretical idea that we think might explain some experiment or observation). We also have access to some data D, and we adopt some prior information I (which might be the results of other experiments or simply working assumptions). What we want to know is how strongly the data D supports the hypothesis H given our background assumptions I. To keep it easy, we assume that the choice is between whether H is true or H is false. In the latter case, “not-H” or H′ (for short) is true. If our experiment is at all useful we can construct P(D|HI), the probability that the experiment would produce the data set D if both our hypothesis and the conditional information are true.

The probability P(D|HI) is called the likelihood; to construct it we need to have some knowledge of the statistical errors produced by our measurement. Using Bayes’ theorem we can “invert” this likelihood to give P(H|DI), the probability that our hypothesis is true given the data and our assumptions. The result looks just like we had in the first two equations:

P(H|DI) = K^{-1}P(H|I)P(D|HI) .

Now we can expand the “normalising constant” K because we know that either H or H′ must be true. Thus

K=P(D|I)=P(H|I)P(D|HI)+P(H^{\prime}|I) P(D|H^{\prime}I)

The P(H|DI) on the left-hand side of the first expression is called the posterior probability; the right-hand side involves P(H|I), which is called the prior probability, and the likelihood P(D|HI). The principal controversy surrounding Bayesian inductive reasoning involves the prior and how to define it, which is something I’ll comment on in a future post.
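To make the recipe concrete, here is a minimal sketch in Python of the two-hypothesis case just described, with invented numbers purely for illustration:

def posterior(prior_H, likelihood_H, likelihood_notH):
    # P(H|DI) from the formulae above, for a simple yes/no hypothesis
    prior_notH = 1.0 - prior_H                                  # P(H'|I)
    K = prior_H * likelihood_H + prior_notH * likelihood_notH   # K = P(D|I)
    return prior_H * likelihood_H / K

# Invented numbers: prior belief of 0.5 in H, and data three times more
# probable under H than under its negation
print(posterior(prior_H=0.5, likelihood_H=0.3, likelihood_notH=0.1))  # 0.75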

The Bayesian recipe for testing a hypothesis assigns a large posterior probability to a hypothesis for which the product of the prior probability and the likelihood is large. It can be generalized to the case where we want to pick the best of a set of competing hypotheses, say H1 … Hn. Note that this need not be the set of all possible hypotheses, just those that we have thought about. We can only choose from what is available. The hypotheses may be relatively simple, such as that some particular parameter takes the value x, or they may be composite, involving many parameters and/or assumptions. For instance, the Big Bang model of our universe is a very complicated hypothesis, or in fact a combination of hypotheses joined together, involving at least a dozen parameters which can’t be predicted a priori but which have to be estimated from observations.

The required result for multiple hypotheses is pretty straightforward: the sum of the two alternatives involved in K above simply becomes a sum over all possible hypotheses, so that

P(H_i|DI) = K^{-1}P(H_i|I)P(D|H_iI),

and

K=P(D|I)=\sum_j P(H_j|I)P(D|H_jI)

If the hypothesis concerns the value of a parameter – in cosmology this might be, e.g., the mean density of the Universe expressed by the density parameter Ω0 – then the allowed space of possibilities is continuous. The sum in the denominator should then be replaced by an integral, but conceptually nothing changes. Our “best” hypothesis is the one that has the greatest posterior probability.

From a frequentist stance the procedure is often instead to just maximize the likelihood. According to this approach the best theory is the one that makes the data most probable. This can coincide with the most probable theory, but only if the prior probability is constant; in general, the probability of a model given the data is not the same as the probability of the data given the model. I’m amazed how many practising scientists make this error on a regular basis.
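Here is a toy illustration of that distinction (my own sketch, not anyone’s real analysis) for a single continuous parameter p estimated from coin-toss data. With a flat prior the posterior mode and the maximum-likelihood value coincide; with the assumed non-flat prior below they do not:

import numpy as np

# Toy problem: infer a success probability p from x successes in n trials,
# with a prior that is not flat, to show that the maximum-likelihood value
# and the posterior mode need not coincide
n, x = 10, 7
p = np.linspace(0.001, 0.999, 999)

likelihood = p**x * (1 - p)**(n - x)   # P(D|p,I), up to a constant factor
prior = 2.0 * (1.0 - p)                # an assumed prior favouring smaller p
posterior = prior * likelihood
posterior /= np.trapz(posterior, p)    # the integral here plays the role of the sum in K

print("maximum likelihood:", p[np.argmax(likelihood)])  # about 0.70
print("posterior mode:    ", p[np.argmax(posterior)])   # about 0.64, pulled down by the prior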

The following figure might serve to illustrate the difference between the frequentist and Bayesian approaches. In the former case, everything is done in “data space” using likelihoods, and in the other we work throughout with probabilities of hypotheses, i.e. we think in hypothesis space. I find it interesting to note that most theorists that I know who work in cosmology are Bayesians and most observers are frequentists!


As I mentioned above, it is the presence of the prior probability in the general formula that is the most controversial aspect of the Bayesian approach. The attitude of frequentists is often that this prior information is completely arbitrary or at least “model-dependent”. Being empirically-minded people, by and large, they prefer to think that measurements can be made and interpreted without reference to theory at all.

Assuming we can assign the prior probabilities in an appropriate way, what emerges from the Bayesian framework is a consistent methodology for scientific progress. The scheme starts with the hardest part – theory creation. This requires human intervention, since we have no automatic procedure for dreaming up hypotheses from thin air. Once we have a set of hypotheses, we need data against which theories can be compared using their relative probabilities. The experimental testing of a theory can happen in many stages: the posterior probability obtained after one experiment can be fed in, as prior, into the next. The order of experiments does not matter. This all happens in an endless loop, as models are tested and refined by confrontation with experimental discoveries, and are forced to compete with new theoretical ideas. Often one particular theory emerges as most probable for a while, such as in particle physics where a “standard model” has been in existence for many years. But this does not make it absolutely right; it is just the best bet amongst the alternatives. Likewise, the Big Bang model does not represent the absolute truth, but is just the best available model in the face of the manifold relevant observations we now have concerning the Universe’s origin and evolution. The crucial point about this methodology is that it is inherently inductive: all the reasoning is carried out in “hypothesis space” rather than “observation space”. The primary form of logic involved is not deduction but induction. Science is all about inverse reasoning.
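The point about the order of experiments not mattering is easy to demonstrate numerically. In the little Python sketch below (my own toy example), the posterior from one coin-tossing experiment is fed in as the prior for a second, independent one; processing the experiments in either order gives the same final distribution:

import numpy as np

p = np.linspace(0.001, 0.999, 999)   # grid of values for the parameter p

def update(prior, likelihood):
    # Posterior on the grid: prior times likelihood, renormalised to unit area
    post = prior * likelihood
    return post / np.trapz(post, p)

# Two independent (made-up) coin-tossing experiments:
like_A = p**3 * (1 - p)**2     # 3 heads in 5 tosses
like_B = p**6 * (1 - p)**4     # 6 heads in 10 tosses

flat = np.ones_like(p)                           # start from a flat prior
post_AB = update(update(flat, like_A), like_B)   # experiment A first, then B
post_BA = update(update(flat, like_B), like_A)   # experiment B first, then A

print(np.allclose(post_AB, post_BA))  # True: the order of updating doesn't matter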

For comments on induction versus deduction in another context, see here.

So what are the main differences between the Bayesian and frequentist views?

First, I think it is fair to say that the Bayesian framework is enormously more general than is allowed by the frequentist notion that probabilities must be regarded as relative frequencies in some ensemble, whether that is real or imaginary. In the latter interpretation, a proposition is at once true in some elements of the ensemble and false in others. It seems to me to be a source of great confusion to substitute a logical AND for what is really a logical OR. The Bayesian stance is also free from problems associated with the failure to incorporate in the analysis any information that can’t be expressed as a frequency. Would you really trust a doctor who said that 75% of the people she saw with your symptoms required an operation, but who did not bother to look at your own medical files?

As I mentioned above, frequentists tend to talk about “random variables”. This takes us into another semantic minefield. What does “random” mean? To a Bayesian there are no random variables, only variables whose values we do not know. A random process is simply one about which we only have sufficient information to specify probability distributions rather than definite values.

More fundamentally, it is clear from the fact that the combination rules for probabilities were derived by Cox uniquely from the requirement of logical consistency, that any departure from these rules will generally speaking involve logical inconsistency. Many of the standard statistical data analysis techniques – including simple “unbiased estimators” – that are used when the data consist of repeated samples of a variable having a definite but unknown value are not equivalent to Bayesian reasoning. These methods can, of course, give good answers, but they can all be made to look completely silly by suitable choice of dataset.

By contrast, I am not aware of any example of a paradox or contradiction that has ever been found using the correct application of Bayesian methods, although the methods can of course be applied incorrectly. Furthermore, in order to deal with unique events like the weather, frequentists are forced to introduce the notion of an ensemble, a perhaps infinite collection of imaginary possibilities, to allow them to retain the notion that probability is a proportion. Provided the calculations are done correctly, the results of these calculations should agree with the Bayesian answers. On the other hand, frequentists often talk about the ensemble as if it were real, and I think that is very dangerous…



Come White Van Man to Bute Park Now…

Posted in Bute Park, Politics on November 20, 2010 by telescoper

If you needed any proof of Cardiff City Council’s dishonesty about the likely effects of their new road into Bute Park then just take a look at these examples of private vehicles littering this once beautiful site. I should also say that there used to be signs proclaiming a 5mph speed limit on the public footpaths, but these have all been taken away, giving the dreaded White Van Man a licence to drive at high speed around the Park. I’ve stopped walking through it, in fact, on my way to work in the mornings as it has become too unpleasant battling my way through the traffic. Much more of this and I’m afraid Bute Park just won’t be fit for humans…



The Trouble with Columbo

Posted in Columbo on November 20, 2010 by telescoper

So far it’s been a busy and extremely frustrating Saturday all on account of my old moggy, Columbo…

Today I took him to the vet’s for his six-monthly check-up. All went well, even to the extent that he didn’t try to take the vet’s arm off when they took a blood sample for the fructosamine test that checks whether his diabetes has been under control since the last visit. He’s even lost a bit of weight, which won’t do him any harm, although at 6.8 kg he’s still not exactly slim. His only indiscretion was to have a wee in his carrying box on the way there, but that’s nothing particularly unusual and was easily dealt with.

However, when I went to pick up his supplies (food, medication, syringes, and insulin) the vet informed me that the manufacturer of the kind of insulin he normally gets is no longer supplying it. This particular type is of a flavour called “Protamine Zinc”, although I don’t really know what’s so special about that. Anyway, given that I’m running low the vet wrote me out a private prescription for human insulin, which apparently they are allowed to do if the supply of veterinary products runs out.

So I took Columbo home with the other stuff, left him in the house and, prescription in hand, romped off to the nearest pharmacy, which turned out to be the first of many I visited this morning. The problem is that human persons who are diabetic generally don’t use the old-fashioned vial-and-syringes approach to administering insulin, but get their dose from preloaded gadgets that look a bit like pens. These won’t do for cats, whose skin is too thick. So, one after the other, various pharmacists explained that they would have to order the stuff I needed, and that it might take a while to arrive since there’s not much demand for it these days. None of them had a supplier that was open on Saturdays either…

Eventually I gave up trying to find the insulin today and left the chit with a pharmacist to order on Monday when their supplier is open. That is, if they’re able to supply it at all.

I’m not sure what I’m going to do if I can’t get the supply Columbo needs. Probably we’ll have to switch to another type of insulin, but the problem with that is that we’ll have to establish the right dose. He’s been stable on his current dose of his normal insulin for a long time now, but it did take a long time to sort out how much he needs. If I have to start again on a different type, it will probably require several tests to see how he responds.

Anyway, having hoped to get the business of his insulin supply sorted out today, I’m now forced to wait until Monday to see if I can get the necessary from the pharmacist. If not, I’ll have to talk to the vet when the fructosamine results come back to see what to do about starting on a new type. It’s all a bit of a pain, and I’m knackered after traipsing around half the chemists in Cardiff on a wild goose chase.

Columbo, however, is oblivious to all this and is doing pretty well. While I’ve been running around on his behalf he has been sleeping as is his wont, this time in the bathroom. Here’s a picture of him taken after he’d just woken up.

Now it’s time to do a bit of relaxation of my own, in the form of the Guardian Prize crossword.



The Inconceivable Nature of Nature

Posted in The Universe and Stuff on November 19, 2010 by telescoper

I had a couple of requests to post yet another Feynman clip. This one – about electromagnetic waves and swimming pools – is one that I vividly remember watching on the BBC when it was first broadcast donkeys’ years ago. I think it’s totally wonderful.



At It

Posted in Poetry, The Universe and Stuff on November 18, 2010 by telescoper

Apologies for my posts being a bit thin on original content recently. There’s a lot going on at the moment and it has not been easy to find the time to write at any length. Before too long I hope to be able to get back into the swing of things and maybe even blog about science. Or even do some! In the meantime, however, I couldn’t resist passing on this poem, At It, by R.S. Thomas. I’ve posted some of his verse on previous occasions, but I only found this one a few days ago and wanted to share it, not least because it mentions Sir Arthur Eddington (probably in a reference to one of his popular science books).

I think he sits at that strange table
of Eddington’s. That is not a table
at all, but nodes and molecules
pushing against molecules
and nodes; and he writes there
in invisible handwriting the instructions
the genes follow. I imagine his
face that is more the face
of a clock, and the time told by it
is now, though Greece is referred
to and Egypt and empires
not yet begun.
+++++++++ And I would have
things to say to this God
at the judgement, storming at him,
as Job stormed with the eloquence
of the abused heart. But there will
be no judgement other than the verdict
of his calculations, that abstruse
geometry that proceeds eternally
in the silence beyond right and wrong.



A Sign of the Times

Posted in Education, Finance on November 18, 2010 by telescoper

Given yesterday’s announcement of cuts to the Higher Education budget in Wales, and the likely outcome in terms of increased costs to students, this picture of a sign I found the other day at the entrance to Bute Park seems particularly apt…



Higher Education Spending in Wales

Posted in Education, Politics on November 17, 2010 by telescoper

Just a quick post to pass on the news that the Welsh Assembly has now published its draft budget for 2011/12 (and following years). You can find the documents related to this here, the most useful one of which is this.

I haven’t got time to comment in detail but, being a university employee, I skipped directly to the section about Higher Education and found the following:

In order to direct funding to schools and skills, the majority of budget reductions have been focused on specific budgets. Higher Education will receive a reduction over the next 3 years of £51m. This amounts to some 11.8%, compared to the severe reductions proposed in England. The planned reductions will facilitate the statutory commitment to provide financial support for Higher Education students, numbers of which have increased significantly over the past two years. This does not predetermine the Welsh Assembly Government’s response to the Browne Review. The reductions include the efficiency savings we expect to be delivered through the implementation of our Higher Education strategy, For our Future. The commitment to the development of the University of the Heads of The Valleys (UHoVI) and Coleg Cymraeg Cenedlaethol (formerly Coleg Federal) will, however, remain a priority to be funded from this budget.

In other words, Higher Education is to bear the brunt of protecting the budget for Schools (which remains roughly level in cash terms) and the budget for Further Education (which is cut by about 2%). Clearly the WAG must either think that maintaining funding for Higher Education is a low priority or that money saved from HE can be recouped some other way (i.e. through increasing fees or cutting student support).

A 12% cut in cash terms is much worse in real terms, of course, but the draft budget doesn’t give any details of how this is going to be broken down in terms of research and teaching allocations. Moreover, the Welsh Assembly has yet to formulate a response to the Browne Review, which has resulted in proposals for tuition fees of up to £9000 per annum in England. Since the Welsh Assembly elections are to be held next May, it is highly unlikely that a new tuition fee system for Wales will be in place before then. Furthermore, the fact that funding is being diverted into the new institutions described above suggests that even less money than this will be available for established universities.

We also don’t know the extent to which research will be protected. In England, a cut of 40% has been applied to teaching budgets from next year, with research funding largely preserved. It appears something similar is going to happen in Scotland, but with a much smaller overall cut to the universities budget there. Will Wales follow the same pattern, or will it sacrifice any chance of having high quality research-led universities by single-mindedly pursuing its “regional agenda”?

