Archive for Science

The Return of Professor Who

Posted in Biographical, Music, Television, The Universe and Stuff on September 1, 2012 by telescoper

Since the new series of Doctor Who is to start this evening on BBC1, I thought I’d mark the occasion by posting this old blog item again:

–0–

As a Professor of Astrophysics I am often asked “Why on Earth did you take up such a crazy subject?”

I guess many astronomers, physicists and other scientists have to answer this sort of question. For many of them there is probably a romantic reason, such as seeing the rings of Saturn or the majesty of the Milky Way on a dark night. Others will probably have been inspired by TV documentary series such as The Sky at Night, Carl Sagan’s Cosmos or even Horizon which, believe it or not, actually used to be quite good but which is nowadays uniformly dire. Or it could have been something a bit more mundane but no less stimulating such as a very good science teacher at school.

When I’m asked this question I’d love to be able to put my hand on my heart and give an answer of that sort, but the truth is really quite a long way from those possibilities. The thing that probably did more than anything else to get me interested in science was a Science Fiction TV series; or rather, not exactly the series itself, but its opening titles.

The first episode of Doctor Who was broadcast in the year of my birth, so I don’t remember it at all, but I do remember the astonishing effect the credits had on my imagination when I saw later episodes as a small child. Here is the opening title sequence as it appeared in the very first series featuring William Hartnell as the first Doctor.

To a younger audience it probably all seems quite tame, but I think there’s a haunting, unearthly beauty to the shapes conjured up by Bernard Lodge. Having virtually no budget for graphics, he experimented in a darkened studio with an old-fashioned TV camera and a piece of black card with Doctor Who written on it in white. He created the spooky kaleidoscopic patterns you see by simply pointing the camera so it could see into its own monitor, thus producing a sort of electronic hall of mirrors.

What is so fascinating to me is how a relatively simple underlying concept could produce a rich assortment of patterns, particularly how they seem to take on an almost organic aspect as they merge and transform. I’ve continued to be struck by the idea that complexity could be produced by relatively simple natural laws, which is one of the essential features of astrophysics and cosmology. As a practical demonstration of the universality of physics this sequence takes some beating.
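In fact the feedback idea is simple enough to play with yourself. Here is a toy sketch in Python of the camera-monitor loop (my own reconstruction for illustration, not Lodge’s actual setup; the angle, scale and gain parameters are invented):

import numpy as np
from scipy.ndimage import rotate, zoom

# A stand-in for the white-on-black "Doctor Who" card: a bright rectangle.
source = np.zeros((200, 200))
source[80:120, 60:140] = 1.0

def feedback_step(img, source, angle=5.0, scale=0.9, gain=0.95):
    # One pass around the loop: the camera sees a slightly rotated,
    # shrunken, dimmed copy of its own output superposed on the card.
    fed_back = rotate(img, angle, reshape=False, order=1)
    small = zoom(fed_back, scale, order=1)
    out = np.zeros_like(img)
    top = (img.shape[0] - small.shape[0]) // 2
    left = (img.shape[1] - small.shape[1]) // 2
    out[top:top + small.shape[0], left:left + small.shape[1]] = small
    return np.clip(source + gain * out, 0.0, 1.0)

img = source.copy()
for _ in range(30):  # iterate: nested, spiralling structure accumulates
    img = feedback_step(img, source)

One simple rule, applied over and over, generates structure far richer than the rule itself – which is rather the point.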

As well as these strange and wonderful images, the titles also featured a pioneering piece of electronic music. Officially the composer was Ron Grainer, but he wasn’t very interested in the commission and simply scribbled the theme down and left it to the BBC to turn it into something useable. In stepped the wonderful Delia Derbyshire, unsung heroine of the BBC Radiophonic Workshop who, with only the crudest electronic equipment available, turned it into a little masterpiece. Ethereal yet propulsive, the original theme from Doctor Who is definitely one of my absolute favourite pieces of music and I’m glad to see that Delia Derbyshire is now receiving the acclaim she deserves from serious music critics.

It’s ironic that I’ve now moved to Cardiff where new programmes of Doctor Who and its spin-off, the anagrammatic Torchwood, are made. One of the great things about the early episodes of Doctor Who was that the technology simply didn’t exist to do very good special effects. The scripts were consequently very careful to let the viewers’ imagination do all the work. That’s what made it so good. I’m pleased that the more recent incarnations of this show also don’t go overboard on the visuals. Perhaps that’s a conscious attempt to appeal to people who saw the old ones as well as those too young to have done so. It’s just a pity the modern opening title music is so bad…

Anyway, I still love Doctor Who after all these years. It must sound daft to say that it inspired me to take up astrophysics, but it’s truer than any other explanation I can think of. Of course the career path is slightly different from a Time Lord’s, but only slightly.

At any rate I think The Doctor is overdue for promotion. How about Professor Who?

Pathways to Research

Posted in Education, The Universe and Stuff on August 24, 2012 by telescoper

The other day I had a slight disagreement with a colleague of mine about the best advice to give to new PhD students about how to tackle their research. Talking to a few other members of staff about it subsequently has convinced me that there isn’t really a consensus about it and it might therefore be worth a quick post to see what others think.

Basically the issue is whether a new research student should try to get into “hands-on” research as soon as he or she starts, or whether it’s better to spend most of the initial phase in preparation: reading all the literature, learning the techniques required, taking advanced theory courses, and so on. I know that there’s usually a mixture of these two approaches, and it will vary hugely from one discipline to another, and especially between theory and experiment, but the question is which one do you think should dominate early on?

My view of this is coloured by my own experience as a PhD (or rather DPhil) student twenty-five years ago. I went directly from a three-year undergraduate degree to a three-year postgraduate degree. I did a little bit of background reading over the summer before I started graduate studies, but basically went straight into trying to solve a problem my supervisor gave me when I arrived at Sussex to start my DPhil. I had to learn quite a lot of stuff as I went along in order to get on, which I did in a way that wasn’t at all systematic.

Fortunately I did manage to crack the problem I was given, with the consequence that I got a publication out quite early during my thesis period. Looking back on it I even think that I was helped by the fact that I was too ignorant to realise how difficult more expert people thought the problem was. I didn’t know enough to be frightened. That’s the drawback with the approach of reading everything about a field before you have a go yourself…

In the case of the problem I had to solve, which was actually more to do with applied probability theory than physics, I managed to find (pretty much by guesswork) a cute mathematical trick that turned out to finesse the difficult parts of the calculation I had to do. I really don’t think I would have had the nerve to try such a trick if I had read all the difficult technical literature on the subject.

So I definitely benefited from the approach of diving headlong straight into the detail, but I’m very aware that it’s difficult to argue from the particular to the general. Clearly research students need to do some groundwork; they have to acquire a toolbox of some sort and know enough about the field to understand what’s worth doing. But what I’m saying is that sometimes you can know too much. All that literature can weigh you down so much that it actually stifles rather than nurtures your ability to do research. But then complete ignorance is no good either. How do you judge the right balance?

I’d be interested in comments on this, especially to what extent it is an issue in fields other than astrophysics.

The Return of the Inductive Detective

Posted in Bad Statistics, Literature, The Universe and Stuff on August 23, 2012 by telescoper

A few days ago an article appeared on the BBC website that discussed the enduring appeal of Sherlock Holmes and related this to the processes involved in solving puzzles. That piece makes a number of points I’ve made before, so I thought I’d update and recycle my previous post on that theme. The main reason for doing so is that it gives me yet another chance to pay homage to the brilliant Jeremy Brett who, in my opinion, is unsurpassed in the role of Sherlock Holmes. It also allows me to return to a philosophical theme I visited earlier this week.

One of the things that fascinates me about detective stories (of which I am an avid reader) is how often they use the word “deduction” to describe the logical methods involved in solving a crime. As a matter of fact, what Holmes generally uses is not really deduction at all, but inference (a process which is predominantly inductive).

In deductive reasoning, one tries to tease out the logical consequences of a premise; the resulting conclusions are, generally speaking, more specific than the premise. “If these are the general rules, what are the consequences for this particular situation?” is the kind of question one can answer using deduction.

The kind of reasoning Holmes employs, however, is essentially the opposite of this. The question being answered is of the form: “From a particular set of observations, what can we infer about the more general circumstances relating to them?”.

And for a dramatic illustration of the process of inference, you can see it acted out by the great Jeremy Brett in the first four minutes or so of this clip from the classic Granada TV adaptation of The Hound of the Baskervilles:

I think it’s pretty clear in this case that what’s going on here is a process of inference (i.e. inductive rather than deductive reasoning). It’s also pretty clear, at least to me, that Jeremy Brett’s acting in that scene is utterly superb.

I’m probably labouring the distinction between induction and deduction, but the main reason for doing so is that a great deal of science is fundamentally inferential and, as a consequence, it entails dealing with inferences (or guesses or conjectures) that are inherently uncertain as to their application to real facts. Dealing with these uncertain aspects requires a more general kind of logic than the simple Boolean form employed in deductive reasoning. This side of the scientific method is sadly neglected in most approaches to science education.

In physics, the attitude is usually to establish the rules (“the laws of physics”) as axioms (though perhaps giving some experimental justification). Students are then taught to solve problems which generally involve working out particular consequences of these laws. This is all deductive. I’ve got nothing against this: it is what a great deal of theoretical research in physics is actually like, and it forms an essential part of the training of a physicist.

However, one of the aims of physics – especially fundamental physics – is to try to establish what the laws of nature actually are from observations of particular outcomes. It would be simplistic to say that this was entirely inductive in character. Sometimes deduction plays an important role in scientific discoveries. For example, Albert Einstein deduced his Special Theory of Relativity from a postulate that the speed of light was constant for all observers in uniform relative motion. However, the motivation for this entire chain of reasoning arose from previous studies of electromagnetism which involved a complicated interplay between experiment and theory that eventually led to Maxwell’s equations. Deduction and induction are both involved at some level in a kind of dialectical relationship.

The synthesis of the two approaches requires an evaluation of the evidence the data provide concerning the different theories. This evidence is rarely conclusive, so a wider range of logical possibilities than “true” or “false” needs to be accommodated. Fortunately, there is a quantitative and logically rigorous way of doing this. It is called Bayesian probability. In this way of reasoning, the probability (a number between 0 and 1 attached to a hypothesis, model, or anything that can be described as a logical proposition of some sort) represents the extent to which a given set of data supports the given hypothesis. The calculus of probabilities only reduces to Boolean algebra when the probabilities of all the hypotheses involved are either unity (certainly true) or zero (certainly false). In between “true” and “false” there are varying degrees of “uncertain”, represented by a number between 0 and 1, i.e. the probability.
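For the uninitiated, the rule that drives this kind of updating is Bayes’ theorem which, in standard notation for a hypothesis H and data D, reads

$$ P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)} . $$

The posterior P(H|D) updates the prior P(H) in the light of the data through the likelihood P(D|H). Notice that if the prior is already 0 or 1 then no data can change it: that is precisely the Boolean limit described above.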

Overlooking the importance of inductive reasoning has led to numerous pathological developments that have hindered the growth of science. One example is the widespread and remarkably naive devotion that many scientists have towards the philosophy of the anti-inductivist Karl Popper; his doctrine of falsifiability has led to an unhealthy neglect of an essential fact of probabilistic reasoning, namely that data can make theories more probable. More generally, the rise of the empiricist philosophical tradition that stems from David Hume (another anti-inductivist) spawned the frequentist conception of probability, with its regrettable legacy of confusion and irrationality.

In fact Sherlock Holmes himself explicitly recognizes the importance of inference and rejects the one-sided doctrine of falsification. Here he is in The Adventure of the Cardboard Box (the emphasis is mine):

Let me run over the principal steps. We approached the case, you remember, with an absolutely blank mind, which is always an advantage. We had formed no theories. We were simply there to observe and to draw inferences from our observations. What did we see first? A very placid and respectable lady, who seemed quite innocent of any secret, and a portrait which showed me that she had two younger sisters. It instantly flashed across my mind that the box might have been meant for one of these. I set the idea aside as one which could be disproved or confirmed at our leisure.

My own field of cosmology provides the largest-scale illustration of this process in action. Theorists make postulates about the contents of the Universe and the laws that describe it and try to calculate what measurable consequences their ideas might have. Observers make measurements as best they can, but these are inevitably restricted in number and accuracy by technical considerations. Over the years, theoretical cosmologists deductively explored the possible ways Einstein’s General Theory of Relativity could be applied to the cosmos at large. Eventually a family of theoretical models was constructed, each of which could, in principle, describe a universe with the same basic properties as ours. But determining which, if any, of these models applied to the real thing required more detailed data. For example, observations of the properties of individual galaxies led to the inferred presence of cosmologically important quantities of dark matter. Inference also played a key role in establishing the existence of dark energy as a major part of the overall energy budget of the Universe. The result is that we have now arrived at a standard model of cosmology which accounts pretty well for most relevant data.

Nothing is certain, of course, and this model may well turn out to be flawed in important ways. All the best detective stories have twists in which the favoured theory turns out to be wrong. But although the puzzle isn’t exactly solved, we’ve got good reasons for thinking we’re nearer to at least some of the answers than we were 20 years ago.

I think Sherlock Holmes would have approved.

Kuhn the Irrationalist

Posted in Bad Statistics, The Universe and Stuff on August 19, 2012 by telescoper

There’s an article in today’s Observer marking the 50th anniversary of the publication of Thomas Kuhn’s book The Structure of Scientific Revolutions.  John Naughton, who wrote the piece, claims that this book “changed the way we look at science”. I don’t agree with this view at all, actually. There’s little in Kuhn’s book that isn’t implicit in the writings of Karl Popper and little in Popper’s work that isn’t implicit in the work of a far more important figure in the development of the philosophy of science, David Hume. The key point about all these authors is that they failed to understand the central role played by probability and inductive logic in scientific research. In the following I’ll try to explain how I think it all went wrong. It might help the uninitiated to read an earlier post of mine about the Bayesian interpretation of probability.

It is ironic that the pioneers of probability theory and its application to scientific research, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until relatively recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the other frequentist-inspired techniques that many modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem has argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making this comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
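In symbols, and as a sketch rather than a full treatment: if θ denotes the parameters of interest and α the auxiliary (nuisance) parameters, marginalization amounts to

$$ P(\theta \mid D) = \int P(\theta, \alpha \mid D)\, d\alpha . $$

The auxiliary assumptions are not swept under the carpet; they are integrated over, weighted by how plausible each is in the light of the data.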

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probabilities – logical and factual. Bayesians don’t – and I don’t – think this is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these thinkers have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of the philosophy of science with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed, on the one hand, to accept probability theory (in its frequentist form) but, on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour: on this view, every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
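A textbook illustration (an idealized sketch, not any particular experiment): take a uniform improper prior on a location parameter μ together with a single Gaussian measurement x of known error σ. The prior cannot be normalized over the whole real line, but the posterior

$$ p(\mu \mid x) \propto p(x \mid \mu)\, p(\mu) \propto \exp\left[ -\frac{(x-\mu)^2}{2\sigma^2} \right] $$

is a perfectly proper Gaussian centred on the measurement. The data rescue the formally zero prior.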

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Initially a physicist, Kuhn undoubtedly became a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a “final” theory, and scientific truths are consequently far from absolute, but that doesn’t mean that there is no progress.

Science 2.0 and all that

Posted in Open Access, Science Politics on July 9, 2012 by telescoper

I came across this on Twitter today and thought I’d share it. Although I have written at various times about open access and the virtues of sharing scientific data, I hadn’t realised that such things came under the umbrella of “Science 2.0”, a term which is quite new to me. This post contains some very interesting ideas and information on the subject.

Science 2.0 study

We’re approaching the final stage of our study. So far, we have opened up our bibliography on our Mendeley group here; our notes through this very blog; our model for open science; and our draft policy recommendations for the EU. And we’ve benefited from your comments and insight.

Now, we need your help to improve the evidence about the importance of Science 2.0, if we want policy-makers to take it seriously.

Therefore, we are sharing here the final presentation that we gave to the European Commission, DG RTD.

Help us improve it by gathering more data and evidence showing that Science 2.0 is important and disruptive, and that it’s happening already. In particular, we ask you to share evidence and data on the take-up of Science 2.0: how many scientists are adopting it? With what benefits?

We ask all people interested in Science 2.0 to share the evidence at hand, by adding


A Return to O-levels?

Posted in Education, The Universe and Stuff on June 21, 2012 by telescoper

I woke up this morning as usual to the 7am news on BBC Radio 3, which included an item about how Education Secretary Michael Gove is planning to scrap the current system of GCSE Examinations and replace them with something more like the old GCE O-levels, which oldies like me took way back in the mists of time.

There is a particular angle to this in Wales, because Michael Gove doesn’t have responsibility for education here. That falls to the devolved Welsh Government, and in particular to Leighton Andrews. He’s made it quite clear on Twitter that he has no intention of taking Wales back to O-levels. Most UK media sources – predominantly based in London – seem to have forgotten that Gove speaks for England, not for the whole United Kingdom.

This is not the central issue, however. The question is whether GCSEs are, as Michael Gove claims, “so bad that they’re beyond repair”. Politicians, teachers and educationalists are basically saying that students are doing better; others are saying that the exams are easier. It’s a shouting match that has been going on for years and which achieves very little. I can’t add much to it either, because I’m too old to have done GCSEs – they hadn’t been invented then. I did O-levels.

It does, however, give me the excuse to show you the O-level physics paper I took way back in 1979. I’ve actually posted this before, but it seems topical to put it up again:

You might want to compare this with a recent example of an Edexcel GCSE (Multiple-choice) Physics paper, about which I have also posted previously.

I think most of the questions in the GCSE paper are much easier than the O-level paper above. Worse, there are many that are so sloppily put together that they don’t make any sense at all. Take Question 1:

I suppose the answer is meant to be C, but since it doesn’t say that A is the orbit of a planet, as far as I’m concerned it might just as well be D. Are we meant to eliminate D simply because it doesn’t have another orbit going through it?

On the other hand, the orbit of a moon around the Sun is in fact similar to the orbit of its planet around the Sun, since the orbital speed and radius of the moon around its planet are smaller than those of the planet around the Sun. At a push, therefore, you could argue that A is the closest choice to a moon’s orbit around the Sun. The real thing would be something close to a circle with a small 4-week wobble superposed.
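If you doubt how small that wobble is, here is a quick numerical sketch in Python (rough Earth-Moon numbers, purely for illustration) of a moon’s heliocentric path:

import numpy as np

# Rough illustrative values, not ephemeris data:
t = np.linspace(0.0, 365.25, 1000)   # one year, sampled in days
a_p, a_m = 1.0, 0.00257              # orbital radii in AU (planet, moon)
w_p = 2 * np.pi / 365.25             # planet's angular rate (rad/day)
w_m = 2 * np.pi / 27.3               # moon's angular rate about its planet

# Heliocentric path of the moon: the planet's circle plus a small epicycle
x = a_p * np.cos(w_p * t) + a_m * np.cos(w_m * t)
y = a_p * np.sin(w_p * t) + a_m * np.sin(w_m * t)

r = np.hypot(x, y)
print(f"wobble amplitude: {r.max() - r.min():.4f} AU")  # about 0.005 AU

A departure from a circle of about half of one per cent; drawn at the scale of the exam diagram it would be indistinguishable from orbit A.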

You might say I’m being pedantic, but the whole point of exam questions is that they shouldn’t be open to ambiguities like this, at least if they’re science exams. I can imagine bright and knowledgeable students getting thoroughly confused by this question, and many of the others on the paper.

Here’s a couple more, from the “Advanced” section:

The answer to Q30 is, presumably, A. But do any scientists really think that galaxies are “moving away from the origin of the Big Bang”?  I’m worried that this implies that the Big Bang was located at a specific point. Is that what they’re teaching?

Bearing in mind that only one answer is supposed to be right, the answer to Q31 is presumably D. But is there really no evidence from “nebulae” that supports the Big Bang theory? The expansion of the Universe was discovered by observing things Hubble called “nebulae”…

I’m all in favour of school students being introduced to fundamental things such as cosmology and particle physics, but my deep worry is that this is being done at the expense of learning any real physics at all and is in any case done in a garbled and nonsensical way.

Lest I be accused of an astronomy-related bias, anyone care to try finding a correct answer to this question?

The more of this kind of stuff I see, the more admiration I have for the students coming to study physics and astronomy at University. How they managed to learn anything at all given the dire state of science education represented by this paper is really quite remarkable.

Ultimately, however, the issue is not whether we have GCSEs or O-level examinations. There’s already far too much emphasis in the education system on assessment instead of learning. That runs all the way through schools and into the university system. The excessive time we spend examining students reduces what we can teach them and turns the students’ learning experience into something resembling a treadmill. I agree that we need better examinations than we have now, but we also need fewer. And we need to stop being obsessed by them.

Those earthly godfathers of Heaven’s lights

Posted in Literature, Poetry, The Universe and Stuff on May 2, 2012 by telescoper

What was it that Ernest Rutherford said about science and stamp-collecting? It seems Shakespeare had much the same idea!

Study is like the heaven’s glorious sun,
That will not be deep-search’d with saucy looks;
Small have continual plodders ever won,
Save base authority from others’ books.
These earthly godfathers of heaven’s lights
That give a name to every fixed star,
Have no more profit of their shining nights
Than those that walk and wot not what they are.

from Love’s Labour’s Lost (Act I, Scene I) by William Shakespeare.

P.S. “wot” in the last line is an archaic form of the verb “wit”, meaning “to know”; cf “I wot not what I ought to have braught” from A Midsummer Night’s Dream.

My Guardian Science Blog…

Posted in Open Access on April 20, 2012 by telescoper

Just a very quick post to direct you to a piece by me on the topic of Open Access and the Academic Journal Racket, which appeared today in the Grauniad Guardian Science Blog.

Here’s a taster, but for the whole thing you’ll have to go here.


Lecture less, teach more…

Posted in Education on January 2, 2012 by telescoper

I was just about to go to the shops just now, but the weather is so extreme – dark apocalyptic skies and violent hailstorms – that I thought I’d have a quick go on the blog in the hope that things quieten down a little. I was going to write something a bit earlier, as I was up at 7am, but all that came into my head were dark imaginings about the future and I didn’t want to depress myself and everyone else by going on about them. The e-astronomer has already done something along those lines anyway.

Fortunately I saw something on Twitter that is a more appropriate theme for a blog post, namely a very interesting article about the role of lectures in university physics education. This is a topic I feel very strongly about, and I agree with most of what the article says, which is basically that the traditional lecture format is a very ineffective way of teaching physics. I wouldn’t go as far as to say that lectures are inherently useless, but I think they should be used in a very different way from the way they are used now.

When I was an undergraduate, in the dim and distant past, I attended lectures assiduously because that was expected of students. To put it bluntly, though, I don’t think I ever learned anything much from doing so. My real learning was done back in my room, with books and problem sheets as well as my lecture notes, trying to figure out how the physics all went together with other things I had learned, and how to apply it in interesting situations. Sometimes the lecture notes were useful, sometimes not, but I never felt that I had learned anything until I was confident that I knew how to apply the new concepts in solving problems.

But I did find some lectures very enjoyable and worthwhile, because some lecturers were good at making students feel interested in the subject. The enthusiasm and depth of understanding conveyed by someone who has devoted their life to the study of a subject can be infectious, and a very enjoyable form of entertainment in its own right. That’s why public lectures remain popular; their intrinsic educational value is limited, but they serve to stimulate the audience to find out more. That’s if they’re good, of course. They can have the opposite effect also.

At Cardiff – like other universities – we hand out questionnaires to students to get feedback on lecturers. Usually the thing that stands out as making one lecturer more popular than others is their enthusiasm. Quite rightly so. If someone who has made a career out of the subject can’t be enthusiastic, why on Earth should the students?

For other comments on what makes a good lecture, see here.

What makes a lecture useless is when it is used simply to transfer material from the lecturer to the student, without passing through the mind of either participant. Slavishly copying detailed notes seems to me a remarkably pointless activity, although taking notes of the key points in a lecture devoted primarily to concepts and demonstrations is far from that. Far better to learn to use resources such as textbooks and internet sites effectively than to endure an hour’s dictation. We don’t want our students to learn physics by rote; we want them to learn to think like physicists!

While I’m on about lectures, I’ll also add that I think the increasing use of Powerpoint in lectures has its downside too. I started using it when I moved to Cardiff, but never felt comfortable with it as a medium for teaching physics. This year I’m going to scrap it. I would revert to “chalk-and-talk” if we had any blackboards, so I’ll have to make do with those hideous whiteboard things. Not all progress is good progress.

Anyway, what we’ve recently done with our new courses in the School of Physics & Astronomy at Cardiff University is to start to move away from an over-reliance on lectures. One way we’ve done this is to merge some of our smaller modules. Whereas a 10-credit module used to have two lectures a week, the new 20-credit modules now have the same number of lectures, complemented by two hours of problems classes in which the students work through exercises with staff members lending assistance. Initial reaction from the students is positive, though there have been some teething troubles. We’ll just have to wait for the examination results to see how well it has worked.

I dare say other departments around the country are making similar changes in teaching methods in response to the availability of new technologies and changes to the school curriculum. But of course it’s a path that others have trodden before. It’s good to have the chance to end by congratulating Derek Raine of the University of Leicester for his MBE in the New Year Honours List for his contributions to science education. He was arguing for a different approach to physics teaching when many of us were still in short pants. It’s just a pity we’ve taken such a long time to realise he was right.

Now the sky’s blue so I can go and do my shopping. Toodle-pip!

A Poll about Peer Review

Posted in Science Politics on September 13, 2011 by telescoper

Anxious not to let the momentum of the discussion about scientific publishing dissipate, I thought I’d try a quick poll to see what people think about the issue of peer review. In my earlier posts I’ve advanced the view that, at least in the subject I work in (astrophysics), peer review achieves very little. Given that it is also extremely expensive when done by traditional journals, I think it could be replaced by a kind of crowd-sourcing, in which papers are put on an open-access archive or repository of some sort, where they can be commented upon by the community and cited by other researchers. If you like, a sort of “arXiv plus”. Good papers will attract attention, poor ones will disappear. Such a system also has the advantage of guaranteeing open public access to research papers (although not necessarily to submission, which would have to be restricted to registered users only).

However, this is all just my view and I really have no idea how strongly others rate the current system of peer review. The following poll is not very scientific, but I’ve tried to include a reasonably representative range of views, from “everything’s OK – let’s keep the current system” to the radical suggestion I make above.

Of course, if you have other views about peer review or academic publishing generally, please feel free to post them through the comments box.