One of my very first blog posts (from way back in 2008) was inspired by an old book of poems by William Wordsworth that I’ve had since I was a child. I was reading it again this evening and came across this short excerpt, near the end of the book, from The Excursion, and entitled for the purposes of the book The Universe a Shell. It struck me as having a message for anyone who works on the science of things either too big or too small to be sensed directly on a human scale, so I thought I’d post it.
I decided to scan it in rather than copy it from elsewhere on the net, as I really love the look of that old faded typeface on the yellowing paper, even if it is a bit wonky because it went over two pages. I’ve been fond of Wordsworth for as long as I can remember and, like a few other things, that’s something I’ll never feel the need to apologize for…
Yesterday we hosted a seminar by João Magueijo from Imperial College. It was a really interesting talk, but the visit also gave a number of staff and students, including myself, the chance to chat to João about various things. In my case that primarily meant catching up on one another’s news, since we haven’t talked since early summer and a lot has happened since then. Then we had drinks, more drinks, dinner, drinks and then cocktails, finishing about 2am. A fairly standard night out with João, actually.
Among the topics discussed in the course of an increasingly drunken conversation was the fact that physicist Stephon Alexander had recently moved to Dartmouth College, a prestigious Ivy League institution in New Hampshire. I don’t know Stephon very well at all as I don’t really work in the same area as him. In fact, we’ve only ever met once – at a Cosmology School in Morocco (in 1996 or thereabouts); he was a graduate student and I was giving some lectures. On the left you can see a snap of him I took at that time. Can that really have been so long ago?
Anyway, I’ll resist the temptation to bemoan the passage of time and all that and get back to the point which is the connection that formed in my head between Stephon, yesterday’s post about the trials and tribulations facing prospective PhD students, and an older post of mine about the importance of not forgetting to live a life while you do a PhD.
The point is that although there are many things that may deter or prevent an undergraduate from taking the plunge into graduate studies, one thing shouldn’t put you off and that is the belief that doing a PhD is like joining a monastery in that it requires you to give up a lot of other things and retreat from the outside world. Frankly, that’s bollocks. If I’m permitted to quote myself:
I had plenty of outside interests (including music, sport and nightlife) and took time out regularly to indulge them. I didn’t – and still don’t – feel any guilt about doing that. I’m not a robot. And neither are you.
In other words, doing a PhD does not require you to give up the things that make life worth living. Actually, if you’re doing a physics PhD then physics itself should be one of the things that make life worth living for you, so I should rephrase that as “giving up any of the other things that make life worth living”.
Having a wide range of experiences and interests to draw on can even help with your research:
In fact, I can think of many times during my graduate studies when I was completely stuck on a problem – to the extent that it was seriously bothering me. On such occasions I learned to take a break. I often found that going for a walk, doing a crossword, or just trying to think about something else for a while, allowed me to return to the problem fresher and with new ideas. I think the brain gets into a rut if you try to make it work in one mode all the time.
I’d say that to be a good research student by no means requires you to be a monomaniac. And this is where Stephon comes in. As well as being a Professor of Theoretical Physics, Stephon is an extremely talented Jazz musician. He’s even had saxophone lessons from the great Ornette Coleman. I have to admit he has a few technical problems with his instrument in this clip, but I’m using him as an example here because I also love Jazz and, although I have a negligible amount of talent as a musician, have a rudimentary knowledge of how to play the saxophone. In fact, I remember chatting to him in a bar in Casablanca way back in ’96, and music was the sole topic of conversation.
Anyway, in the following clip Stephon talks about how music actually helped him solve a research problem. It’s basically an extended riff on the opening notes of the John Coltrane classic Giant Steps which, incidentally, I posted about here.
I’m one of those old-fashioned types who still gets an email from the arXiv every morning notifying me of the latest contributions and listing their abstracts. I still prefer to get my daily update that way than via logging onto the website, although I suspect that’s really force of habit more than anything. The emails are longer these days than they used to be, of course, so now I only manage a quick skim but it’s still a worthwhile exercise.
I have noticed over the twenty-odd years that I’ve been subscribing to this service that as well as being more numerous now, abstracts are also unquestionably longer (at least on astro-ph), to the extent that one sees the dreaded “[abridged]”, indicating that the (approximately 20-line) length limit has been exceeded, much more frequently now than in the past.
Without criticising individual papers, it does seem to me that excessively long and ponderous abstracts are likely to be counter-productive. The whole point of an abstract is that it is a sort of executive summary of the paper, which is supposed to convince the reader that the whole paper is worth reading. Given the number of papers there are flying around, a short pithy abstract with a high density of key ideas and results is much more likely to get people reading further than one that waffles on and on about “discussing” and “constraining” this, that, or the other. Abstracts should be about answering questions, not merely addressing them.
Another mistake that some abstract writers make is to write the abstract as if it were the introduction, which isn’t the point at all. The first few sentences of the abstract should establish why the topic is interesting, but that doesn’t mean it’s meant to be a mini-literature review. References in the abstracts are best avoided altogether, in my opinion.
When so many experienced professional scientists write poor abstracts it’s hardly surprising that our students also struggle to compose good ones for, e.g., project reports. The best advice I can offer is always write the abstract last of all, when you know exactly what is in the rest of the paper. Incidentally, it is often a good idea to write the conclusions first…
Once you have finished everything else then set yourself the task of making your abstract as brief as possible but ensure that it answers the following questions (in no more than a couple of sentences each):
1. Why is the topic of the paper interesting? What is the question you’re answering? Summarize the background.
2. What did you do? What techniques/data did you use? Summarize the method.
3. What were your results? Summarize the key results.
4. What are the wider implications of your results? In particular, how do they answer the questions in 1?
If your abstract comes out more than 20 lines long then cut it. If one of the four sections is much longer than the others then chop it mercilessly to restore the balance. The shorter the abstract the better it is, in my view, although perhaps you don’t have to go this far…
Come the revolution, when all papers will be available online, the abstract will be even more important in getting your work recognized. Digital open access publishing will increase the amount of stuff “out there”, and a good abstract is going to be essential to raise your paper’s signal above the noise level.
Abstracts no doubt play different roles in different fields. I understand that in some disciplines abstracts are even actually the primary mode of publication. I think the guidelines above are pretty good for astrophysics, physics generally, and perhaps even most physical sciences. I’d be interested to hear from folk working in other disciplines how they might be modified to suit their requirements, so please feel free to comment below.
As if this week wasn’t busy enough, I’ve just received back the student questionnaires for my second-year module The Physics of Fields and Flows (which includes some theoretical physics techniques, such as vector calculus and Fourier methods, together with applications to fluid flow, electromagnetism and a few other things). I’ve only just taken up this module this year and was planning to prepare it over the summer, but circumstances rather intervened and I’ve had to put it together more-or-less on the fly. I was, therefore, not inconsiderably apprehensive about the reaction I’d get from the students.
Fortunately most of the comments were fairly positive, although there were some very useful constructive criticisms, which I’ll definitely take into account for the rest of the term.
However, one recurring comment was that I write too fast on the whiteboard. In fact I go far more slowly than the lecturers I had at University. That brings me back to an old post I did some time ago about lecture notes.
I won’t repeat the entire content of my earlier discussion, but one of the main points I made in that was about how inefficient many students are at taking notes during lectures, so much so that the effort of copying things onto paper must surely prevent them absorbing the intellectual content of the lecture.
I dealt with this problem when I was an undergraduate by learning to write very quickly without looking at the paper as I did so. That way I didn’t waste time moving my head to and fro between paper and screen or blackboard. Of course, the notes I produced using this method weren’t exactly aesthetically pleasing, but my handwriting is awful at the best of times so that didn’t make much difference to me. I always wrote my notes up more neatly after the lecture anyway. But the great advantage was that I could write down everything in real time without this interfering with my ability to listen to what the lecturer was saying.
An alternative to this approach is to learn shorthand, or invent your own form of abbreviated language. This approach is, however, unlikely to help you take down mathematical equations quickly.
My experience nowadays is that students simply aren’t used to taking notes like this – I suppose because they get given so many powerpoint presentations or other kinds of handout – so they struggle to cope with the old-fashioned chalk-and-talk style of teaching that some lecturers still prefer. That’s probably because they get much less practice at school than my generation did. Most of my school education was done via the blackboard.
Nowadays, most lecturers use more “modern” methods than this. Many lecturers use powerpoint, and often they give copies of the slides to students. Others give out complete sets of printed notes before, during, or after lectures. That’s all very well, I think, but what are the students supposed to be doing during the lecture if you do that? Listen, of course, but if there is to be a long-term benefit they should take notes too.
Even if I hand out copies of slides or other notes, I always encourage my students to make their own independent set of notes, as complete as possible. I don’t mean copying down what they see on the screen and what they may have on paper already, but trying to write down what I say as I say it. I don’t think many take that advice, which means many of the spoken illustrations and explanations I give don’t find their way into any long-term record of the lecture.
And if the lecturer just reads out the printed notes, adding nothing by way of illustration or explanation, then the audience is bound to get bored very quickly.
My argument, then, is that regardless of what technology the lecturer uses, and whether or not he or she gives out printed notes, if the students can’t take notes accurately and efficiently then lecturing is a complete waste of time. In fact, for the module I’m doing now I don’t hand out lecture notes at all during the lectures, although I do post lecture summaries and answers to the exercises online after they’ve been done.
I like lecturing, because I like talking about physics and astronomy, but as I’ve got older I’ve become less convinced that lectures play a useful role in actually teaching anything. I think we should use lectures more sparingly, relying more on problem-based learning to instil proper understanding. When we do give lectures, they should focus much more on stimulating interest by being entertaining and thought-provoking. They should not be for the routine transmission of information, which is far too often the default.
I’m not saying we should scrap lectures altogether. At the very least they have the advantage of giving the students a shared experience, which is good for networking and building a group identity. Some students probably get a lot out of lectures anyway, perhaps more than I did when I was their age. But different people benefit from different styles of teaching, so we need to move away from lecturing as the default option.
I don’t think I ever learned very much about physics from lectures, but I’m nevertheless glad I learned how to take notes the way I did because I find it useful in all kinds of situations. Effective note-taking is definitely a transferable skill, but it’s also a dying art.
I spent quite some time this morning going over some coursework problems with my second-year Physics class. It’s quite a big course – about 100 students take it – but I mark all the coursework myself so as to get a picture of what the students are finding easy and what difficult. After returning the marked scripts I then go through general matters arising with them, as well as making the solutions available on our on-line system called Learning Central.
Anyway, this morning I decided to devote quite a bit of time to some tips about how to tackle physics problems, not only in terms of how to solve them but also how to present the answer in an appropriate way.
I based this around the famous Feynman Problem-Solving Algorithm: (1) write down the problem; (2) think very hard; (3) write down the answer. That may seem either arrogant or facetious, or just a bit of a joke, but the joke is really just the middle bit. Feynman’s advice on points 1 and 3 is absolutely spot on and worth repeating many times to an audience of physics students.
I’m a throwback to an older style of school education when the approach to solving unseen mathematical or scientific problems was emphasized much more than it is now. Nowadays much more detailed instructions are given in School examinations than in my day, often to the extent that students are only required to fill in blanks in a solution that has already been mapped out.
I find that many, particularly first-year, students struggle when confronted with a problem with nothing but a blank sheet of paper to write the solution on. The biggest problem we face in physics education, in my view, is not the lack of mathematical skill or background scientific knowledge needed to perform calculations, but a lack of experience of how to set the problem up in the first place and a consequent uncertainty about, or even fear of, how to start. I call this “blank paper syndrome”.
In this context, Feynman’s advice is the key to the first step of solving a problem. When I give tips to students I usually make the first step a bit more general, however. It’s important to read the question too.
The middle step is more difficult and often relies on flair or the ability to engage in lateral thinking, which some people do more easily than others, but that does not mean it can’t be nurtured. The key part is to look at what you wrote down in the first step, and then apply your little grey cells to teasing out – with the aid of your physics knowledge – things that can lead you to the answer, perhaps via some intermediate quantities not given directly in the question. This is the part where some students get stuck and what one often finds is an impenetrable jumble of mathematical symbols swirling around randomly on the page.
Everyone gets stuck sometimes, but you can do yourself a big favour by at least putting some words in amongst the algebra to explain what it is you were attempting to do. That way, even if you get it wrong, you can be given some credit for having an idea of what direction you were thinking of travelling.
The last of Feynman’s steps is also important. I lost count of the coursework attempts I marked this week in which the student got almost to the end, but didn’t finish with a clear statement of the answer to the question posed and just left a formula dangling. Perhaps it’s because the students might have forgotten what they started out trying to do, but it seems very curious to me to get so far into a solution without making absolutely sure you score the points. Having done all the hard work, you should learn to savour the finale in which you write “Therefore the answer is…” or “This proves the required result”. Scripts that don’t do this are like detective stories missing the last few pages in which the name of the murderer is finally revealed.
So, putting all these together, here are the three tips I gave to my undergraduate students this morning.
Read the question! Some solutions were to problems other than that which was posed. Make sure you read the question carefully. A good habit to get into is first to translate everything given in the question into mathematical form and define any variables you need right at the outset. Also drawing a diagram helps a lot in visualizing the situation, especially helping to elucidate any relevant symmetries.
Remember to explain your reasoning when doing a mathematical solution. Sometimes it is very difficult to understand what you’re trying to do from the maths alone, which makes it difficult to give partial credit if you are trying to do the right thing but just make, e.g., a sign error.
Finish your solution appropriately by stating the answer clearly (and, where relevant, in correct units). Do not let your solution fizzle out – make sure the marker knows you have reached the end and that you have done what was requested.
There are other tips I might add – such as checking answers by doing the numerical parts at least twice on your calculator and thinking about whether the order-of-magnitude of the answer is physically reasonable – but these are minor compared to the overall strategy.
And another thing is not to be discouraged if you find physics problems difficult. Never give up without a fight. It’s only by trying difficult things that you can improve your ability by learning from your mistakes. It’s not the job of a physics lecturer to make physics seem easy but to encourage you to believe that you can do things that are difficult.
So anyway that’s my bit of “reflective practice” for the day. I’m sure there’ll be other folk reading this who have other tips for solving mathematical and scientific problems, in which case feel free to add them through the comments box.
I’ve been preparing material for my new 2nd year lecture course module The Physics of Fields and Flows, which starts next week. The idea of this is to put together some material on electromagnetism and fluid mechanics in a way that illustrates the connections between them as well as developing proficiency in the mathematics that underpins them, namely vector calculus. Anyway, in the course of putting together the notes and exercises it occurred to me to have a look at the stuff I was given when I was in the 2nd year at university, way back in 1983-4. When I opened the file I found this problem, which caused me a great deal of trouble when I tried to do it all those years ago. It’s from an old Cambridge Part IB Advanced Physics paper. See what you can make of it…
The other day I had a slight disagreement with a colleague of mine about the best advice to give to new PhD students about how to tackle their research. Talking to a few other members of staff about it subsequently has convinced me that there isn’t really a consensus about it and it might therefore be worth a quick post to see what others think.
Basically the issue is whether a new research student should try to get into “hands-on” research as soon as he or she starts, or whether it’s better to spend most of the initial phase in preparation: reading all the literature, learning the techniques required, taking advanced theory courses, and so on. I know that there’s usually a mixture of these two approaches, and it will vary hugely from one discipline to another, and especially between theory and experiment, but the question is which one do you think should dominate early on?
My view of this is coloured by my own experience as a PhD (or rather DPhil) student twenty-five years ago. I went directly from a three-year undergraduate degree to a three-year postgraduate degree. I did a little bit of background reading over the summer before I started graduate studies, but basically went straight into trying to solve a problem my supervisor gave me when I arrived at Sussex to start my DPhil. I had to learn quite a lot of stuff as I went along in order to get on, which I did in a way that wasn’t at all systematic.
Fortunately I did manage to crack the problem I was given, with the consequence that I got a publication out quite early during my thesis period. Looking back on it, I even think that I was helped by the fact that I was too ignorant to realise how difficult more expert people thought the problem was. I didn’t know enough to be frightened. That’s the drawback with the approach of reading everything about a field before you have a go yourself…
In the case of the problem I had to solve, which was actually more to do with applied probability theory than physics, I managed to find (pretty much by guesswork) a cute mathematical trick that turned out to finesse the difficult parts of the calculation I had to do. I really don’t think I would have had the nerve to try such a trick if I had read all the difficult technical literature on the subject.
So I definitely benefited from the approach of diving headlong straight into the detail, but I’m very aware that it’s difficult to argue from the particular to the general. Clearly research students need to do some groundwork; they have to acquire a toolbox of some sort and know enough about the field to understand what’s worth doing. But what I’m saying is that sometimes you can know too much. All that literature can weigh you down so much that it actually stifles rather than nurtures your ability to do research. But then complete ignorance is no good either. How do you judge the right balance?
I’d be interested in comments on this, especially to what extent it is an issue in fields other than astrophysics.
A few days ago an article appeared on the BBC website that discussed the enduring appeal of Sherlock Holmes and related this to the processes involved in solving puzzles. That piece makes a number of points I’ve made before, so I thought I’d update and recycle my previous post on that theme. The main reason for doing so is that it gives me yet another chance to pay homage to the brilliant Jeremy Brett who, in my opinion, is unsurpassed in the role of Sherlock Holmes. It also allows me to return to a philosophical theme I visited earlier this week.
One of the things that fascinates me about detective stories (of which I am an avid reader) is how often they use the word “deduction” to describe the logical methods involved in solving a crime. As a matter of fact, what Holmes generally uses is not really deduction at all, but inference (a process which is predominantly inductive).
In deductive reasoning, one tries to tease out the logical consequences of a premise; the resulting conclusions are, generally speaking, more specific than the premise. “If these are the general rules, what are the consequences for this particular situation?” is the kind of question one can answer using deduction.
The kind of reasoning Holmes employs, however, is essentially opposite to this. The question being answered is of the form: “From a particular set of observations, what can we infer about the more general circumstances that relate to them?”
And for a dramatic illustration of the process of inference, you can see it acted out by the great Jeremy Brett in the first four minutes or so of this clip from the classic Granada TV adaptation of The Hound of the Baskervilles:
I think it’s pretty clear in this case that what’s going on here is a process of inference (i.e. inductive rather than deductive reasoning). It’s also pretty clear, at least to me, that Jeremy Brett’s acting in that scene is utterly superb.
I’m probably labouring the distinction between induction and deduction, but the main purpose in doing so is that a great deal of science is fundamentally inferential and, as a consequence, it entails dealing with inferences (or guesses or conjectures) that are inherently uncertain as to their application to real facts. Dealing with these uncertain aspects requires a more general kind of logic than the simple Boolean form employed in deductive reasoning. This side of the scientific method is sadly neglected in most approaches to science education.
In physics, the attitude is usually to establish the rules (“the laws of physics”) as axioms (though perhaps giving some experimental justification). Students are then taught to solve problems which generally involve working out particular consequences of these laws. This is all deductive. I’ve got nothing against this: it is what a great deal of theoretical research in physics is actually like, and it forms an essential part of the training of a physicist.
However, one of the aims of physics – especially fundamental physics – is to try to establish what the laws of nature actually are from observations of particular outcomes. It would be simplistic to say that this was entirely inductive in character. Sometimes deduction plays an important role in scientific discoveries. For example, Albert Einstein deduced his Special Theory of Relativity from a postulate that the speed of light was constant for all observers in uniform relative motion. However, the motivation for this entire chain of reasoning arose from previous studies of electromagnetism, which involved a complicated interplay between experiment and theory that eventually led to Maxwell’s equations. Deduction and induction are both involved at some level in a kind of dialectical relationship.
The synthesis of the two approaches requires an evaluation of the evidence the data provide concerning the different theories. This evidence is rarely conclusive, so a wider range of logical possibilities than “true” or “false” needs to be accommodated. Fortunately, there is a quantitative and logically rigorous way of doing this. It is called Bayesian probability. In this way of reasoning, the probability (a number between 0 and 1 attached to a hypothesis, model, or anything that can be described as a logical proposition of some sort) represents the extent to which a given set of data supports the given hypothesis. The calculus of probabilities only reduces to Boolean algebra when the probabilities of all hypotheses involved are either unity (certainly true) or zero (certainly false). In between “true” and “false” there are varying degrees of “uncertain”, represented by a number between 0 and 1, i.e. the probability.
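To make this concrete, here is a minimal sketch of Bayesian updating for two rival hypotheses; the function name and all the numbers are my own illustrative choices, not anything from a real analysis.

```python
# A minimal sketch of Bayesian updating for two exhaustive hypotheses,
# H1 and H2, given some data D.
def posterior(prior_h1, like_h1, like_h2):
    """Return P(H1|D) by Bayes' theorem, assuming H2 is the negation of H1."""
    prior_h2 = 1.0 - prior_h1
    evidence = prior_h1 * like_h1 + prior_h2 * like_h2  # P(D), by the sum rule
    return prior_h1 * like_h1 / evidence

# Start undecided (prior 0.5); suppose the data are three times more
# probable under H1 than under H2.
p = posterior(0.5, 0.3, 0.1)
# p = 0.15 / 0.20 = 0.75: the data have made H1 more probable, which is
# exactly the kind of statement a strict falsificationist cannot make.
```

Nothing here goes beyond the product and sum rules; only when the likelihoods push the posterior all the way to 0 or 1 does the calculus reduce to the Boolean logic of deduction.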
Overlooking the importance of inductive reasoning has led to numerous pathological developments that have hindered the growth of science. One example is the widespread and remarkably naive devotion that many scientists have towards the philosophy of the anti-inductivist Karl Popper; his doctrine of falsifiability has led to an unhealthy neglect of an essential fact of probabilistic reasoning, namely that data can make theories more probable. More generally, the rise of the empiricist philosophical tradition that stems from David Hume (another anti-inductivist) spawned the frequentist conception of probability, with its regrettable legacy of confusion and irrationality.
In fact Sherlock Holmes himself explicitly recognizes the importance of inference and rejects the one-sided doctrine of falsification. Here he is in The Adventure of the Cardboard Box (the emphasis is mine):
Let me run over the principal steps. We approached the case, you remember, with an absolutely blank mind, which is always an advantage. We had formed no theories. We were simply there to observe and to draw inferences from our observations. What did we see first? A very placid and respectable lady, who seemed quite innocent of any secret, and a portrait which showed me that she had two younger sisters. It instantly flashed across my mind that the box might have been meant for one of these. I set the idea aside as one which could be disproved or confirmed at our leisure.
My own field of cosmology provides the largest-scale illustration of this process in action. Theorists make postulates about the contents of the Universe and the laws that describe it, and try to calculate what measurable consequences their ideas might have. Observers make measurements as best they can, but these are inevitably restricted in number and accuracy by technical considerations. Over the years, theoretical cosmologists deductively explored the possible ways Einstein’s General Theory of Relativity could be applied to the cosmos at large. Eventually a family of theoretical models was constructed, each of which could, in principle, describe a universe with the same basic properties as ours. But determining which, if any, of these models applied to the real thing required more detailed data. For example, observations of the properties of individual galaxies led to the inferred presence of cosmologically important quantities of dark matter. Inference also played a key role in establishing the existence of dark energy as a major part of the overall energy budget of the Universe. The result is that we have now arrived at a standard model of cosmology which accounts pretty well for most relevant data.
Nothing is certain, of course, and this model may well turn out to be flawed in important ways. All the best detective stories have twists in which the favoured theory turns out to be wrong. But although the puzzle isn’t exactly solved, we’ve got good reasons for thinking we’re nearer to at least some of the answers than we were 20 years ago.
It being A-level results day, I thought I’d try a little experiment and use this blog to broadcast an unofficial announcement that, owing to additional government funding for high-achieving subjects, the School of Physics and Astronomy at Cardiff University is able to offer extra places on all undergraduate courses starting this September for suitably qualified students.
An institutional review of intake numbers by HEFCW (Higher Education Funding Council for Wales) resulted in the award of extra funded places for undergraduate entry in 2012. Of particular benefit are those STEM (science, technology, engineering and mathematics) subjects seen as strategically important by the UK government. Therefore, the School of Physics and Astronomy is pleased to announce acceptance of late UCAS applications from those candidates expected to achieve our entrance requirements.
Those current applicants who have already applied through the standard UCAS procedure and who have been offered places need not be concerned as these new places are IN ADDITION to those we were expecting to fill.
Applications can be made through Clearing on UCAS after discussions with the Admissions Team.
Course codes (for information)
BSc Physics (F300) and BSc Astrophysics (F511)
MPhys Physics (F303) and MPhys Astrophysics (F510)
BSc Physics with professional placement (F302)
BSc Theoretical and Computational Physics (F340)
BSc Physics with Medical Physics (F350)
Course enquiries can be made to Dr Carole Tucker, Undergraduate Admissions Tutor, via email to Physics-ug@cardiff.ac.uk or call the admissions teams on 029 2087 4144 / 6457.
I’ve been trying to make myself useful over the last few days thinking about the new module I’m supposed to start teaching in October. I’m a bit daunted by it to be honest. The title is The Physics of Fields and Flows and it will be taken by students when they return to start their second year after the summer break. It’s twice the size of our usual modules, which means a lot of teaching and it’s all new for me, which means a lot of preparation.
The idea behind introducing this module was to teach a number of things together which previously had been taught in separate modules, specifically electromagnetism and vector calculus, or not at all, e.g. fluid mechanics. I’m not sure when or why classical fluid mechanics dropped out the syllabus, but I think it’s an essential part of a physics curriculum in its own right and also helps develop a physical understanding of the mathematics used to describe electric and magnetic fields. It’s one of the unhappy side-effects of modular teaching that it hides the important underlying connections between apparently disparate phenomena which are the essence of what physics is about.
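The sort of connection I have in mind can be made explicit with a couple of standard results (nothing here is specific to the module): the velocity field of an incompressible fluid and the magnetic field obey the same solenoidal constraint, and the vorticity of a flow is the analogue of the current density in magnetostatics.

```latex
% Both fields are solenoidal:
\nabla \cdot \mathbf{u} = 0, \qquad \nabla \cdot \mathbf{B} = 0.
% Vorticity is the analogue of current density (Ampere's law):
\boldsymbol{\omega} = \nabla \times \mathbf{u}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}.
```

The same vector-calculus machinery – divergence, curl, and the theorems of Gauss and Stokes – therefore does double duty in the two subjects, which is precisely the point of teaching them together.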
Another thing I reckon we don’t do enough of these days is use lecture demonstrations. That’s harder now because we tend to use pooled lecture theatres that don’t have the specialist equipment they might have if they were dedicated to physics lectures only. Practical demonstrations are now usually given second-hand, by using video clips. That’s fine, but not as good as the real thing.
Anyway, it struck me that it would be quite easy to arrange a demonstration of the transition between laminar and turbulent flow using the simple and relatively inexpensive equipment shown in the rather beautiful image. Unfortunately, however, demonstrating this sort of thing isn’t allowed on University premises even for scientific purposes…
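For what it’s worth, the transition itself is governed by the Reynolds number, and a back-of-envelope sketch makes the point; the numbers and threshold values below are my own illustrative choices for water in a pipe, not measurements from any demonstration.

```python
# Back-of-envelope sketch: the Reynolds number Re = rho*v*L/mu controls
# the transition from laminar to turbulent flow. For pipe flow, Re below
# roughly 2000 is laminar and above roughly 4000 is turbulent.
def reynolds(density, velocity, length, viscosity):
    """Dimensionless ratio of inertial to viscous forces."""
    return density * velocity * length / viscosity

RHO_WATER = 1000.0   # kg/m^3
MU_WATER = 1.0e-3    # Pa s
PIPE = 0.02          # pipe diameter in m (2 cm)

slow = reynolds(RHO_WATER, 0.05, PIPE, MU_WATER)  # Re = 1000: laminar
fast = reynolds(RHO_WATER, 1.0, PIPE, MU_WATER)   # Re = 20000: turbulent
```

A factor of twenty in the flow speed is enough to carry the same pipe from one regime to the other, which is why the demonstration is so easy to set up.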
The views presented here are personal and not necessarily those of my employer (or anyone else for that matter).
Feel free to comment on any of the posts on this blog but comments may be moderated; anonymous comments and any considered by me to be vexatious and/or abusive and/or defamatory will not be accepted. I do not necessarily endorse, support, sanction, encourage, verify or agree with the opinions or statements of any information or other content in the comments on this site and do not in any way guarantee their accuracy or reliability.