Archive for Science

The Radio and Microwave Sky from Juno

Posted in The Universe and Stuff on May 16, 2024 by telescoper

I found out about an interesting paper by Anderson et al. at a discussion group this morning. The abstract reads:

We present six nearly full-sky maps made from data taken by radiometers on the Juno satellite during its 5-year flight to Jupiter. The maps represent integrated emission over ∼4% passbands spaced approximately in octaves between 600 MHz and 21.9 GHz. Long time-scale offset drifts are removed in all bands, and, for the two lowest frequency bands, gain drifts are also removed from the maps via a self-calibration algorithm similar to the NPIPE pipeline used by the Planck collaboration. We show that, after this solution is applied, residual noise in the maps is consistent with thermal radiometer noise. We verify our map solutions with several consistency tests and end-to-end simulations. We also estimate the level of pixelization noise and polarization leakage via simulations.

arXiv:2405.08388

For those of you unaware of Juno, it is a NASA space mission (launched in 2011) intended to study the planet Jupiter (which it is still doing). On the way there, however, the spacecraft made continuous measurements of the radiation field around it at radio and microwave frequencies. The work described by Anderson et al. involved turning these observations into maps at a range of frequencies; they also studied the polarization properties of the radiation.

The full maps and other relevant data can be downloaded here. Here are some pretty pictures (the grey bits represent the parts of the sky that were not covered; radio emission from our own Galaxy is the most obvious component at low frequencies, but it looks more complicated at higher frequencies).
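If you fancy exploring the released maps yourself, here is a minimal Python sketch of how one might load and display one of them. This is an illustration under my own assumptions rather than anything from the paper: I am assuming the maps are distributed as HEALPix FITS files (the filename below is hypothetical), in which case the standard healpy package does the job.

import healpy as hp
import matplotlib.pyplot as plt

# Hypothetical filename; substitute whichever band you downloaded.
juno_map = hp.read_map("juno_600MHz_map.fits")  # 1D array of pixel values

# Mollweide projection; a histogram-equalised colour scale brings out
# the Galactic plane against the fainter high-latitude emission.
hp.mollview(juno_map, norm="hist", unit="K", title="Juno radiometer map (illustrative)")
hp.graticule()
plt.show()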

It’s always fun when data sets are used for something so different from the purpose originally intended, and what has come out of this analysis are rather nice maps of the emission from the Milky Way. These might turn out to be useful for many things, such as foreground removal for extragalactic surveys or studies of our own Galaxy.

Cosmology Talks: Cosmological Constraints from BAO

Posted in The Universe and Stuff, YouTube on April 5, 2024 by telescoper

Here’s another video in the Cosmology Talks series curated by Shaun Hotchkiss. This one is very timely after yesterday’s announcement. Here is the description on the YouTube page:

The Dark Energy Spectroscopic Instrument (DESI) has produced cosmological constraints! And it is living up to its name. Two researchers from DESI, Seshadri Nadathur and Andreu Font-Ribera, tell us about DESI’s measurements of the Baryon Acoustic Oscillations (BAO) released today. These results use one full year of DESI data and are the first cosmological constraints from the telescope that have been released. Mostly, it is what you might expect: tighter constraints. However, in the realm of the equation of state of dark energy, they find, even with BAO alone, that there is a hint of evidence for evolving dark energy. When they combine their data with CMB and Supernovae, who both also find small hints of evolving dark energy on their own, the evidence for dark energy not being a cosmological constant jumps as high as 3.9σ with one combination of the datasets. It seems there still is “concordance cosmology”, it’s just not ΛCDM for these datasets. The fact that all three probes are tentatively favouring this is intriguing, as it makes it unlikely to be due to systematic errors in one measurement pipeline.

My own take is that the results are very interesting but I think we need to know a lot more about possible systematics before jumping to conclusions about time-varying dark energy. Am I getting conservative in my old age? These results from DESI do of course further underline the motivation for Euclid (another Stage IV survey), which may have an even better capability to identify departures from the standard model.
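For reference, the “evolving dark energy” in question is usually quantified using the standard two-parameter (CPL) equation of state,

w(a) = w_0 + w_a (1 - a),

where a is the cosmic scale factor (a = 1 today), so that w tends to w_0 now and to w_0 + w_a in the distant past. A cosmological constant corresponds to w_0 = -1 and w_a = 0; the quoted significances are, roughly speaking, a measure of how far the combined fits sit from that point.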

P.S. Here’s a nice graphic showing the cosmic web revealed by the DESI survey:

Euclid on Ice

Posted in Euclid, The Universe and Stuff on March 25, 2024 by telescoper

I thought it would be appropriate to add a little update about the European Space Agency’s Euclid mission. I’ll keep it brief here because you can read the full story on the official website here.

You may have seen in the news that the Euclid telescope has been having an issue with ice forming on surfaces in its optical systems, especially the VIS instrument. This is a common problem with telescopes in space, but the extent of it is not something that can be predicted very accurately in advance so a detailed strategy for dealing with it had to be developed on the go.

The layers of ice that form are very thin – just tens of nanometres thick – but that is enough to blur the images and also to reduce the throughput of the instruments. Given that the objects we want Euclid to see are faint, and that we need very sharp images, this is an issue that must be dealt with.

Soon after launch, the telescope was heated up for a while in order to evaporate as much ice as possible, but it was not known how quickly the ice would return and to what parts of the optical system. After months in the cold of space the instrument scientists now understand the behaviour of the pesky ice a lot better, and have devised a strategy for dealing with it.

The approach is fairly simple in principle: heat the affected instruments up every now and again, then let them cool down so they can operate again, and repeat as necessary as ice re-forms. This involves an interruption in observations, and it is known to work pretty well, but exactly how frequently this de-icing cycle should be implemented, and which parts of the optical system require the treatment, are questions that need to be answered by practical experimentation. The hope is that after a number of operations of this kind, the amount of ice returning each time will gradually reduce. I am not an expert in these things, but I gather from colleagues that the signs are encouraging.

For more details, see here.

UPDATE: The latest news is that the de-icing procedure has worked better than expected! There’s even a video about the result of the process here:

Cosmology Talks – To Infinity and Beyond (Probably)

Posted in mathematics, The Universe and Stuff on March 20, 2024 by telescoper

Here’s an interestingly different talk in the series of Cosmology Talks curated by Shaun Hotchkiss. The speaker, Sylvia Wenmackers, is a philosopher of science. According to the blurb on YouTube:

Her focus is probability and she has worked on a few theories that aim to extend and modify the standard axioms of probability in order to tackle paradoxes related to infinite spaces. In particular there is a paradox of the “infinite fair lottery” where within standard probability it seems impossible to write down a “fair” probability function on the integers. If you give the integers any non-zero probability, the total probability of all integers is unbounded, so the function is not normalisable. If you give the integers zero probability, the total probability of all integers is also zero. No other option seems viable for a fair distribution. This paradox arises in a number of places within cosmology, especially in the context of eternal inflation and a possible multiverse of big bangs bubbling off. If every bubble is to be treated fairly, and there will ultimately be an unbounded number of them, how do we assign probability? The proposed solutions involve hyper-real numbers, such as infinitesimals and infinities with different relative sizes, (reflecting how quickly things converge or diverge respectively). The multiverse has other problems, and other areas of cosmology where this issue arises also have their own problems (e.g. the initial conditions of inflation); however this could very well be part of the way towards fixing the cosmological multiverse.
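To spell out the lottery paradox in symbols (just the standard textbook observation, nothing specific to the talk): suppose we try to give every positive integer n the same probability c. Countable additivity then forces

\sum_{n=1}^{\infty} P(n) = \sum_{n=1}^{\infty} c = \begin{cases} \infty & \text{if } c > 0, \\ 0 & \text{if } c = 0, \end{cases}

so the total can never equal 1: there is no countably additive, uniform (“fair”) probability distribution on a countably infinite set. The hyper-real proposals discussed in the talk get around this by modifying the axioms themselves – allowing infinitesimal probabilities – rather than giving up fairness.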

The paper referred to in the presentation can be found here. There is a lot to digest in this thought-provoking talk, from the starting point on Kolmogorov’s axioms to the application to the multiverse, but this video gives me an excuse to repeat my thoughts on infinities in cosmology.

Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as various kinds of pesky infinities keep cropping up in calculations. In most cases this is symptomatic of deficiencies in the theoretical foundations of the subject. Think of the ‘ultraviolet catastrophe’ of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of relativity predicts that singularities in which physical properties become infinite occur in the centre of black holes and in the Big Bang that kicked our Universe into existence. But even these are regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.

The exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some underlying principle that restricts it to a size our human brains can cope with?

On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who frequently quotes David Hilbert:

The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought

But to every Hilbert there’s an equal and opposite Leibniz:

I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more effectively the perfections of its Author.

You see that it’s an argument with quite a long pedigree!

Many years ago I attended a lecture by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in this ensemble of mini-universes.

One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained within it. We can only live in a domain wherein the laws of physics are compatible with life so it should be no surprise that’s what we find. There is an infinity of dead universes, but we don’t live there.

I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write about or, better still, give a talk about, especially if you enjoy winding people up! What I did want to mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:

  1. An infinite ensemble of realizations
  2. A discretizer
  3. A randomizer

Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two spin states of an electron, up (↑) or down (↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general context required for cosmology, the states are the allowed “laws of physics” (i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the theory that makes it discrete much less secure. In string theory, the number of false vacua is estimated at around 10^500. That’s certainly a very big number, but it’s not infinite so it will do the job needed.

Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assigning of states to elements in the ensemble must be ergodic, meaning that it must visit the entire state space with some probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the (large but finite) complete works of Shakespeare. It’s not enough that there be an infinite number of monkeys and that the works of Shakespeare be finite. The process of typing must also be ergodic.

Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…
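To make the ergodicity point a bit more concrete, here is a toy Python sketch – purely illustrative, and nothing to do with the actual string landscape. The “ergodic” typist hits keys independently and uniformly at random, so a fixed short target string turns up with a frequency that grows towards one as the text gets longer. The “non-ergodic” typist obeys a simple constraint – for convenience I have used one that strikes each key twice in succession, a variant of the adjoining-keys example above – and as a result can never produce a target with no repeated adjacent letters, however long it types.

import random
import string

def random_text(length, rng):
    # Ergodic typist: independent, uniformly random lowercase keystrokes.
    return "".join(rng.choices(string.ascii_lowercase, k=length))

def doubled_text(length, rng):
    # Non-ergodic typist: each key is struck twice in succession, so a
    # substring with no repeated adjacent letters (like "tob") never appears.
    letters = rng.choices(string.ascii_lowercase, k=length // 2)
    return "".join(c * 2 for c in letters)

def hit_rate(make_text, target, length, trials=200, seed=1):
    # Fraction of simulated texts of the given length containing the target.
    rng = random.Random(seed)
    hits = sum(target in make_text(length, rng) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000):
        print(n, hit_rate(random_text, "tob", n), hit_rate(doubled_text, "tob", n))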

More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree with Alfred North Whitehead:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

It’s a quote I think some string theorists might benefit from reading!

Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but with either an infinite number of possible configurations or a non-ergodic selection mechanism, then there would be no guarantee that each possibility would actually happen. The success of this explanation consequently rests on quite stringent assumptions.

I’m a sceptic about this whole scheme for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to a realized infinity of false vacua. If such a thing exists, I’d really like to hear about it!

Irrationalism and Deductivism in Science

Posted in Bad Statistics, The Universe and Stuff on March 11, 2024 by telescoper

I thought I would use today’s post to share the above reading list, which was posted on the wall at the meeting I attended this weekend; it was only two days long and has now finished. Seeing the first book on the list, however, it seems a good idea to follow this up with a brief discussion, largely inspired by David Stove’s book, of some of the philosophical issues raised at the workshop.

It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of their probabilities. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of the philosophy of science, given that science itself, I believe, has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th Century. Much later, the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face, because in reality there are hosts of auxiliary assumptions involved in making the comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities, and then integrate over those that aren’t of direct interest to the comparison with measurement. This is just an expanded version of the idea of marginalization, explained here.
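In symbols (a minimal sketch of the standard manoeuvre, with notation of my own choosing): if θ denotes the parameters of interest and α the nuisance parameters encoding the auxiliary assumptions, then

p(\theta \mid D) = \int p(\theta, \alpha \mid D) \, d\alpha \propto \int p(D \mid \theta, \alpha) \, p(\theta, \alpha) \, d\alpha,

so the auxiliary assumptions are not swept under the carpet but averaged over, weighted by how plausible they are in the light of the data.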

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this distinction is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these thinkers have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper managed, on the one hand, to accept probability theory (in its frequentist form) but, on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different, and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. On this view, every scientific theory begins infinitely improbable and is doomed to remain so.
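The Bayesian counter to this is elementary. Bayes’ theorem for a theory T and data D reads

P(T \mid D) = \frac{P(D \mid T) \, P(T)}{P(D)},

so the theory becomes more probable, P(T|D) > P(T), whenever the data are more probable given the theory than they are on average, i.e. whenever P(D|T) > P(D). Evidence in a theory’s favour therefore does raise its probability – provided the prior P(T) is not exactly zero, which brings us to the next point.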

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
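A standard textbook example (nothing to do with Popper, of course) shows how an improper prior can still yield a perfectly proper posterior. Take a flat prior p(μ) ∝ 1 over the whole real line for the mean μ of a Gaussian with known variance σ². Given n independent measurements with sample mean x̄, the posterior is

p(\mu \mid D) \propto \exp\left[ -\frac{n(\mu - \bar{x})^2}{2\sigma^2} \right],

a normalizable Gaussian centred on x̄ with width σ/√n: the data pull everything back to a perfectly well-defined answer even though the prior could not be normalized.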

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.
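Testability in this sense is naturally quantified by comparing two theories T₁ and T₂ through their posterior odds (again just the standard formula):

\frac{P(T_1 \mid D)}{P(T_2 \mid D)} = \frac{P(D \mid T_1)}{P(D \mid T_2)} \times \frac{P(T_1)}{P(T_2)},

in which the first factor on the right-hand side is the Bayes factor. A theory is testable if conceivable data could shift these odds appreciably one way or the other; the question of whether it has been ‘falsified’ in some absolute sense never needs to arise.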

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn is undoubtedly a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. But one can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules….

The Euclid Survey(s)

Posted in Euclid, The Universe and Stuff on February 27, 2024 by telescoper

Since it’s been a couple of weeks since Euclid commenced its routine survey operations, I thought I would share this little video from the European Space Agency that shows how the surveying will proceed over the next six years with explanatory text adapted from here:

This animation shows the location of the fields on the sky that will be covered by Euclid’s wide (blue) and deep (yellow) surveys. The sky is shown in the Galactic coordinate system, with the bright horizontal band corresponding to the plane of our Milky Way.

The wide survey will cover more than one third of the sky as shown in blue. Other regions are avoided because they are dominated by Milky Way stars and interstellar matter, or by diffuse dust in the Solar System – the so-called zodiacal light. The wide survey is complemented by a deep survey, taking about 10% of the total observing time and repeatedly observing just three patches of the sky called the Euclid Deep Fields, highlighted in yellow.

The Euclid Deep Field North – towards the top left – has an area of 20 square degrees and is located very close to the Northern Ecliptic Pole. The proximity to the ecliptic pole ensures maximum coverage throughout the year; the exact position was chosen to obtain maximum overlap with one of the deep fields surveyed by NASA’s Spitzer Space Telescope.

The Euclid Deep Field Fornax – in the lower right of the image – spans 10 square degrees and is located in the southern constellation Fornax, the furnace. It encompasses the much smaller Chandra Deep Field South, a 0.11 square degree region of the sky that has been extensively surveyed in the past couple of decades with the Chandra and XMM-Newton X-ray observatories, as well as the Hubble Space Telescope and major ground-based telescopes.

The third and largest of the fields is the Euclid Deep Field South – between the Large Magellanic Cloud and the Euclid Deep Field Fornax. It covers 20 square degrees in the southern constellation of Horologium, the pendulum clock. This field has not been covered to date by any deep sky survey and so has huge potential for new, exciting discoveries. It is also planned to be observed from the ground by the Vera C. Rubin Observatory.

P.S. According to my latest calculations, I shall have retired by the time the Wide survey is completed.

The Cost of Imaging Neuroscience

Posted in Open Access on February 13, 2024 by telescoper

Last year I wrote a piece about the resignation of the entire Editorial Board of an Elsevier journal. The main reason for this action was the ‘extreme’ Article Processing Charges imposed by the publisher for so-called Gold Open Access to the papers. As I wrote then, the

… current system of ‘Gold’ Open Access is a scam, and it’s a terrible shame we have ended up having it foisted upon us. Fortunately, being forced to pay APCs of many thousands of euros to publish their papers, researchers are at last starting to realize that they are being ripped off. Recently, the entire Editorial Board of Neuroimage and its sister journal Neuroimage: Reports resigned in protest at the `extreme’ APC levels imposed by the publisher, Elsevier. I’m sure other academics will follow this example, as it becomes more and more obvious that the current arrangements are unsustainable. Previously the profits of the big publishers were hidden in library budgets. Now they are hitting researchers and their grants directly, as authors now have to pay, and people who previously hadn’t thought much about the absurdity of it all are now realizing what a racket academic publishing really is.

Well, the new journal founded by the former Editorial Board of Neuroimage and Neuroimage: Reports has now appeared. It’s called Imaging Neuroscience and its rather rudimentary website can be found here.

Good news, you would think.

But no…

Imaging Neuroscience is itself a Gold Open Access journal which charges an APC of $1,600 per paper. That’s about half what Elsevier were charging ($3,450), but it is still far too high. It simply does not cost this much to publish papers online! (There’s a paper that gives a summary of the commercial costs of different aspects of publishing here.) The journal claims to be non-profit-making, so I’d love to see what they are spending this money on. It can’t be on their website, which is very rudimentary.

It seems that the neuroscientists concerned have just decided to replace Elsevier’s absurd APCs with their own absurd APCs. Oh dear. And they seemed so close to getting it…

The Reinvention of Science

Posted in History, Literature, The Universe and Stuff on January 28, 2024 by telescoper

I’ve known about the existence of this new book for quite a long time – the first two authors are former collaborators of mine and I’m still in fairly regular touch with them – but I only received a copy a few weeks ago. Had I been less busy when it was in the proof stage, I might have been in a position to add to the many generous comments on the back cover from such luminaries as Martin Rees, Jim Peebles, Alan Heavens and my hosts in Barcelona, Licia Verde and Raúl Jiménez. Anyway, now that I’ve read it I’m happy to endorse their enthusiastic comments and to give the book a plug on this blog.

One can summarize The Reinvention of Science as a journey through the history of science from ancient times to modern, signposted by mistakes, fallacies and dogma that have hindered rather than facilitated progress. These are, in other words, not so much milestones as stumbling blocks. Examples include the luminiferous aether and phlogiston, to name but two. These and many other case studies are used to illustrate, for example, how supposedly rational scientists sometimes hold very irrational beliefs and act on them accordingly. The book presents a view of the evolution of science in spite of the suppression of heterodox ideas and the desire of establishment thinkers to maintain the status quo.

The volume covers a vast territory, not limited to astrophysics and cosmology (in which fields the authors specialize). It is a very well-written and enjoyable read that is strong on accuracy as well as being accessible and pedagogical. I congratulate the authors on a really excellent book.

P.S. I am of course sufficiently vain that I looked in the index before reading the book to see if I got a mention and was delighted to see my name listed not once but twice. The first time is in connection with the coverage of the BICEP2 controversy on this very blog, e.g. here. I am pleased because I did feel I was sticking my head above the parapet at the time, but was subsequently vindicated. The second mention is to do with this article which the authors describe as “beautiful”. And I didn’t even pay them! I’m truly flattered.

A Major Merger in Irish Research

Posted in Science Politics on May 18, 2022 by telescoper

Taking a short break from examination matters, I just read a news item announcing a big shake-up in Irish research funding. As part of a new Research and Innovation Strategy, called Impact 2030, it seems the Irish Research Council (IRC) and Science Foundation Ireland (SFI) are to merge to produce a single entity, perhaps as early as next year.

Changes are much needed, especially for science. Science in Ireland is in a dire state of under-investment, especially in basic (i.e. fundamental) research. For many years SFI has only funded applied science, though recently it seems to have shifted its emphasis a little in its latest strategic plan. Currently Ireland spends just 1.1% of its GDP on scientific research and development, and SFI’s exclusive focus on research aligned with industry that can be exploited for short-term commercial gain is making life very difficult for those working in “blue skies” areas, which are largely the ones that draw young people into science, and has consequently driven many researchers in such areas abroad, to the great detriment of Ireland’s standing in the international scientific community.

Here is an excerpt from an old post explaining what I think about the current approach:

For what it’s worth I’ll repeat my own view that “commercially useful” research should not be funded by the taxpayer through research grants. If it’s going to pay off in the short term it should be funded by private investors, venture capitalists of some sort, or perhaps through some form of National Investment Bank. When the public purse is so heavily constrained, it should only be asked to fund those things that can’t in practice be funded any other way. That means long-term, speculative, curiosity driven research.

SFI recently announced a new strategy, to cover the period up to 2025, with plans for 15% annual rises that will boost the agency’s grant spending — the greater part of the SFI budget — from €200 million in 2020 to €376 million by 2025. Much of this is focused in a top-down manner on specific programmes and research centres, but there is at least an acknowledgement of the need to support basic research, including an allocation of €11 million in 2021 for early career researchers. The overall aim is to increase the overall R&D spend from 1.1% of gross domestic product, well below the European average of 2.2%, to 2.5% by 2025. I hope these commitments will be carried forward into the new organization.

The Irish Research Council funds research in all areas, not exclusively applied science, so what little jam it has is spread very thinly. Applying for IRC funding is a lottery, with very few winners and the vast majority rejected without even cursory feedback.

There are two main worries about the fate of the IRC in the merger. One is that research in arts & humanities will suffer as a result of being lumped in with science, and the other is that the culture of short-termism will be adopted, so that the small amount of basic research that the IRC currently funds will be sacrificed on the altar of quick commercial gain.

There is a welcome emphasis in the Impact 2030 document on early career researchers, especially at doctoral level, where it is currently difficult to find funding for excellent graduate students. It has to be said, though, that there are problems in this area which are much wider than the shortage of appropriate schemes. The cost of living in Ireland is such that PhD stipends are too low to provide an adequate quality of life, especially in the Dublin area. The same goes for postdoctoral salaries, which make it difficult to recruit postdocs from elsewhere in Europe.

Another crucial difficulty is the complete lack of funding for Master’s degrees, for many an essential bridge from undergraduate to research degrees. Many of our best graduates leave for European countries where a Master’s degree is free (and may even attract a stipend) and it is then difficult to entice them back.

There’s no question that the current lack of opportunity, low salaries, high living costs and the availability of far better opportunities elsewhere is leading to a net exodus of young research talent from Ireland. Whether any of this will change with Impact 2030 remains to be seen, but at least it doesn’t propose an Irish version of the dreaded Research Excellence Framework!

Basic Research in Ireland

Posted in Science Politics, The Universe and Stuff on March 21, 2021 by telescoper


I realised today that I hadn’t yet posted a reaction to the announcement earlier this month by Science Foundation Ireland (SFI) of a new five-year strategic plan. Although much of the document Shaping Our Future is fairly bland – as strategic plans usually are – there are some very welcome things in it.

Currently Ireland spends just 1.1% of its GDP on scientific research and development and SFI currently has a heavy focus on applied research (i.e. research aligned with industry that can be exploited for short-term commercial gain). This has made life difficult for basic or fundamental science and has driven many researchers in such areas abroad, to the detriment of Ireland’s standing in the international community.

The new strategy, which will cover the period from now to 2025, plans for 15% annual rises that will boost the agency’s grant spending — the greater part of the SFI budget — from €200 million in 2020 to €376 million by 2025. Much of this is focused in a top-down manner on specific programmes and research centres, but there is at least an acknowledgement of the need to support basic research, including an allocation of €11 million in 2021 for early career researchers.

The overall aim is to increase the overall R&D spend from 1.1% of gross domestic product, well below the European average of 2.2%, to 2.5% by 2025.

One of the jobs I had to do last week was to write the Annual Research Report for the Department of Theoretical Physics at Maynooth University. I am very pleased that despite the Covid-19 pandemic, over the last year we managed to score some notable successes in securing new grant awards (amounting to €1.3M altogether) as well as doubling the number of refereed publications since the previous year. This is of course under the old SFI regime. Hopefully in the next few years covered by the new SFI strategic plan we’ll be able to build on that growth still further, especially in areas related to quantum computing and quantum technology generally.

Anyway, it seems that SFI listened to at least some of the submissions made to the consultation exercise I mentioned a few months ago.