Archive for the ‘The Universe and Stuff’ Category

Five New Publications at the Open Journal of Astrophysics

Posted in OJAp Papers, Open Access, The Universe and Stuff on April 6, 2024 by telescoper

As promised a couple of days ago, I am taking the opportunity today to announce the batch of papers at the Open Journal of Astrophysics that was held back briefly while we updated our system. This batch includes five papers, which I now present to you here. These five take the count in Volume 7 (2024) up to 25 and the total published by OJAp up to 140. We’re publishing roughly two papers a week these days, so we expect to publish about 100 this year.

In chronological order, the five papers, with their overlays, are as follows. You can click on the images of the overlays to make them larger should you wish to do so.

This paper, by Yingtian Chen and Oleg Gnedin of the University of Michigan, is the 21st paper to be published in Volume 7 and the 136th altogether. It is a study of kinematic, chemical and age data for globular clusters from Gaia, yielding clues to how the Milky Way Galaxy assembled. Here’s a screenshot of the overlay which includes the abstract. Note the new-style DOI at the bottom left.

You can read the article on arXiv directly here. This paper has a publication date of 20th March 2024, and is in the folder marked Astrophysics of Galaxies.

The second paper is “Generation of realistic input parameters for simulating atmospheric point-spread functions at astronomical observatories” by Claire-Alice Hébert (Stanford), Joshua E. Meyers (Stanford), My H. Do (Cal. State U, Pomona), Patricia R. Burchat (Stanford) and the LSST Dark Energy Science Collaboration. It explores the use of atmospheric modelling to generate realistic estimates of the point-spread function for observational work, especially for the Vera C. Rubin Observatory. This one is in the folder marked Instrumentation and Methods for Astrophysics and was published on 4th April 2024. Here is a screen grab of the overlay which includes the abstract:


You can find the officially accepted version of the paper on the arXiv here.

The third paper to announce is “Cosmic Dragons: A Two-Component Mixture Model of COSMOS Galaxies” by William K. Black and August E. Evrard of the University of Michigan (Ann Arbor, USA). This paper was also published on 4th April 2024, is in the folder Astrophysics of Galaxies, and you can see the overlay here:


The accepted version of this paper can be found on the arXiv here.

The next paper is “High mass function ellipsoidal variables in the Gaia Focused Product Release: searching for black hole candidates in the binary zoo” by Dominick M. Rowan, Todd A. Thompson, Tharindu Jayasinghe, Christopher S. Kochanek and Krzysztof Z. Stanek of Ohio State University (USA). This paper, in the Solar and Stellar Astrophysics collection, describes a search for massive unseen stellar companions in variable star systems found in Gaia data. This one was also published on 4th April 2024.

Here is the overlay:


You can find the full text for this one on the arXiv here.

Last in this batch, but by no means least, published yesterday (5th April 2024), we have a paper “Machine Learning the Dark Matter Halo Mass of Milky Way-Like Systems” by Elaheh Hayati & Peter Behroozi (University of Arizona, USA) and Ekta Patel (University of Utah, USA). The primary classification for this one is once again Astrophysics of Galaxies and it presents a method for estimating the mass of a galaxy halo using neural networks that does not assume, for example, dynamical equilibrium:


You can click on the image of the overlay to make it larger should you wish to do so. You can find the officially accepted version of the paper on the arXiv here.

As you can see this is quite a diverse collection of papers. Given the increase in submissions in the area of galactic astrophysics we are very happy to welcome another expert in that area to our Editorial Board, in the form of Professor Walter Dehnen of the University of Heidelberg.

Cosmology Talks: Cosmological Constraints from BAO

Posted in The Universe and Stuff, YouTube on April 5, 2024 by telescoper

Here’s another video in the Cosmology Talks series curated by Shaun Hotchkiss. This one is very timely after yesterday’s announcement. Here is the description on the YouTube page:

The Dark Energy Spectroscopic Instrument (DESI) has produced cosmological constraints! And it is living up to its name. Two researchers from DESI, Seshadri Nadathur and Andreu Font-Ribera, tell us about DESI’s measurements of the Baryon Acoustic Oscillations (BAO) released today. These results use one full year of DESI data and are the first cosmological constraints from the telescope that have been released. Mostly, it is what you might expect: tighter constraints. However, in the realm of the equation of state of dark energy, they find, even with BAO alone, that there is a hint of evidence for evolving dark energy. When they combine their data with CMB and Supernovae, who both also find small hints of evolving dark energy on their own, the evidence for dark energy not being a cosmological constant jumps as high as 3.9σ with one combination of the datasets. It seems there still is “concordance cosmology”, it’s just not ΛCDM for these datasets. The fact that all three probes are tentatively favouring this is intriguing, as it makes it unlikely to be due to systematic errors in one measurement pipeline.

My own take is that the results are very interesting but I think we need to know a lot more about possible systematics before jumping to conclusions about time-varying dark energy. Am I getting conservative in my old age? These results from DESI do of course further underline the motivation for Euclid (another Stage IV survey), which may have an even better capability to identify departures from the standard model.

P.S. Here’s a nice graphic showing the cosmic web revealed by the DESI survey:

DESI Year 1 Results: Baryon Acoustic Oscillations

Posted in Barcelona, Euclid, The Universe and Stuff on April 4, 2024 by telescoper

There has been a lot of excitement around the ICCUB today – the press have been here and everything – ahead of the release of the Year 1 results from the Dark Energy Spectroscopic Instrument (DESI). The press release from the Lawrence Berkeley Laboratory in California can be found here.

The papers were just released at 5pm CEST and can be found here. The key results pertain to Baryon Acoustic Oscillations (BAOs) which can be used to track the expansion rate and geometry of the Universe. This is one of the techniques that will be used by Euclid.

There’s a lot of technical information to go through and I have to leave fairly soon. Fortunately we have a seminar tomorrow that will explain everything at a level I can understand:

I will update this post with a bit more after the talk, but for the time being I direct you to this paper (Paper VI from DESI), in which the high-level cosmological implications are discussed.

If your main interest is in the Hubble Tension then I direct you to this Figure:

Depending on the other data sets included, the value obtained is around 68.5 ± 0.7 in the usual units (km s⁻¹ Mpc⁻¹), closer to the (lower) Planck CMB value than the (higher) supernovae values but not exactly in agreement; the error bars are quite small too.

You might want to read my thoughts about distances estimated from angular diameters compared with distances measured using luminosity distances here.
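As an aside (a standard result, not anything specific to the DESI papers), the two kinds of distance mentioned there are not independent: in any metric theory in which photons are conserved, the luminosity distance and angular diameter distance are related by the Etherington distance-duality relation

$$D_L = (1+z)^2\, D_A,$$

so at a given redshift a luminosity distance is always larger than the corresponding angular diameter distance, which is worth bearing in mind when comparing BAO measurements with those from supernovae.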

If you’re wondering whether there is any evidence for departures from the standard cosmology, another pertinent comment is:

In summary, DESI data, both alone and in combination with other cosmological probes, do not show any evidence for a constant equation of state parameter different from −1 when a flat wCDM model is assumed.

DESI 2024 VI: Cosmological Constraints from the Measurements of Baryon Acoustic Oscillations

More complicated models of time-varying dark energy might work, but there’s no strong evidence from the current data.

That’s all from me for now, but feel free to comment through the box below with any hot takes!

UPDATE: As expected there has been quite a lot of press coverage about this – see the examples below – mostly concentrating on the alleged evidence for “new physics”. Personally I think the old physics is fine!

Euclid on Ice

Posted in Euclid, The Universe and Stuff on March 25, 2024 by telescoper

I thought it would be appropriate to add a little update about the European Space Agency’s Euclid mission. I’ll keep it brief here because you can read the full story on the official website here.

You may have seen in the news that the Euclid telescope has been having an issue with ice forming on surfaces in its optical systems, especially the VIS instrument. This is a common problem with telescopes in space, but the extent of it is not something that can be predicted very accurately in advance so a detailed strategy for dealing with it had to be developed on the go.

The layers of ice that form are very thin – just tens of nanometres – but that is enough to blur the images and also to reduce the throughput of the instruments. Given that the objects we want Euclid to see are faint, and that we need very sharp images, this is an issue that must be dealt with.

Soon after launch, the telescope was heated up for a while in order to evaporate as much ice as possible, but it was not known how quickly the ice would return and to what parts of the optical system. After months in the cold of space the instrument scientists now understand the behaviour of the pesky ice a lot better, and have devised a strategy for dealing with it.

The approach is fairly simple in principle: heat the affected instruments up every now and again, and then let them cool down again so they can operate; repeat as necessary as ice forms again. Although this involves an interruption to observations, it is known to work pretty well, but exactly how frequently this de-icing cycle should be implemented, and which parts of the optical system require this treatment, are questions that need to be answered through practical experimentation. The hope is that after a number of operations of this kind, the amount of ice returning each time will gradually reduce. I am not an expert in these things but I gather from colleagues that the signs are encouraging.

For more details, see here.

UPDATE: The latest news is that the de-icing procedure has worked better than expected! There’s even a video about the result of the process here:

Cosmology Talks – To Infinity and Beyond (Probably)

Posted in mathematics, The Universe and Stuff on March 20, 2024 by telescoper

Here’s an interestingly different talk in the series of Cosmology Talks curated by Shaun Hotchkiss. The speaker, Sylvia Wenmackers, is a philosopher of science. According to the blurb on YouTube:

Her focus is probability and she has worked on a few theories that aim to extend and modify the standard axioms of probability in order to tackle paradoxes related to infinite spaces. In particular there is a paradox of the “infinite fair lottery” where within standard probability it seems impossible to write down a “fair” probability function on the integers. If you give the integers any non-zero probability, the total probability of all integers is unbounded, so the function is not normalisable. If you give the integers zero probability, the total probability of all integers is also zero. No other option seems viable for a fair distribution. This paradox arises in a number of places within cosmology, especially in the context of eternal inflation and a possible multiverse of big bangs bubbling off. If every bubble is to be treated fairly, and there will ultimately be an unbounded number of them, how do we assign probability? The proposed solutions involve hyper-real numbers, such as infinitesimals and infinities with different relative sizes, (reflecting how quickly things converge or diverge respectively). The multiverse has other problems, and other areas of cosmology where this issue arises also have their own problems (e.g. the initial conditions of inflation); however this could very well be part of the way towards fixing the cosmological multiverse.
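To spell out the central paradox in symbols (my own gloss, not part of the blurb): a fair lottery on the positive integers would require a constant probability $P(n) = \varepsilon$ for every $n$, but countable additivity then forces

$$\sum_{n=1}^{\infty} P(n) = \sum_{n=1}^{\infty} \varepsilon = \begin{cases} \infty & \varepsilon > 0 \\ 0 & \varepsilon = 0, \end{cases}$$

and neither option equals the required total probability of 1. The hyper-real proposals mentioned above get around this by allowing $\varepsilon$ to be a genuine infinitesimal.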

The paper referred to in the presentation can be found here. There is a lot to digest in this thought-provoking talk, from the starting point on Kolmogorov’s axioms to the application to the multiverse, but this video gives me an excuse to repeat my thoughts on infinities in cosmology.

Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as various kinds of pesky infinities keep cropping up in calculations. In most cases this is symptomatic of deficiencies in the theoretical foundations of the subject. Think of the ‘ultraviolet catastrophe’ of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of relativity predicts that singularities, in which physical properties become infinite, occur in the centre of black holes and in the Big Bang that kicked our Universe into existence. But even these are regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.
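To put a formula to that first example: the classical Rayleigh–Jeans law gives the energy density of black-body radiation per unit wavelength as

$$u(\lambda, T) = \frac{8\pi k_B T}{\lambda^4},$$

which blows up as $\lambda \to 0$, so the total energy $\int_0^\infty u\,\mathrm{d}\lambda$ diverges; Planck’s quantum hypothesis cuts off the divergence and recovers a finite result.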

The exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some underlying principle that restricts it to a size our human brains can cope with?

On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who frequently quotes David Hilbert:

The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought

But to every Hilbert there’s an equal and opposite Leibniz:

I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more effectively the perfections of its Author.

You see that it’s an argument with quite a long pedigree!

Many years ago I attended a lecture by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in this ensemble of mini-universes.

One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained within it. We can only live in a domain wherein the laws of physics are compatible with life so it should be no surprise that’s what we find. There is an infinity of dead universes, but we don’t live there.

I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write about or, better still, give a talk about, especially if you enjoy winding people up! What I did want to mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:

  1. An infinite ensemble of realizations
  2. A discretizer
  3. A randomizer

Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two spin states of an electron, up (↑) or down (↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general context required for cosmology, the states are the allowed “laws of physics” (i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the theory that makes it discrete is much less secure. In string theory, the number of false vacua is estimated at 10^500. That’s certainly a very big number, but it’s not infinite so it will do the job needed.

Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assigning of states to elements in the ensemble must be ergodic, meaning that it must visit the entire state space with some probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the (large but finite) complete works of Shakespeare. It’s not enough that there be an infinite number of monkeys and that the works of Shakespeare be finite. The process of typing must also be ergodic.

Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…
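A toy simulation makes the point concrete (this is my own illustrative sketch, nothing to do with Vilenkin’s argument itself): one typist samples every key independently, and is ergodic on the space of three-letter strings; the other always hits two adjoining keys at once, so whole regions of the state space are unreachable no matter how long it types.

```python
import random

KEYS = "abcdefghijklmnopqrstuvwxyz"

def ergodic_typist(n):
    """Each keystroke is an independent uniform draw, so every string of
    length n has probability (1/26)**n > 0: the process is ergodic on
    the space of length-n strings."""
    return "".join(random.choice(KEYS) for _ in range(n))

def non_ergodic_typist(n):
    """Always hits two adjoining keys at once (in keyboard order), so
    many strings (e.g. any beginning with 'ca') can never be produced."""
    out = []
    while len(out) < n:
        i = random.randrange(len(KEYS) - 1)
        out.extend(KEYS[i : i + 2])
    return "".join(out[:n])

target = "cat"  # a (very) abridged work of Shakespeare
trials = 500_000
print(any(ergodic_typist(3) == target for _ in range(trials)))      # True, almost surely
print(any(non_ergodic_typist(3) == target for _ in range(trials)))  # False, always
```

The second typist produces output forever without ever typing “cat”, because ‘c’ and ‘a’ are not adjacent on this keyboard: an unbounded number of attempts is no substitute for ergodicity.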

More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree with Alfred North Whitehead:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

It’s a quote I think some string theorists might benefit from reading!

Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but with either an infinite number of possible configurations or a non-ergodic selection mechanism, then there would be no guarantee that each possibility would actually happen. The success of this explanation consequently rests on quite stringent assumptions.

I’m a sceptic about this whole scheme for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to a realized infinity of false vacua. If such a thing exists, I’d really like to hear about it!

The Vernal Equinox 2024

Posted in Barcelona, The Universe and Stuff on March 20, 2024 by telescoper

Loughcrew (County Meath, Ireland), near Newgrange, an ancient burial site and traditional place to observe the sunrise at the Equinox

Just a quick note to mention that the Vernal Equinox, or Spring Equinox (in the Northern hemisphere), took place today, Wednesday 20th March 2024, at 3.06 UTC (which was 4.06am CET, where I am, though I was sound asleep at the time). Many people in the Northern hemisphere regard the Vernal Equinox as the first day of spring; of course in the Southern hemisphere this is the Autumnal Equinox.

The date of the Vernal Equinox is often given as 21st March, but in fact it has only been on 21st March twice this century so far (2003 and 2007); it was on 20th March in 2008, has been on 20th March every spring from then until now, and will be until 2044 (when it will be on March 19th). This year the equinox happened before dawn, so sunrise this morning could be taken to be the first sunrise of spring. It felt more like summer, sipping coffee on my terrace in Barcelona:

This reminds me of a strange conversation I had on a plane recently. I was chatting to the person sitting next to me, who happened to be British. When he asked what I did for a living, I replied that I was an astrophysicist. He then complained that he preferred the old days when the Spring Equinox was on March 21st, and that now that Britain was out of the European Union he hoped it would change back…

Anyway, people sometimes ask me how one can define the ‘equinox’ so precisely, when surely it just refers to a day on which day and night are of equal length, implying that it’s a day rather than a specific time?

The answer is that the equinox is defined by a specific event, the event in question being when the plane defined by Earth’s equator passes through the centre of the Sun’s disk (or, if you prefer, when the centre of the Sun passes through the plane defined by Earth’s equator). Day and night are not necessarily exactly equal on the equinox, but they’re the closest they get. From now until the Autumnal Equinox, days in the Northern hemisphere will be longer than nights, and they’ll get longer until the Summer Solstice before beginning to shorten again.
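If you want to see that definition in action, here is a rough numerical sketch (my own, assuming you have the astropy package installed): find the moment at which the Sun’s apparent declination, i.e. its angle above or below the plane of the Earth’s equator, crosses zero.

```python
from astropy.time import Time
from astropy.coordinates import get_sun

# The equinox is the instant when the centre of the Sun lies in the
# plane of the Earth's equator, i.e. its apparent declination is zero.
# Bracket the March 2024 crossing and bisect on the sign of the declination.
lo, hi = Time("2024-03-19"), Time("2024-03-21")
for _ in range(40):               # 40 halvings of a two-day window
    mid = lo + (hi - lo) / 2
    if get_sun(mid).dec.deg < 0:  # Sun still south of the equator
        lo = mid
    else:
        hi = mid

print(hi.utc.iso)  # should land within a few minutes of 2024-03-20 03:06 UTC
```

The answer agrees with the time quoted above to within the accuracy of astropy’s built-in (approximate) solar ephemeris.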

Three New Publications at the Open Journal of Astrophysics

Posted in OJAp Papers, The Universe and Stuff on March 19, 2024 by telescoper

Now that I’m safely back in Barcelona it’s time for a roundup of the latest business at the Open Journal of Astrophysics. The latest batch of publications consists of three papers, taking the count in Volume 7 (2024) up to 20 and the total published by OJAp up to 135.

This time the papers are all related, have many authors in common, and have the same first author, Philip F. Hopkins of Caltech. In fact the second and third papers in this batch were accepted well before the first one, but it seemed to make much more sense to publish them together so I held those two back a bit and published all three on 14th March.

The three papers published, with their overlays, are as follows. You can click on the images of the overlays to make them larger should you wish to do so. You can read these publications directly on arXiv if you wish; you will find them here, here and here.

First one up is “FORGE’d in FIRE: Resolving the End of Star Formation and Structure of AGN Accretion Disks from Cosmological Initial Conditions”, in which, using a full cosmological simulation incorporating radiation and magnetohydrodynamics, the authors study the formation and structure of AGN accretion disks and their impact on star formation. This one is in the folder marked Astrophysics of Galaxies.

The authors (ten from the USA and one from Canada) are Philip F. Hopkins (Caltech), Michael Y. Grudic (Carnegie Observatories), Kung-Yi Su (Harvard), Sarah Wellons (Wesleyan University), Daniel Angles-Alcazar (University of Connecticut & Flatiron Institute), Ulrich P. Steinwandel (Flatiron Institute), David Guszejnov (University of Texas at Austin), Norman Murray (CITA, Toronto, Canada), Claude-Andre Faucher-Giguere (Northwestern University), Eliot Quataert (Princeton), and Dusan Keres (University of California, San Diego or UCSD for short).

Here is a screen grab of the overlay, which includes the abstract:


The second paper to announce is “FORGE’d in FIRE II: The Formation of Magnetically-Dominated Quasar Accretion Disks from Cosmological Initial Conditions” which is a study of the formation and properties of highly magnetized accretion disks using numerical simulations that include the effects of radiation, magnetic fields, thermochemistry, and star formation.

This one is in the folder High-Energy Astrophysical Phenomena. The authors (ten based in the USA, one in Canada, and one in New Zealand) are Philip F. Hopkins, Jonathan Squire (University of Otago, Dunedin, New Zealand), Kung-Yi Su (Harvard), Ulrich P. Steinwandel (Flatiron Institute), Kyle Kremer (Caltech), Yanlong Shi (Caltech), Michael Y. Grudic (Carnegie Observatories), Sarah Wellons (Wesleyan University), Claude-Andre Faucher-Giguere (Northwestern University), Daniel Angles-Alcazar (University of Connecticut & Flatiron Institute), Norman Murray (CITA, Toronto), and Eliot Quataert (Princeton).


The last paper of this batch, also in the folder High-Energy Astrophysical Phenomena, is entitled “An Analytic Model For Magnetically-Dominated Accretion Disks” and is closely related to the previous one; this particular paper presents an analytic similarity model for accretion disks that agrees remarkably well with the simulations in the previous one. Animations of the simulations referred to in both papers can be found here.

Here is the overlay:

The authors of this one are Philip F. Hopkins, Jonathan Squire, Eliot Quataert, Norman Murray, Kung-Yi Su, Ulrich P. Steinwandel, Kyle Kremer, Claude-Andre Faucher-Giguere, and Sarah Wellons. You can find all their affiliations above.

That’s all for now. More news in a week or so!


My First Maynooth PhD!

Posted in Maynooth, The Universe and Stuff on March 14, 2024 by telescoper

Today saw the viva voce examination of the first PhD student at Maynooth to have completed their degree under my supervision, although in this case the student started his postgraduate degree under another supervisor and I only took over responsibility when that person retired a few years ago.

Anyway, I delayed my return to Barcelona so I could be present today. It’s not normal practice for the supervisor of a PhD to be present at the examination of the candidate. The rules allow for it – usually at the request of the student – but the supervisor must remain silent unless and until invited to comment by the examiners. I think it’s a very bad idea for both student and supervisor, and the one example that I can recall of a supervisor attending the PhD examination of his student was a very uncomfortable experience. My presence today was limited to supplying a couple of anticipatory bottles of champagne and then waiting nervously for the examination to finish.

I always feel nervous when a student of mine is having their viva voce examination, probably because I’m a bit protective and such an occasion always brings back painful memories of the similar ordeal I went through thirty-odd years ago. However, this is something a PhD candidate has to go through on their own, a sort of rite of passage during which the supervisor has to stand aside and let them stand up for their own work.

The examination turned out to be quite a long one – about three and a half hours – but ended happily. Unfortunately, I had to leave the celebrations early in order to do yet another Euclid-related Zoom call but when that was over I was able to find the pub to which everyone had adjourned and had a pint there with them. I have a feeling the celebrants might make a night of it tonight, but I’m a bit too tired after recent exertions to join them.

The student’s name, by the way, is Aonghus Hunter-McCabe and the title of the thesis is Differential geometric and general relativistic techniques in non-relativistic laboratory systems. If you’re looking for a postdoc to work in related areas then Aonghus might just be the person you want!

P.S. About a decade ago I did a post on the occasion of the PhD examination of another student of mine, Ian Harrison. I found out recently that Ian now has a permanent position at Cardiff University. Congratulations to him!

New Publication at the Open Journal of Astrophysics

Posted in OJAp Papers, Open Access, The Universe and Stuff on March 12, 2024 by telescoper

It’s my last morning in Phoenix and, since I was too busy at the weekend to post the usual update from the Open Journal of Astrophysics, I will do so now, before I go to the airport for my flight home.

Looking at the workflow I see that there is a considerable backlog of papers that have been accepted but are waiting for the authors to put the final version on arXiv. As a result there is only one paper to report for last week, being the 17th paper in Volume 7 (2024) and the 132nd altogether; it was published on 6th March 2024. I expect more soon!

The title of the latest paper is “Bayesian analysis of a Unified Dark Matter model with transition: can it alleviate the H0 tension?” and it is in the folder marked Cosmology and Nongalactic Astrophysics. The article presents an investigation, using Bayesian techniques, of a specific cosmological model in which dark matter and dark energy are aspects of a single component, with particular emphasis on the Hubble tension.

The authors are seven in number: Emmanuel Frion (University of Helsinki, Finland, and Western University, Canada); David Camarena (University of New Mexico, USA); Leonardo Giani (University of Queensland, Australia); Tays Miranda (University of Helsinki and University of Jyväskylä, both in Finland); Daniele Bertacca (Università degli Studi di Padova, Italy); Valerio Marra (Universidade Federal do Espírito Santo, Brazil, and Osservatorio Astronomico di Trieste, Italy); and Oliver F. Piattella (Università degli Studi dell’Insubria, Como, Italy).

Here is the overlay of the paper containing the abstract:


You can click on the image of the overlay to make it larger should you wish to do so. You can also find the officially accepted version of the paper on the arXiv here.


Irrationalism and Deductivism in Science

Posted in Bad Statistics, The Universe and Stuff on March 11, 2024 by telescoper

I thought I would use today’s post to share the above reading list, which was posted on the wall at the meeting I was at this weekend; the meeting was only two days long and has now finished. Seeing the first book on the list, however, it seems a good idea to follow this up with a brief discussion – largely inspired by David Stove’s book – of some of the philosophical issues raised at the workshop.

It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than a frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of the philosophy of science; science, I believe, has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon, who lived in the 13th century. Much later the brilliant Scottish empiricist philosopher and Enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making the comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
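In symbols (standard notation, my own summary): if θ are the parameters of interest, α the nuisance parameters encoding the auxiliary assumptions, and d the data, then

$$p(\theta \mid d) = \int p(\theta, \alpha \mid d)\, \mathrm{d}\alpha \propto \int p(d \mid \theta, \alpha)\, p(\theta, \alpha)\, \mathrm{d}\alpha,$$

so the auxiliary assumptions are not ignored; they are assigned priors and integrated out.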

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these thinkers have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of the philosophy of science with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed, on the one hand, to accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different, and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. On this view, every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
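A textbook example of how the data can rescue an improper prior (my own choice of example): give a location parameter μ the formally non-normalizable uniform prior p(μ) ∝ 1 on the whole real line and make a single measurement x with Gaussian noise of known width σ. The posterior

$$p(\mu \mid x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(\mu - x)^2}{2\sigma^2}\right]$$

is a perfectly proper (normalized) distribution, even though the prior was not.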

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn was undoubtedly a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. But one can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules….