Archive for philosophy

Bogus Scopus

Posted in Open Access on June 17, 2024 by telescoper

Just to show that I’m not alone in having severe doubts about the reliability and integrity of Scopus, here is an article from Retraction Watch that points out that three of the top ten philosophy journals (according to that database) are fake. Among the facts that could easily have been checked by a competent agency is this:

The same editorial board serves for three journals, with 10 members who are dead. 

The article concludes:

Rankings based on Scopus frequently serve universities and funding bodies as indicators of the quality of research, including in philosophy. They play a crucial role in decisions regarding academic awards, hiring, and promotion, and thus may influence the publication strategies of researchers… Our findings show that research institutions should refrain from the automatic use of such rankings. 

Quite. Any institute that has signed up to the San Francisco Declaration on Research Assessment should not be basing any decisions on Scopus anyway, but I don’t think that goes far enough. Scopus is a corrupting influence. It is high time for universities and other agencies to stop paying their subscriptions and ditch it entirely.

Cosmology Talks – To Infinity and Beyond (Probably)

Posted in mathematics, The Universe and Stuff on March 20, 2024 by telescoper

Here’s an interestingly different talk in the series of Cosmology Talks curated by Shaun Hotchkiss. The speaker, Sylvia Wenmackers, is a philosopher of science. According to the blurb on YouTube:

Her focus is probability and she has worked on a few theories that aim to extend and modify the standard axioms of probability in order to tackle paradoxes related to infinite spaces. In particular there is a paradox of the “infinite fair lottery” where within standard probability it seems impossible to write down a “fair” probability function on the integers. If you give the integers any non-zero probability, the total probability of all integers is unbounded, so the function is not normalisable. If you give the integers zero probability, the total probability of all integers is also zero. No other option seems viable for a fair distribution. This paradox arises in a number of places within cosmology, especially in the context of eternal inflation and a possible multiverse of big bangs bubbling off. If every bubble is to be treated fairly, and there will ultimately be an unbounded number of them, how do we assign probability? The proposed solutions involve hyper-real numbers, such as infinitesimals and infinities with different relative sizes, (reflecting how quickly things converge or diverge respectively). The multiverse has other problems, and other areas of cosmology where this issue arises also have their own problems (e.g. the initial conditions of inflation); however this could very well be part of the way towards fixing the cosmological multiverse.
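
To spell out the lottery paradox in symbols (my own paraphrase of the problem described in the blurb, not anything taken from the talk itself): a “fair” lottery on the positive integers would need a constant probability for every ticket,

$$P(n) = c \quad \text{for all } n \in \mathbb{N}, \qquad \sum_{n=1}^{\infty} P(n) \;=\; \begin{cases} \infty, & c > 0,\\ 0, & c = 0, \end{cases}$$

so no choice of $c$ can make the total come out to 1. Countable additivity – one of Kolmogorov’s axioms – simply leaves no room for a fair distribution on the integers, which is the gap the hyperreal-valued probabilities are intended to fill.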

The paper referred to in the presentation can be found here. There is a lot to digest in this thought-provoking talk, from the starting point on Kolmogorov’s axioms to the application to the multiverse, but this video gives me an excuse to repeat my thoughts on infinities in cosmology.

Most of us – whether scientists or not – have an uncomfortable time coping with the concept of infinity. Physicists have had a particularly difficult relationship with the notion of boundlessness, as various kinds of pesky infinities keep cropping up in calculations. In most cases this is symptomatic of deficiencies in the theoretical foundations of the subject. Think of the ‘ultraviolet catastrophe’ of classical statistical mechanics, in which the electromagnetic radiation produced by a black body at a finite temperature is calculated to be infinitely intense at infinitely short wavelengths; this signalled the failure of classical statistical mechanics and ushered in the era of quantum mechanics about a hundred years ago. Quantum field theories have other forms of pathological behaviour, with mathematical components of the theory tending to run out of control to infinity unless they are healed using the technique of renormalization. The general theory of relativity predicts that singularities, in which physical properties become infinite, occur in the centre of black holes and in the Big Bang that kicked our Universe into existence. But even these are regarded as indications that we are missing a piece of the puzzle, rather than implying that somehow infinity is a part of nature itself.
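
To recall the first of these examples in formulae: the classical (Rayleigh–Jeans) spectrum of black-body radiation is

$$u(\lambda, T)\,\mathrm{d}\lambda = \frac{8\pi k_{\rm B} T}{\lambda^{4}}\,\mathrm{d}\lambda,$$

which grows without limit as $\lambda \rightarrow 0$, so the total energy density $\int_0^\infty u(\lambda, T)\,\mathrm{d}\lambda$ is formally infinite; it took Planck’s quantum hypothesis to tame the divergence.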

The exception to this rule is the field of cosmology. Somehow it seems natural at least to consider the possibility that our cosmos might be infinite, either in extent or duration, or both, or perhaps even be a multiverse comprising an infinite collection of sub-universes. If the Universe is defined as everything that exists, why should it necessarily be finite? Why should there be some underlying principle that restricts it to a size our human brains can cope with?

On the other hand, there are cosmologists who won’t allow infinity into their view of the Universe. A prominent example is George Ellis, a strong critic of the multiverse idea in particular, who frequently quotes David Hilbert:

The final result then is: nowhere is the infinite realized; it is neither present in nature nor admissible as a foundation in our rational thinking—a remarkable harmony between being and thought

But to every Hilbert there’s an equal and opposite Leibniz:

I am so in favor of the actual infinite that instead of admitting that Nature abhors it, as is commonly said, I hold that Nature makes frequent use of it everywhere, in order to show more effectively the perfections of its Author.

You see that it’s an argument with quite a long pedigree!

Many years ago I attended a lecture by Alex Vilenkin, entitled The Principle of Mediocrity. This was a talk based on some ideas from his book Many Worlds in One: The Search for Other Universes, in which he discusses some of the consequences of the so-called eternal inflation scenario, which leads to a variation of the multiverse idea in which the universe comprises an infinite collection of causally-disconnected “bubbles” with different laws of low-energy physics applying in each. Indeed, in Vilenkin’s vision, all possible configurations of all possible things are realised somewhere in this ensemble of mini-universes.

One of the features of this scenario is that it brings the anthropic principle into play as a potential “explanation” for the apparent fine-tuning of our Universe that enables life to be sustained within it. We can only live in a domain wherein the laws of physics are compatible with life, so it should be no surprise that’s what we find. There is an infinity of dead universes, but we don’t live there.

I’m not going to go on about the anthropic principle here, although it’s a subject that’s quite fun to write about or, better still, give a talk about, especially if you enjoy winding people up! What I did want to mention, though, is that Vilenkin correctly pointed out that three ingredients are needed to make this work:

  1. An infinite ensemble of realizations
  2. A discretizer
  3. A randomizer

Item 2 involves some sort of principle that ensures that the number of possible states of the system we’re talking about is not infinite. A very simple example from quantum physics might be the two spin states of an electron, up (↑) or down (↓). No “in-between” states are allowed, according to our tried-and-tested theories of quantum physics, so the state space is discrete. In the more general context required for cosmology, the states are the allowed “laws of physics” (i.e. possible false vacuum configurations). The space of possible states is very much larger here, of course, and the theory that makes it discrete much less secure. In string theory, the number of false vacua is estimated at 10^500. That’s certainly a very big number, but it’s not infinite so it will do the job needed.

Item 3 requires a process that realizes every possible configuration across the ensemble in a “random” fashion. The word “random” is a bit problematic for me because I don’t really know what it’s supposed to mean. It’s a word that far too many scientists are content to hide behind, in my opinion. In this context, however, “random” really means that the assigning of states to elements in the ensemble must be ergodic, meaning that it must visit the entire state space with non-zero probability. This is the kind of process that’s needed if an infinite collection of monkeys is indeed to type the (large but finite) complete works of Shakespeare. It’s not enough that there be an infinite number of monkeys and that the works of Shakespeare be finite. The process of typing must also be ergodic.
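
If it helps to see the role of ergodicity concretely, here is a toy sketch in Python (entirely my own illustration: the target string is a hypothetical stand-in for the complete works, and the non-ergodic monkey is modelled, crudely, as one that always strikes each key twice in a row):

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "aba"  # hypothetical stand-in for the (finite) complete works

def typed_text(n_chars: int, step: int, rng: random.Random) -> str:
    """Simulate a monkey producing roughly n_chars characters of typing.

    step=1: each keystroke is independent and uniform (an ergodic process).
    step=2: the monkey always strikes the same key twice in a row, so every
            run of identical characters has even length and TARGET, which
            needs a lone 'b' between two 'a's, can never appear at all.
    """
    keys = rng.choices(ALPHABET, k=max(1, n_chars // step))
    return "".join(key * step for key in keys)

rng = random.Random(1)
trials, n_chars = 300, 20_000
for step in (1, 2):
    hits = sum(TARGET in typed_text(n_chars, step, rng) for _ in range(trials))
    print(f"step={step}: fraction of runs containing {TARGET!r} = {hits / trials:.2f}")
```

However long the doubled-key monkey types, the target never appears, because the typing process never visits that part of the state space: an infinite supply of monkeys and a finite target are not, by themselves, enough.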

Now it’s by no means obvious that monkeys would type ergodically. If, for example, they always hit two adjoining keys at the same time then the process would not be ergodic. Likewise it is by no means clear to me that the process of realizing the ensemble is ergodic. In fact I’m not even sure that there’s any process at all that “realizes” the string landscape. There’s a long and dangerous road from the (hypothetical) ensembles that exist even in standard quantum field theory to an actually existing “random” collection of observed things…

More generally, the mere fact that a mathematical solution of an equation can be derived does not mean that that equation describes anything that actually exists in nature. In this respect I agree with Alfred North Whitehead:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

It’s a quote I think some string theorists might benefit from reading!

Items 1, 2 and 3 are all needed to ensure that each particular configuration of the system is actually realized in nature. If we had an infinite number of realizations but either an infinite number of possible configurations or a non-ergodic selection mechanism, then there’s no guarantee that each possibility would actually happen. The success of this explanation consequently rests on quite stringent assumptions.

I’m a sceptic about this whole scheme for many reasons. First, I’m uncomfortable with infinity – that’s what you get for working with George Ellis, I guess. Second, and more importantly, I don’t understand string theory and am in any case unsure of the ontological status of the string landscape. Finally, although a large number of prominent cosmologists have waved their hands with commendable vigour, I have never seen anything even approaching a rigorous proof that eternal inflation does lead to a realized infinity of false vacua. If such a thing exists, I’d really like to hear about it!

Irrationalism and Deductivism in Science

Posted in Bad Statistics, The Universe and Stuff on March 11, 2024 by telescoper

I thought I would use today’s post to share the above reading list, which was posted on the wall at the meeting I was at this weekend; it was only two days long and has now finished. Seeing the first book on the list, however, it seems a good idea to follow this up with a brief discussion – largely inspired by David Stove’s book – of some of the philosophical issues raised at the workshop.

It is ironic that the pioneers of probability theory, principally Laplace, unquestionably adopted a Bayesian rather than frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the frequentist-inspired techniques that modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem has argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making this comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.
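
In symbols, if $\theta$ denotes the parameters of interest and $\phi$ the auxiliary quantities, the latter are simply integrated out of the joint posterior:

$$p(\theta \mid D) = \int p(\theta, \phi \mid D)\,\mathrm{d}\phi,$$

so the auxiliary assumptions are carried along with their own uncertainties rather than preventing theory and experiment from ever meeting.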

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed to, on the one hand, accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different, and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so.
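
To see why that claim sits so oddly with probability theory, recall Bayes’ theorem,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

which says that whenever the evidence is more probable given the hypothesis than it is overall, i.e. $P(E \mid H) > P(E)$, the theory does become more probable: $P(H \mid E) > P(H)$. The only escape is a prior $P(H)$ of zero, which is the loophole discussed in the next paragraph.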

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
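
A standard textbook example (mine, not Popper’s) shows how harmless the loophole can be: a flat, improper prior on a location parameter $\mu$, combined with a single Gaussian measurement $x$ of known variance $\sigma^2$, gives

$$\pi(\mu) \propto 1, \qquad p(\mu \mid x) \propto \exp\!\left[-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right],$$

a perfectly normalizable Gaussian posterior for $\mu$ even though the prior was not normalizable.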

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Kuhn is undoubtedly a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. But one can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a final theory. But although the game might have no end, at least we know the rules….

Is it a truth universally acknowledged?

Posted in Literature on January 15, 2024 by telescoper

For reasons that may or may not be revealed shortly, I am currently re-reading the novel Pride and Prejudice by Jane Austen:

My old copy of Pride and Prejudice, dated 1986.

Among many other things, this has one of the most famously ironic opening lines in all English literature:

It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.

I recently came across this discussion of this sentence by the philosopher Ludwig Wittgenstein, which I thought would be amusing to share:

Let us ask what it is when we say “It is a truth universally acknowledged” that something is the case. Isn’t this a queer thing to say? How can we possibly understand it? At first sight it may appear that “it” is simply the something that is the case (ie that a man possessed of a certain degree of wealth will always feel the lack, or perhaps, without feeling it, be in need, of a wife). This “it”, however, can be no more than a pronominal anterior reference to the “truth” that is being claimed, without as yet there being any evidence for it, even though it is later stated to be acknowledged as a truth by everyone. In such a case it seems to us that the truth has been claimed a priori, since nothing can be acknowledged until it is proposed, although once proposed, such a supposed truth may be further tested through opinion and behaviour. Consider the much simpler proposition: “A man in possession of a good fortune must be in want of a wife”. We might reply “How do you know?”, a response that immediately raises the idea of possible exceptions to such a generalisation, such as (among other more complex forms of exception) that he may have a wife already, or may be a secret lover of men. To claim universal acknowledgement of a truth is to claim that a probable “truth” is undeniably true, which can be no more than a specious tautology. Moreover, as we have seen, the “it” with which we began has already laid claim to the existence of something (a kind of truth, as it soon turns out) that can only be assumed through this insistent and superfluous pronoun, which is a form of private acknowledgement by the speaker alone, and is by no means obviously universal. That this “it” is true, and that truth is also true, is what is being claimed here, and the double tautology becomes a distinct puzzle. To be induced to assent to an “it”, when there may be ample reason to doubt its very relation to the proposition which follows, is to be invited not to understand it.

I hope this clarifies the situation.

What’s a good Cosmological Model?

Posted in Books, The Universe and Stuff on April 2, 2023 by telescoper

Some years ago – actually about 30! – I wrote a book with George Ellis about the density of matter in the Universe. Many of the details in that book are of course out of date now but the main conclusions still stand. We started the book with a general discussion of cosmological models which I think also remains relevant today so I thought I’d do a quick recap here.

Anyone who takes even a passing interest in cosmology will know that it’s a field that’s not short of controversy, sometimes reinforced by a considerable level of dogmatism in opposing camps. In understanding why this is the case, it is perhaps helpful to note that much of the problem stems from philosophical disagreements about which are the appropriate criteria for choosing a “good” (or at least acceptable) theory of cosmology. Different approaches to cosmology develop theories aimed at satisfying different criteria, and preferences for the different approaches to a large extent reflect these different initial goals. It would help to clarify this situation if one could make explicit the issues relating to choices of this kind, and separate them from the more ‘physical’ issues that concern the interpretation of data.

The following philosophical diversion was intended to initiate a debate within the cosmological community. Some cosmologists in effect claim that there is no philosophical content in their work and that philosophy is an irrelevant and unnecessary distraction from their work as scientists. I would contend that they are, whether they like it or not, making philosophical (and, in many cases, metaphysical) assumptions, and it is better to have these out in the open than hidden.

To provide a starting point for discussion, consider the following criteria, which might be applied in the wider context of scientific theories in general, encapsulating the essentials of the issue:

One can imagine a kind of rating system which judges cosmological models against each of these criteria. The point is that cosmologists from different backgrounds implicitly assign a different weighting to each of them, and therefore end up trying to achieve different goals to others. There is a possibility of both positive and negative ratings in each of these areas.

Note that such categories as “importance”, “intrinsic interest” and “plausibility” are not included. Insofar as they have any meaning apart from personal prejudice, they should be reflected in the categories above, and could perhaps be defined as aggregate estimates following on from the proposed categories.

Category 1(c) (“beauty”) is difficult to define objectively but nevertheless is quite widely used, and seems independent of the others; it is the one that is most problematic. Compare, for example, the apparently “beautiful” circular orbit model of the Solar System with the apparently ugly elliptic orbits found by Kepler. Only after Newton introduced his theory of gravitation did it become clear that beauty in this situation resided in the inverse-square law itself, rather than in the outcomes of that law. Some might therefore wish to omit this category.

One might think that category 1(a) (“logical consistency”) would be mandatory, but this is not so, basically because we do not yet have a consistent Theory of Everything.

Again one might think that negative scores in 4(b) (“confirmation”) would disqualify a theory but, again, that is not necessarily so, because measurement processes may involve systematic errors and observational results are all to some extent uncertain due to statistical limitations. Confirmation can therefore be queried. A theory might also be testable [4(a)] in principle, but perhaps not in practice at a given time because the technology may not exist to perform the necessary experiment or observation.

The idea is that even when there is disagreement about the relative merits of different models or theories, there is a possibility of agreement on the degree to which the different approaches could and do meet these various criteria. Thus one can explore the degree to which each of these criteria is met by a particular cosmological model or approach to cosmology. We suggest that one can distinguish five broadly different approaches to cosmology, roughly corresponding to major developments at different historical epochs:

These approaches are not completely independent of each other, but any particular model will tend to focus more on one or other aspect and may even completely leave out others. Comparing them with the criteria above, one ends up with a star rating system something like that shown in the Table, in which George and I applied a fairly arbitrary scale to the assignment of the ratings!

To a large extent you can take your pick as to the weights you assign to each of these criteria, but my underlying view is that without a solid basis of experimental support [4(b)], or at least the possibility of confirmation [4(a)], a proposed theory is not a ‘good’ one from a scientific point of view. If one can say what one likes and cannot be proved wrong, one is free from the normal constraints of scientific discipline. This contrasts with a major thrust in modern cosmological thinking which emphasizes criteria [2] and [3] at the expense of [4].

Matter and forces in quantum field theory – an attempt at a philosophical elucidation

Posted in Maynooth, The Universe and Stuff on March 16, 2021 by telescoper

I thought the following might be of general interest to readers of this blog. It’s a translation into English of an MSc thesis written by a certain Jon-Ivar Skullerud of the Department of Theoretical Physics at Maynooth University. I hasten to add that Dr Skullerud hasn’t just finished his MSc. It just appears that it has taken about 30 years to translate it from Norwegian into English!

Anyway, the English version is now available on the arXiv here. There isn’t really an abstract as such but the description on arXiv says:

This is a translation into English of my Masters thesis (hovedoppgave) from 1991. The main topic of the thesis is the relation between fundamental physics and philosophy, and a discussion of several possible ontologies of quantum field theory.

I note from here that hovedoppgave translates literally into English as “main task”.

I haven’t read this thesis from cover to cover yet – partly because it’s in digital form and doesn’t have any covers on it and partly because it’s 134 pages long – but I’ve skimmed a few bits and it looks interesting.


Does Physics need Philosophy (and vice versa)?

Posted in mathematics, The Universe and Stuff on June 1, 2018 by telescoper

There’s a new paper on the arXiv by Carlo Rovelli entitled Physics Needs Philosophy. Philosophy Needs Physics. Here is the abstract:

Contrary to claims about the irrelevance of philosophy for science, I argue that philosophy has had, and still has, far more influence on physics than is commonly assumed. I maintain that the current anti-philosophical ideology has had damaging effects on the fertility of science. I also suggest that recent important empirical results, such as the detection of the Higgs particle and gravitational waves, and the failure to detect supersymmetry where many expected to find it, question the validity of certain philosophical assumptions common among theoretical physicists, inviting us to engage in a clearer philosophical reflection on scientific method.

Read and discuss.

Cosmology and the Constants of Nature

Posted in The Universe and Stuff on January 20, 2014 by telescoper

Just a brief post to advertise a very interesting meeting coming up in Cambridge:

–o–

Cosmology and the Constants of Nature

DAMTP, University of Cambridge

Monday, 17 March 2014 at 09:00 – Wednesday, 19 March 2014 at 15:00 (GMT)

Cambridge, United Kingdom

The Constants of Nature are quantities whose numerical values we know with the greatest experimental accuracy – but about the rationale for those values we have the greatest ignorance. We might also ask if they are indeed constant in space and time, and investigate whether their values arise at random or are uniquely determined by some deep theory.

This mini-series of talks is part of the joint Oxford-Cambridge programme on the Philosophy of Cosmology which aims to introduce philosophers of physics to fundamental problems in cosmology and associated areas of high-energy physics.

The talks are aimed at philosophers of physics but should also be of interest to a wide range of cosmologists. Speakers will introduce the physical constants that define the standard model of particle physics and cosmology together with the data that determine them, describe observational programmes that test the constancy of traditional ‘constants’, including the cosmological constant, and discuss how self-consistent theories of varying constants can be formulated.

Speakers:

John Barrow, University of Cambridge

John Ellis, King’s College London

Pedro Ferreira, University of Oxford

Joao Magueijo, Imperial College, London

Thanu Padmanabhan, IUCAA, Pune

Martin Rees, University of Cambridge

John Webb, University of New South Wales, Sydney

Registration is free and includes morning coffee and lunch. Participants are requested to register at the conference website where the detailed programme of talks can be found:

http://www.eventbrite.co.uk/e/cosmology-and-the-constants-of-nature-registration-9356261831

For enquiries about this event please contact Margaret Bull at mmp@maths.cam.ac.uk

Kuhn the Irrationalist

Posted in Bad Statistics, The Universe and Stuff on August 19, 2012 by telescoper

There’s an article in today’s Observer marking the 50th anniversary of the publication of Thomas Kuhn’s book The Structure of Scientific Revolutions.  John Naughton, who wrote the piece, claims that this book “changed the way we look at science”. I don’t agree with this view at all, actually. There’s little in Kuhn’s book that isn’t implicit in the writings of Karl Popper and little in Popper’s work that isn’t implicit in the work of a far more important figure in the development of the philosophy of science, David Hume. The key point about all these authors is that they failed to understand the central role played by probability and inductive logic in scientific research. In the following I’ll try to explain how I think it all went wrong. It might help the uninitiated to read an earlier post of mine about the Bayesian interpretation of probability.

It is ironic that the pioneers of probability theory and its application to scientific research, principally Laplace, unquestionably adopted a Bayesian rather than frequentist interpretation of probability. Frequentism arose during the nineteenth century and held sway until relatively recently. I recall giving a conference talk about Bayesian reasoning only to be heckled by the audience with comments about “new-fangled, trendy Bayesian methods”. Nothing could have been less apt. Probability theory pre-dates the rise of sampling theory and all the other frequentist-inspired techniques that many modern-day statisticians like to employ.

Most disturbing of all is the influence that frequentist and other non-Bayesian views of probability have had upon the development of a philosophy of science, which I believe has a strong element of inverse reasoning or inductivism in it. The argument about whether there is a role for this type of thought in science goes back at least as far as Roger Bacon who lived in the 13th Century. Much later the brilliant Scottish empiricist philosopher and enlightenment figure David Hume argued strongly against induction. Most modern anti-inductivists can be traced back to this source. Pierre Duhem has argued that theory and experiment never meet face-to-face because in reality there are hosts of auxiliary assumptions involved in making this comparison. This is nowadays called the Quine-Duhem thesis.

Actually, for a Bayesian this doesn’t pose a logical difficulty at all. All one has to do is set up prior probability distributions for the required parameters, calculate their posterior probabilities and then integrate over those that aren’t related to measurements. This is just an expanded version of the idea of marginalization, explained here.

Rudolf Carnap, a logical positivist, attempted to construct a complete theory of inductive reasoning which bears some relationship to Bayesian thought, but he failed to apply Bayes’ theorem in the correct way. Carnap distinguished between two types of probability – logical and factual. Bayesians don’t – and I don’t – think this is necessary. The Bayesian definition seems to me to be quite coherent on its own.

Other philosophers of science reject the notion that inductive reasoning has any epistemological value at all. This anti-inductivist stance, often somewhat misleadingly called deductivist (irrationalist would be a better description), is evident in the thinking of three of the most influential philosophers of science of the last century: Karl Popper, Thomas Kuhn and, most recently, Paul Feyerabend. Regardless of the ferocity of their arguments with each other, these have in common that at the core of their systems of thought lies the rejection of all forms of inductive reasoning. The line of thought that ended in this intellectual cul-de-sac began, as I stated above, with the work of the Scottish empiricist philosopher David Hume. For a thorough analysis of the anti-inductivists mentioned above and their obvious debt to Hume, see David Stove’s book Popper and After: Four Modern Irrationalists. I will just make a few inflammatory remarks here.

Karl Popper really began the modern era of science philosophy with his Logik der Forschung, which was published in 1934. There isn’t really much about (Bayesian) probability theory in this book, which is strange for a work which claims to be about the logic of science. Popper also managed to, on the one hand, accept probability theory (in its frequentist form), but on the other, to reject induction. I find it therefore very hard to make sense of his work at all. It is also clear that, at least outside Britain, Popper is not really taken seriously by many people as a philosopher. Inside Britain it is very different and I’m not at all sure I understand why. Nevertheless, in my experience, most working physicists seem to subscribe to some version of Popper’s basic philosophy.

Among the things Popper has claimed is that all observations are “theory-laden” and that “sense-data, untheoretical items of observation, simply do not exist”. I don’t think it is possible to defend this view, unless one asserts that numbers do not exist. Data are numbers. They can be incorporated in the form of propositions about parameters in any theoretical framework we like. It is of course true that the possibility space is theory-laden. It is a space of theories, after all. Theory does suggest what kinds of experiment should be done and what data is likely to be useful. But data can be used to update probabilities of anything.

Popper has also insisted that science is deductive rather than inductive. Part of this claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. He does, however, reject the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so.

Now there is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. Theories are simply more probable or less probable than the alternatives on the market. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. To a Bayesian, the right criterion is not falsifiability but testability, the ability of the theory to be rendered more or less probable using further data. Nevertheless, scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable baggage that we need to supply to make it comprehensible to us. But whatever can be tested can be scientific.

Popper’s work on the philosophical ideas that ultimately led to falsificationism began in Vienna, but the approach subsequently gained enormous popularity in western Europe. The American Thomas Kuhn later took up the anti-inductivist baton in his book The Structure of Scientific Revolutions. Initially a physicist, Kuhn undoubtedly became a first-rate historian of science and this book contains many perceptive analyses of episodes in the development of physics. His view of scientific progress is cyclic. It begins with a mass of confused observations and controversial theories, moves into a quiescent phase when one theory has triumphed over the others, and lapses into chaos again when further testing exposes anomalies in the favoured theory. Kuhn adopted the word paradigm to describe the model that rules during the middle stage.

The history of science is littered with examples of this process, which is why so many scientists find Kuhn’s account in good accord with their experience. But there is a problem when attempts are made to fuse this historical observation into a philosophy based on anti-inductivism. Kuhn claims that we “have to relinquish the notion that changes of paradigm carry scientists … closer and closer to the truth.” Einstein’s theory of relativity provides a closer fit to a wider range of observations than Newtonian mechanics, but in Kuhn’s view this success counts for nothing.

Paul Feyerabend has extended this anti-inductivist streak to its logical (though irrational) extreme. His approach has been dubbed “epistemological anarchism”, and it is clear that he believed that all theories are equally wrong. He is on record as stating that normal science is a fairytale, and that equal time and resources should be spent on “astrology, acupuncture and witchcraft”. He also categorised science alongside “religion, prostitution, and so on”. His thesis is basically that science is just one of many possible internally consistent views of the world, and that the choice between which of these views to adopt can only be made on socio-political grounds.

Feyerabend’s views could only have flourished in a society deeply disillusioned with science. Of course, many bad things have been done in science’s name, and many social institutions are deeply flawed. One can’t expect anything operated by people to run perfectly. It’s also quite reasonable to argue on ethical grounds which bits of science should be funded and which should not. But the bottom line is that science does have a firm methodological basis which distinguishes it from pseudo-science, the occult and new age silliness. Science is distinguished from other belief-systems by its rigorous application of inductive reasoning and its willingness to subject itself to experimental test. Not all science is done properly, of course, and bad science is as bad as anything.

The Bayesian interpretation of probability leads to a philosophy of science which is essentially epistemological rather than ontological. Probabilities are not “out there” in external reality, but in our minds, representing our imperfect knowledge and understanding. Scientific theories are not absolute truths. Our knowledge of reality is never certain, but we are able to reason consistently about which of our theories provides the best available description of what is known at any given time. If that description fails when more data are gathered, we move on, introducing new elements or abandoning the theory for an alternative. This process could go on forever. There may never be a “final” theory, and scientific truths are consequently far from absolute, but that doesn’t mean that there is no progress.

Research Opportunities in the Philosophy of Cosmology

Posted in The Universe and Stuff on March 16, 2012 by telescoper

I got an email this morning telling me about the following interesting opportunities for research fellowships. They are in quite an unusual area – the philosophy of cosmology – and one I’m quite interested in myself, so I thought it might achieve wider circulation if I posted the advertisement on here.

–0–

Applications are invited for two postdoctoral fellowships in the area of philosophy of cosmology, one to be held at Cambridge University and one to be held at Oxford University, starting 1 Jan 2013 to run until 31 Aug 2014. The two positions have similar job-descriptions and the deadline for applications is the same: 18 April 2012.

For more details, see here for the Cambridge fellowship and here for the Oxford fellowship.

Applicants are encouraged to apply for both positions. The Oxford group is led by Joe Silk, Simon Saunders and David Wallace, and that at Cambridge by John Barrow and Jeremy Butterfield.

These appointments are part of the initiative ‘establishing the philosophy of cosmology’, involving a consortium of universities in the UK and USA, funded by the John Templeton Foundation. Its aim is to identify, define and explore new foundational questions in cosmology. Key questions already identified concern:

  • The issue of measure, including potential uses of anthropic reasoning
  • Space-time structure, both at very large and very small scales
  • The cosmological constant problem
  • Entropy, time and complexity, in understanding the various arrows of time
  • Symmetries and invariants, and the nature of the description of the universe as a whole

Applicants with philosophical interests in cosmology outside these areas will also be considered.

For more background on the initiative, see here and the project website (still under construction).