A paper came out a few weeks ago on the arXiv that’s ruffled a few feathers here and there so I thought I would make a few inflammatory comments about it on this blog. The article concerned, by Gubitosi et al., has the abstract:

I have to be a little careful as one of the authors is a good friend of mine. Also there’s already been a critique of some of the claims in this paper here. For the record, I agree with the critique and disagree with the original paper: the claim below cannot be justified.
…we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor.
If I get a bit of time I’ll write a more technical post explaining why I think that. However, for the purposes of this post I want to take issue with a more fundamental problem I have with the philosophy of this paper, namely the way it adopts “falsifiability” as a required characteristic for a theory to be scientific. The adoption of this criterion can be traced back to the influence of Karl Popper and particularly his insistence that science is deductive rather than inductive. Part of Popper’s claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. As a non-deductivist I’ll frame my argument in the language of Bayesian (inductive) inference.
Popper rejects the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so. There is a grain of truth in this, or can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
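To make the point about improper priors concrete, here is a minimal numerical sketch (a toy example of my own, not anything taken from the paper): a flat prior on a location parameter cannot be normalized over the whole real line, yet combined with a handful of Gaussian measurements it still yields a perfectly proper posterior.

```python
import numpy as np

# Toy illustration (my own, not from the paper): a flat, improper prior on a
# location parameter mu, combined with Gaussian measurements of known noise
# sigma, still gives a normalizable posterior.

rng = np.random.default_rng(42)
mu_true, sigma = 1.3, 0.5
data = rng.normal(mu_true, sigma, size=20)   # simulated measurements

mu_grid = np.linspace(-5, 5, 2001)

# "Improper" prior: constant everywhere (cannot be normalized on the real line).
log_prior = np.zeros_like(mu_grid)

# Log-likelihood of the data for each candidate value of mu.
log_like = np.array([-0.5 * np.sum((data - mu)**2) / sigma**2 for mu in mu_grid])

# Unnormalized log-posterior; normalizing it numerically works fine,
# because the posterior (unlike the prior) is proper.
log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, mu_grid)

print("posterior mean:", np.trapz(mu_grid * post, mu_grid))
print("sample mean:   ", data.mean())
```

The posterior mean lands essentially on the sample mean, which is the point: once enough relevant data arrive, the formally zero prior normalization simply drops out.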
I believe that deductivism fails to describe how science actually works in practice and is a dangerous road to start out on. It is indeed a very short ride, philosophically speaking, from deductivism (as espoused by, e.g., David Hume) to irrationalism (as espoused by, e.g., Paul Feyerabend).
The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. The detection of primordial B-mode polarization in the cosmic microwave background reported by BICEP2 was claimed by some to be “proof” of cosmic inflation, which it would not have been even if the signal had not subsequently been shown not to be cosmological at all. What we now know to be the failure of BICEP2 to detect primordial B-mode polarization doesn’t disprove inflation either.
Theories are simply more probable or less probable than the alternatives available on the market at a given time. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. The disparaging implication that scientists live only to prove themselves wrong comes from concentrating exclusively on the possibility that a theory might be found to be less probable than a challenger. In fact, evidence neither confirms nor discounts a theory; it either makes the theory more probable (supports it) or makes it less probable (undermines it). For a theory to be scientific it must be capable of having its probability influenced in this way, i.e. amenable to being altered by incoming data (i.e. evidence). The right criterion for a scientific theory is therefore not falsifiability but testability. It follows straightforwardly from Bayes’ theorem that a testable theory will not predict all things with equal facility. Scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable penumbra that we need to supply to make it comprehensible to us. But whatever can be tested can be regarded as scientific.
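For what it’s worth, the step from Bayes’ theorem to that last statement can be written down in a couple of lines (the notation here is mine, not the paper’s):

```latex
% Notation mine: T_1, T_2 are rival theories, D the data.
\[
  \underbrace{\frac{P(T_1 \mid D)}{P(T_2 \mid D)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(D \mid T_1)}{P(D \mid T_2)}}_{K(D)\ \text{(Bayes factor)}}
  \times
  \underbrace{\frac{P(T_1)}{P(T_2)}}_{\text{prior odds}}
\]
% If K(D) took the same value for every possible outcome D, then the fact
% that both likelihoods are normalized, \sum_D P(D \mid T_1) = \sum_D P(D \mid T_2) = 1,
% would force K = 1, so no observation could ever shift the odds. A testable
% theory must therefore assign different probabilities to different outcomes:
% it cannot predict all things with equal facility.
```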
So I think the Gubitosi et al. paper starts on the wrong foot by focussing exclusively on “falsifiability”. The issue of whether a theory is testable is complicated in the context of inflation because prior probabilities for most observables are difficult to determine with any confidence: we know next to nothing about either (a) the conditions prevailing in the early Universe prior to the onset of inflation or (b) how properly to define a measure on the space of inflationary models. Even restricting consideration to the simplest models with a single scalar field, initial data are required for the scalar field (and its time derivative) and there is also a potential whose functional form is not known. It is therefore a far from trivial task to assign meaningful prior probabilities to inflationary models and thus extremely difficult to determine the relative probabilities of observables and how these probabilities may or may not be influenced by interactions with data. Moreover, the Bayesian approach involves comparing probabilities of competing theories, so we also have the issue of what to compare inflation with…
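To spell out where that prior measure enters (again in my own notation, not the paper’s): the evidence for a given inflationary model is an integral of the likelihood against the prior on its parameter space, and it is this quantity that feeds into any Bayes factor between rival models.

```latex
% Notation mine: M is a specific inflationary model with parameters \theta
% (e.g. the initial field value, its time derivative, and whatever
% parametrizes the potential V(\phi)); \pi(\theta \mid M) is the prior
% measure on that parameter space and D the data.
\[
  P(D \mid M) \;=\; \int \mathcal{L}(D \mid \theta, M)\,
                         \pi(\theta \mid M)\, \mathrm{d}\theta,
  \qquad
  B_{12} \;=\; \frac{P(D \mid M_1)}{P(D \mid M_2)}.
\]
% The evidence P(D|M), and hence the Bayes factor B_{12}, is weighted by
% \pi(\theta|M); with next to no handle on the pre-inflationary initial
% conditions or on a natural measure over potentials, that prior, and
% therefore the comparison, is poorly pinned down.
```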
The question of whether cosmic inflation (either as a general concept or in the form of a specific model) is testable or not seems to me to boil down to whether it predicts all possible values of relevant observables with equal ease. A theory might be testable in principle, but not testable at a given time if the available technology at that time is not able to make measurements that can distinguish between that theory and another. Most theories have to wait some time before experiments can be designed and built to test them. On the other hand, a theory might be untestable even in principle, if it is constructed in such a way that its probability can’t be changed at all by any amount of experimental data. As long as a theory is testable in principle, however, it has the right to be called scientific. If the currently available evidence can’t test it, we need to do better experiments. In other words, there’s a problem with the evidence, not the theory.
Gubitosi et al. are correct in identifying the important distinction between the inflationary paradigm, which encompasses a large set of specific models each formulated in a different way, and an individual member of that set. I also agree – in contrast to many of my colleagues – that it is actually difficult to argue that the inflationary paradigm is currently testable. But that doesn’t necessarily mean that it isn’t scientific. A theory doesn’t have to have been tested in order to be testable.