The Return of the Inductive Detective
A few days ago an article appeared on the BBC website that discussed the enduring appeal of Sherlock Holmes and related this to the processes involved in solving puzzles. That piece makes a number of points I’ve made before, so I thought I’d update and recycle my previous post on that theme. The main reason for doing so is that it gives me yet another chance to pay homage to the brilliant Jeremy Brett who, in my opinion, is unsurpassed in the role of Sherlock Holmes. It also allows me to return to a philosophical theme I visited earlier this week.
One of the things that fascinates me about detective stories (of which I am an avid reader) is how often they use the word “deduction” to describe the logical methods involved in solving a crime. As a matter of fact, what Holmes generally uses is not really deduction at all, but inference (a process which is predominantly inductive).
In deductive reasoning, one tries to tease out the logical consequences of a premise; the resulting conclusions are, generally speaking, more specific than the premise. “If these are the general rules, what are the consequences for this particular situation?” is the kind of question one can answer using deduction.
The kind of reasoning Holmes employs, however, is essentially the opposite of this. The question being answered is of the form: “From a particular set of observations, what can we infer about the more general circumstances relating to them?”.
And for a dramatic illustration of the process of inference, you can see it acted out by the great Jeremy Brett in the first four minutes or so of this clip from the classic Granada TV adaptation of The Hound of the Baskervilles:
I think it’s pretty clear that what’s going on here is a process of inference (i.e. inductive rather than deductive reasoning). It’s also pretty clear, at least to me, that Jeremy Brett’s acting in that scene is utterly superb.
I’m probably labouring the distinction between induction and deduction, but the main purpose of doing so is to emphasise that a great deal of science is fundamentally inferential and, as a consequence, it entails dealing with inferences (or guesses or conjectures) that are inherently uncertain in how well they apply to real facts. Dealing with these uncertain aspects requires a more general kind of logic than the simple Boolean form employed in deductive reasoning. This side of the scientific method is sadly neglected in most approaches to science education.
In physics, the attitude is usually to establish the rules (“the laws of physics”) as axioms (though perhaps giving some experimental justification). Students are then taught to solve problems which generally involve working out particular consequences of these laws. This is all deductive. I’ve got nothing against this: it is what a great deal of theoretical research in physics is actually like, and it forms an essential part of the training of a physicist.
However, one of the aims of physics – especially fundamental physics – is to try to establish what the laws of nature actually are from observations of particular outcomes. It would be simplistic to say that this was entirely inductive in character. Sometimes deduction plays an important role in scientific discoveries. For example, Albert Einstein deduced his Special Theory of Relativity from a postulate that the speed of light was constant for all observers in uniform relative motion. However, the motivation for this entire chain of reasoning arose from previous studies of electromagnetism which involved a complicated interplay between experiment and theory that eventually led to Maxwell’s equations. Deduction and induction are both involved at some level in a kind of dialectical relationship.
The synthesis of the two approaches requires an evaluation of the evidence the data provides concerning the different theories. This evidence is rarely conclusive, so a wider range of logical possibilities than “true” or “false” needs to be accommodated. Fortunately, there is a quantitative and logically rigorous way of doing this. It is called Bayesian probability. In this way of reasoning, the probability (a number between 0 and 1 attached to a hypothesis, model, or anything that can be described as a logical proposition of some sort) represents the extent to which a given set of data supports the given hypothesis. The calculus of probabilities only reduces to Boolean algebra when the probabilities of all hypotheses involved are either unity (certainly true) or zero (certainly false). In between “true” and “false” there are varying degrees of “uncertain” represented by a number between 0 and 1, i.e. the probability.
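To make this a little more concrete, here is the standard textbook statement of Bayes’ theorem (nothing beyond the usual form is assumed here): for a hypothesis $H$ and data $D$,

$$P(H|D) = \frac{P(D|H)\,P(H)}{P(D)},$$

so the posterior probability $P(H|D)$ of the hypothesis given the data is just the prior $P(H)$ reweighted by the likelihood $P(D|H)$. Notice that if the prior is exactly 0 or exactly 1, no amount of data can move the posterior away from 0 or 1, which is precisely the sense in which the calculus of probabilities collapses to Boolean logic at the extremes.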
Overlooking the importance of inductive reasoning has led to numerous pathological developments that have hindered the growth of science. One example is the widespread and remarkably naive devotion that many scientists have towards the philosophy of the anti-inductivist Karl Popper; his doctrine of falsifiability has led to an unhealthy neglect of an essential fact of probabilistic reasoning, namely that data can make theories more probable. More generally, the rise of the empiricist philosophical tradition that stems from David Hume (another anti-inductivist) spawned the frequentist conception of probability, with its regrettable legacy of confusion and irrationality.
In fact Sherlock Holmes himself explicitly recognizes the importance of inference and rejects the one-sided doctrine of falsification. Here he is in The Adventure of the Cardboard Box (the emphasis is mine):
Let me run over the principal steps. We approached the case, you remember, with an absolutely blank mind, which is always an advantage. We had formed no theories. We were simply there to observe and to draw inferences from our observations. What did we see first? A very placid and respectable lady, who seemed quite innocent of any secret, and a portrait which showed me that she had two younger sisters. It instantly flashed across my mind that the box might have been meant for one of these. I set the idea aside as one which could be disproved or confirmed at our leisure.
My own field of cosmology provides the largest-scale illustration of this process in action. Theorists make postulates about the contents of the Universe and the laws that describe it and try to calculate what measurable consequences their ideas might have. Observers make measurements as best they can, but these are inevitably restricted in number and accuracy by technical considerations. Over the years, theoretical cosmologists deductively explored the possible ways Einstein’s General Theory of Relativity could be applied to the cosmos at large. Eventually a family of theoretical models was constructed, each of which could, in principle, describe a universe with the same basic properties as ours. But determining which, if any, of these models applied to the real thing required more detailed data. For example, observations of the properties of individual galaxies led to the inferred presence of cosmologically important quantities of dark matter. Inference also played a key role in establishing the existence of dark energy as a major part of the overall energy budget of the Universe. The result is that we have now arrived at a standard model of cosmology which accounts pretty well for most relevant data.
Nothing is certain, of course, and this model may well turn out to be flawed in important ways. All the best detective stories have twists in which the favoured theory turns out to be wrong. But although the puzzle isn’t exactly solved, we’ve got good reasons for thinking we’re nearer to at least some of the answers than we were 20 years ago.
I think Sherlock Holmes would have approved.
August 23, 2012 at 2:58 pm
I’m afraid this is your most interesting blog entry to date. Probably.
August 24, 2012 at 6:23 pm
Do you know the books of Sarah Caudwell? I recommend them to everyone I know who likes detective stories. They’re detective stories, but they’re also extremely funny comic novels, with characters who speak in an arch style more or less like descendants of Wodehouse or Wilde characters.
The narrator is an Oxford professor named Hillary Tamar, who has an odd characteristic that I won’t reveal because it’s more fun if you discover it for yourself.
There are only four of them, unfortunately, starting with Thus Was Adonis Murdered.
August 24, 2012 at 9:54 pm
Great post. My supervisor at Trinity College Dublin always encouraged us postgrads to see ourselves as detectives and I think it was good training.
As regards the exact type of reasoning, I always liked the term ‘inference to the best explanation’; it conveys the common-sense aspect of scientific reasoning very well.
It’s funny that people who have never been in the police force don’t write long books on the ‘philosophy of crime detection’, yet lots of philosophers who have never been in a lab write ‘definitive’ tomes on the scientific method!
August 25, 2012 at 11:28 pm
Not sure it’s fair to call Hume an “anti-inductivist”. To me, Hume’s writings on the problem of induction (as it is now known) and probability can be seen as consistent with and in fact motivation for Bayesian reasoning. E.g. (from the Enquiry)
“There is certainly a probability, which arises from a superiority of chances on any side; and according as this superiority encreases, and surpasses the opposite chances, the probability receives a proportionable encrease, and begets still a higher degree of belief or assent to that side, in which we discover the superiority… . The case is the same with the probability of causes, as with that of chance…”
September 16, 2020 at 7:27 pm
[…] Another good book about Bayesian probability is From Cosmos to Chaos: The Science of Unpredictability, by Peter Coles. Coles assumes a little more comfort with mathematical notation than Silver, but the actual arguments do not require more than algebra. While discussing the history of probability theory from its roots in gambling, he concentrates on physics and astronomy, which also contributed significantly to the development of statistics. He is a strong advocate of Bayesian probability and suggests the Bayesian view avoids some nasty issues in the interpretation of statistical mechanics and quantum mechanics, notably that in the latter subject there is no reason for the Many Worlds Interpretation. Incidentally, he has also argued that the conventional interpretation of Sherlock Holmes is wrong. See The Return of the Inductive Detective. […]